Nov 29 07:05:59 crc systemd[1]: Starting Kubernetes Kubelet...
Nov 29 07:05:59 crc restorecon[4681]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 29 07:05:59 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 29 07:06:00 crc restorecon[4681]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Nov 29 07:06:00 crc restorecon[4681]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc 
restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 29 07:06:00 crc restorecon[4681]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 29 07:06:00 crc restorecon[4681]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 29 07:06:00 crc restorecon[4681]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 29 07:06:00 crc 
restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 29 
07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 29 07:06:00 crc restorecon[4681]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 29 07:06:00 crc 
restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 29 07:06:00 crc restorecon[4681]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 29 07:06:00 crc restorecon[4681]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 29 07:06:00 crc restorecon[4681]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 29 07:06:00 crc 
restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 29 07:06:00 crc restorecon[4681]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00
crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Nov 29 07:06:00 crc restorecon[4681]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 29 07:06:00 crc restorecon[4681]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:06:00 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 29 07:06:00 crc restorecon[4681]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 
29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 
crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc 
restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc 
restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc 
restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc 
restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 29 07:06:00 crc restorecon[4681]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 29 07:06:00 crc 
restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 29 07:06:00 crc restorecon[4681]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 29 07:06:00 crc restorecon[4681]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 29 07:06:00 crc restorecon[4681]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Nov 29 07:06:01 crc kubenswrapper[4731]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 29 07:06:01 crc kubenswrapper[4731]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Nov 29 07:06:01 crc kubenswrapper[4731]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 29 07:06:01 crc kubenswrapper[4731]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 29 07:06:01 crc kubenswrapper[4731]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 29 07:06:01 crc kubenswrapper[4731]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.611148 4731 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.617312 4731 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.617381 4731 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.617387 4731 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.617391 4731 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.617395 4731 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.617400 4731 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.617404 4731 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.617409 4731 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.617414 4731 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 29 07:06:01 crc 
kubenswrapper[4731]: W1129 07:06:01.617420 4731 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.617425 4731 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.617430 4731 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.617437 4731 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.617441 4731 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.617444 4731 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.617448 4731 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.617452 4731 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.617456 4731 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.617459 4731 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.617472 4731 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.617477 4731 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.617482 4731 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.617486 4731 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.617492 4731 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.617495 4731 feature_gate.go:330] unrecognized feature gate: Example Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.617499 4731 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.617503 4731 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.617507 4731 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.617512 4731 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.617516 4731 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.617521 4731 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.617525 4731 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.617529 4731 feature_gate.go:330] unrecognized feature gate: SignatureStores
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.617533 4731 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.617537 4731 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.617541 4731 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.617544 4731 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.617548 4731 feature_gate.go:330] unrecognized feature gate: PinnedImages
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.617552 4731 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.617556 4731 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.617559 4731 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.617563 4731 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.617566 4731 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.617570 4731 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.617593 4731 feature_gate.go:330] unrecognized feature gate: NewOLM
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.617597 4731 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.617601 4731 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.617606 4731 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.617611 4731 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.617615 4731 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.617618 4731 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.617621 4731 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.617625 4731 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.617629 4731 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.617633 4731 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.617641 4731 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.617645 4731 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.617648 4731 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.617652 4731 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.617655 4731 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.617659 4731 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.617662 4731 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.617665 4731 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.617669 4731 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.617672 4731 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.617675 4731 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.617680 4731 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.617685 4731 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.617689 4731 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.617692 4731 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.617697 4731 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.617788 4731 flags.go:64] FLAG: --address="0.0.0.0"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.617798 4731 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.617812 4731 flags.go:64] FLAG: --anonymous-auth="true"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.617818 4731 flags.go:64] FLAG: --application-metrics-count-limit="100"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.617824 4731 flags.go:64] FLAG: --authentication-token-webhook="false"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.617829 4731 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.617836 4731 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.617843 4731 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.617848 4731 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.617852 4731 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.617858 4731 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.617863 4731 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.617867 4731 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.617872 4731 flags.go:64] FLAG: --cgroup-root=""
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.617876 4731 flags.go:64] FLAG: --cgroups-per-qos="true"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.617881 4731 flags.go:64] FLAG: --client-ca-file=""
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.617884 4731 flags.go:64] FLAG: --cloud-config=""
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.617888 4731 flags.go:64] FLAG: --cloud-provider=""
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.617892 4731 flags.go:64] FLAG: --cluster-dns="[]"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.617900 4731 flags.go:64] FLAG: --cluster-domain=""
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.617908 4731 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.617913 4731 flags.go:64] FLAG: --config-dir=""
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.617917 4731 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.617922 4731 flags.go:64] FLAG: --container-log-max-files="5"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.617928 4731 flags.go:64] FLAG: --container-log-max-size="10Mi"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.617932 4731 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.617937 4731 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.617941 4731 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.617945 4731 flags.go:64] FLAG: --contention-profiling="false"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.617949 4731 flags.go:64] FLAG: --cpu-cfs-quota="true"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.617953 4731 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.617958 4731 flags.go:64] FLAG: --cpu-manager-policy="none"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.617962 4731 flags.go:64] FLAG: --cpu-manager-policy-options=""
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.617968 4731 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.617972 4731 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.617976 4731 flags.go:64] FLAG: --enable-debugging-handlers="true"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.617980 4731 flags.go:64] FLAG: --enable-load-reader="false"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.617984 4731 flags.go:64] FLAG: --enable-server="true"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.617989 4731 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.617996 4731 flags.go:64] FLAG: --event-burst="100"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.618001 4731 flags.go:64] FLAG: --event-qps="50"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.618005 4731 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.618009 4731 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.618013 4731 flags.go:64] FLAG: --eviction-hard=""
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.618019 4731 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.618023 4731 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.618027 4731 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.618031 4731 flags.go:64] FLAG: --eviction-soft=""
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.618035 4731 flags.go:64] FLAG: --eviction-soft-grace-period=""
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.618039 4731 flags.go:64] FLAG: --exit-on-lock-contention="false"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.618043 4731 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.618047 4731 flags.go:64] FLAG: --experimental-mounter-path=""
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.618051 4731 flags.go:64] FLAG: --fail-cgroupv1="false"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.618055 4731 flags.go:64] FLAG: --fail-swap-on="true"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.618059 4731 flags.go:64] FLAG: --feature-gates=""
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.618065 4731 flags.go:64] FLAG: --file-check-frequency="20s"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.618080 4731 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.618086 4731 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.618091 4731 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.618095 4731 flags.go:64] FLAG: --healthz-port="10248"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.618100 4731 flags.go:64] FLAG: --help="false"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.618104 4731 flags.go:64] FLAG: --hostname-override=""
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.618108 4731 flags.go:64] FLAG: --housekeeping-interval="10s"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.618113 4731 flags.go:64] FLAG: --http-check-frequency="20s"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.618117 4731 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.618121 4731 flags.go:64] FLAG: --image-credential-provider-config=""
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.618125 4731 flags.go:64] FLAG: --image-gc-high-threshold="85"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.618129 4731 flags.go:64] FLAG: --image-gc-low-threshold="80"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.618133 4731 flags.go:64] FLAG: --image-service-endpoint=""
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.618140 4731 flags.go:64] FLAG: --kernel-memcg-notification="false"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.618144 4731 flags.go:64] FLAG: --kube-api-burst="100"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.618148 4731 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.618152 4731 flags.go:64] FLAG: --kube-api-qps="50"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.618156 4731 flags.go:64] FLAG: --kube-reserved=""
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.618160 4731 flags.go:64] FLAG: --kube-reserved-cgroup=""
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.618164 4731 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.618168 4731 flags.go:64] FLAG: --kubelet-cgroups=""
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.618172 4731 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.618181 4731 flags.go:64] FLAG: --lock-file=""
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.618185 4731 flags.go:64] FLAG: --log-cadvisor-usage="false"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.618189 4731 flags.go:64] FLAG: --log-flush-frequency="5s"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.618194 4731 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.618202 4731 flags.go:64] FLAG: --log-json-split-stream="false"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.618207 4731 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.618212 4731 flags.go:64] FLAG: --log-text-split-stream="false"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.618217 4731 flags.go:64] FLAG: --logging-format="text"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.618222 4731 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.618228 4731 flags.go:64] FLAG: --make-iptables-util-chains="true"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.618233 4731 flags.go:64] FLAG: --manifest-url=""
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.618238 4731 flags.go:64] FLAG: --manifest-url-header=""
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.618245 4731 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.618250 4731 flags.go:64] FLAG: --max-open-files="1000000"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.618263 4731 flags.go:64] FLAG: --max-pods="110"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.618268 4731 flags.go:64] FLAG: --maximum-dead-containers="-1"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.618273 4731 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.618282 4731 flags.go:64] FLAG: --memory-manager-policy="None"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.618287 4731 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.618292 4731 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.618296 4731 flags.go:64] FLAG: --node-ip="192.168.126.11"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.618300 4731 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.618310 4731 flags.go:64] FLAG: --node-status-max-images="50"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.618315 4731 flags.go:64] FLAG: --node-status-update-frequency="10s"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.618319 4731 flags.go:64] FLAG: --oom-score-adj="-999"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.618323 4731 flags.go:64] FLAG: --pod-cidr=""
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.618327 4731 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.618336 4731 flags.go:64] FLAG: --pod-manifest-path=""
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.618340 4731 flags.go:64] FLAG: --pod-max-pids="-1"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.618344 4731 flags.go:64] FLAG: --pods-per-core="0"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.618348 4731 flags.go:64] FLAG: --port="10250"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.618352 4731 flags.go:64] FLAG: --protect-kernel-defaults="false"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.618357 4731 flags.go:64] FLAG: --provider-id=""
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.618361 4731 flags.go:64] FLAG: --qos-reserved=""
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.618365 4731 flags.go:64] FLAG: --read-only-port="10255"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.618369 4731 flags.go:64] FLAG: --register-node="true"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.618373 4731 flags.go:64] FLAG: --register-schedulable="true"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.618377 4731 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.618386 4731 flags.go:64] FLAG: --registry-burst="10"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.618390 4731 flags.go:64] FLAG: --registry-qps="5"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.618395 4731 flags.go:64] FLAG: --reserved-cpus=""
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.618400 4731 flags.go:64] FLAG: --reserved-memory=""
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.618406 4731 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.618411 4731 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.618416 4731 flags.go:64] FLAG: --rotate-certificates="false"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.618420 4731 flags.go:64] FLAG: --rotate-server-certificates="false"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.618425 4731 flags.go:64] FLAG: --runonce="false"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.618429 4731 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.618433 4731 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.618440 4731 flags.go:64] FLAG: --seccomp-default="false"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.618449 4731 flags.go:64] FLAG: --serialize-image-pulls="true"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.618454 4731 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.618458 4731 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.618462 4731 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.618467 4731 flags.go:64] FLAG: --storage-driver-password="root"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.618470 4731 flags.go:64] FLAG: --storage-driver-secure="false"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.618475 4731 flags.go:64] FLAG: --storage-driver-table="stats"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.618479 4731 flags.go:64] FLAG: --storage-driver-user="root"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.618483 4731 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.618487 4731 flags.go:64] FLAG: --sync-frequency="1m0s"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.618491 4731 flags.go:64] FLAG: --system-cgroups=""
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.618495 4731 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.618502 4731 flags.go:64] FLAG: --system-reserved-cgroup=""
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.618506 4731 flags.go:64] FLAG: --tls-cert-file=""
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.618510 4731 flags.go:64] FLAG: --tls-cipher-suites="[]"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.618517 4731 flags.go:64] FLAG: --tls-min-version=""
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.618521 4731 flags.go:64] FLAG: --tls-private-key-file=""
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.618525 4731 flags.go:64] FLAG: --topology-manager-policy="none"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.618534 4731 flags.go:64] FLAG: --topology-manager-policy-options=""
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.618539 4731 flags.go:64] FLAG: --topology-manager-scope="container"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.618543 4731 flags.go:64] FLAG: --v="2"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.618549 4731 flags.go:64] FLAG: --version="false"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.618555 4731 flags.go:64] FLAG: --vmodule=""
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.618560 4731 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.618568 4731 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.621490 4731 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.621537 4731 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.621547 4731 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.621557 4731 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.621563 4731 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.621586 4731 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.621591 4731 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.621596 4731 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.621601 4731 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.621606 4731 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.621611 4731 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.621620 4731 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.621625 4731 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.621630 4731 feature_gate.go:330] unrecognized feature gate: OVNObservability
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.621635 4731 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.621639 4731 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.621647 4731 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.621652 4731 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.621658 4731 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.621664 4731 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.621669 4731 feature_gate.go:330] unrecognized feature gate: SignatureStores
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.621677 4731 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.621684 4731 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.621692 4731 feature_gate.go:330] unrecognized feature gate: NewOLM
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.621698 4731 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.621703 4731 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.621707 4731 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.621727 4731 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.621733 4731 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.621738 4731 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.621743 4731 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.621747 4731 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.621752 4731 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.621757 4731 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.621762 4731 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.621767 4731 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.621772 4731 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.621776 4731 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.621780 4731 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.621785 4731 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.621789 4731 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.621793 4731 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.621798 4731 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.621804 4731 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.621809 4731 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.621814 4731 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.621818 4731 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.621822 4731 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.621827 4731 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.621831 4731 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.621835 4731 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.621839 4731 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.621843 4731 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.621847 4731 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.621852 4731 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.621856 4731 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.621860 4731 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.621864 4731 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.621868 4731 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.621873 4731 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.621877 4731 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.621881 4731 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.621885 4731 feature_gate.go:330] unrecognized feature gate: Example
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.621889 4731 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.621893 4731 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.621898 4731 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.621903 4731 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.621907 4731 feature_gate.go:330] unrecognized feature gate: PinnedImages
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.621911 4731 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.621915 4731 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.621919 4731 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.621928 4731 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.631549 4731 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.631626 4731 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.631711 4731 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.631722 4731 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.631727 4731 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.631732 4731 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.631737 4731 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.631742 4731 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.631745 4731 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.631750 4731 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.631755 4731 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.631761 4731 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.631767 4731 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.631770 4731 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.631774 4731 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.631778 4731 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.631782 4731 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.631785 4731 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.631788 4731 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.631792 4731 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.631796 4731 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.631799 4731 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.631803 4731 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.631807 4731 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.631810 4731 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Nov 29 07:06:01 crc
kubenswrapper[4731]: W1129 07:06:01.631814 4731 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.631817 4731 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.631821 4731 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.631824 4731 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.631829 4731 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.631833 4731 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.631837 4731 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.631841 4731 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.631845 4731 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.631851 4731 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.631856 4731 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.631861 4731 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.631866 4731 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.631872 4731 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.631878 4731 
feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.631883 4731 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.631888 4731 feature_gate.go:330] unrecognized feature gate: Example Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.631893 4731 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.631897 4731 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.631902 4731 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.631906 4731 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.631909 4731 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.631913 4731 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.631917 4731 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.631921 4731 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.631925 4731 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.631928 4731 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.631932 4731 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.631936 4731 feature_gate.go:330] unrecognized feature gate: 
OpenShiftPodSecurityAdmission Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.631941 4731 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.631945 4731 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.631948 4731 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.631952 4731 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.631955 4731 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.631960 4731 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.631965 4731 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.631969 4731 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.631973 4731 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.631978 4731 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.631981 4731 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.631985 4731 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.631989 4731 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.631993 4731 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.631996 4731 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.632000 4731 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.632003 4731 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.632007 4731 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.632011 4731 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.632019 4731 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false 
TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.632141 4731 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.632147 4731 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.632152 4731 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.632156 4731 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.632160 4731 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.632165 4731 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.632170 4731 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.632174 4731 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.632179 4731 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.632183 4731 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.632187 4731 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.632190 4731 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.632195 4731 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.632200 4731 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.632204 4731 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.632209 4731 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.632212 4731 feature_gate.go:330] unrecognized feature gate: Example Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.632216 4731 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.632220 4731 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.632224 4731 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.632227 4731 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 
07:06:01.632231 4731 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.632235 4731 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.632239 4731 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.632244 4731 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.632247 4731 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.632252 4731 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.632256 4731 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.632260 4731 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.632265 4731 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.632269 4731 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.632274 4731 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.632277 4731 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.632281 4731 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.632285 4731 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 
07:06:01.632289 4731 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.632293 4731 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.632298 4731 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.632302 4731 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.632307 4731 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.632311 4731 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.632314 4731 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.632318 4731 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.632322 4731 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.632328 4731 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.632332 4731 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.632336 4731 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.632340 4731 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.632343 4731 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.632347 4731 feature_gate.go:330] unrecognized feature 
gate: VSphereMultiVCenters Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.632350 4731 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.632354 4731 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.632358 4731 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.632362 4731 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.632365 4731 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.632369 4731 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.632373 4731 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.632376 4731 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.632380 4731 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.632384 4731 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.632389 4731 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.632394 4731 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.632398 4731 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.632402 4731 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.632407 4731 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.632410 4731 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.632414 4731 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.632418 4731 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.632422 4731 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.632426 4731 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.632430 4731 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.632438 4731 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.632934 4731 server.go:940] "Client rotation is on, will bootstrap in background" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.635823 4731 bootstrap.go:85] "Current kubeconfig 
file contents are still valid, no bootstrap necessary" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.635915 4731 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.636486 4731 server.go:997] "Starting client certificate rotation" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.636505 4731 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.636679 4731 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2026-01-13 12:49:37.015721865 +0000 UTC Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.636774 4731 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 1085h43m35.378951015s for next certificate rotation Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.646451 4731 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.648370 4731 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.663314 4731 log.go:25] "Validated CRI v1 runtime API" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.684925 4731 log.go:25] "Validated CRI v1 image API" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.686822 4731 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.690084 4731 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-11-29-06-59-15-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Nov 29 
07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.690170 4731 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:41 fsType:tmpfs blockSize:0}] Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.707181 4731 manager.go:217] Machine: {Timestamp:2025-11-29 07:06:01.705866438 +0000 UTC m=+0.596227561 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:f3d115a6-d015-4b84-85ef-26fa0172b441 BootID:5aaf0a18-6c01-4835-aaaa-2edfd1f90942 Filesystems:[{Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:41 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 
Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:83:95:df Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:83:95:df Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:fe:37:46 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:7e:f2:1b Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:d3:24:12 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:cd:82:2b Speed:-1 Mtu:1496} {Name:eth10 MacAddress:f2:64:60:58:d5:4a Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:56:09:69:1a:24:20 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 
Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.707432 4731 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.707718 4731 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.708330 4731 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.708633 4731 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.708682 4731 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSR
eserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.708960 4731 topology_manager.go:138] "Creating topology manager with none policy" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.708971 4731 container_manager_linux.go:303] "Creating device plugin manager" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.709219 4731 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.709251 4731 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.709725 4731 state_mem.go:36] "Initialized new in-memory state store" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.709822 4731 server.go:1245] "Using root directory" path="/var/lib/kubelet" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.710470 4731 kubelet.go:418] "Attempting to sync node with API server" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.710510 4731 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.710541 4731 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.710554 4731 kubelet.go:324] "Adding apiserver pod source" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.710565 4731 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 
07:06:01.712346 4731 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.712754 4731 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.713528 4731 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.714142 4731 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.714166 4731 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.714173 4731 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.714180 4731 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.714192 4731 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.714199 4731 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.714206 4731 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.714218 4731 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.714226 4731 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.714232 4731 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.716990 4731 
plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.717023 4731 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.717078 4731 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.57:6443: connect: connection refused Nov 29 07:06:01 crc kubenswrapper[4731]: E1129 07:06:01.717218 4731 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.57:6443: connect: connection refused" logger="UnhandledError" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.717360 4731 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.717367 4731 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.57:6443: connect: connection refused Nov 29 07:06:01 crc kubenswrapper[4731]: E1129 07:06:01.717528 4731 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.57:6443: connect: connection refused" logger="UnhandledError" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.717859 4731 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.57:6443: connect: connection refused Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.717973 4731 server.go:1280] "Started kubelet" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.718394 4731 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.718396 4731 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.719018 4731 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 29 07:06:01 crc systemd[1]: Started Kubernetes Kubelet. Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.728972 4731 server.go:460] "Adding debug handlers to kubelet server" Nov 29 07:06:01 crc kubenswrapper[4731]: E1129 07:06:01.730087 4731 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.129.56.57:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187c686abfe924d1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-29 07:06:01.717941457 +0000 UTC m=+0.608302560,LastTimestamp:2025-11-29 07:06:01.717941457 +0000 UTC m=+0.608302560,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.731778 4731 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 
07:06:01.731831 4731 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.732727 4731 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 00:06:42.019664909 +0000 UTC Nov 29 07:06:01 crc kubenswrapper[4731]: E1129 07:06:01.732830 4731 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.732913 4731 volume_manager.go:287] "The desired_state_of_world populator starts" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.732920 4731 volume_manager.go:289] "Starting Kubelet Volume Manager" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.733137 4731 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 29 07:06:01 crc kubenswrapper[4731]: E1129 07:06:01.733381 4731 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.57:6443: connect: connection refused" interval="200ms" Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.738135 4731 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.57:6443: connect: connection refused Nov 29 07:06:01 crc kubenswrapper[4731]: E1129 07:06:01.738376 4731 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.57:6443: connect: connection refused" logger="UnhandledError" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 
07:06:01.739518 4731 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.739541 4731 factory.go:55] Registering systemd factory Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.739554 4731 factory.go:221] Registration of the systemd container factory successfully Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.741457 4731 factory.go:153] Registering CRI-O factory Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.741509 4731 factory.go:221] Registration of the crio container factory successfully Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.741543 4731 factory.go:103] Registering Raw factory Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.741582 4731 manager.go:1196] Started watching for new ooms in manager Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.757693 4731 manager.go:319] Starting recovery of all containers Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.758240 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.758297 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.758311 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" 
volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.758321 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.758331 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.758340 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.758349 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.758360 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.758374 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Nov 29 
07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.758385 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.758397 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.758408 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.758421 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.758432 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.758442 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.758453 4731 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.758464 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.758473 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.758483 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.758493 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.758503 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.758539 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" 
volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.758550 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.758559 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.758571 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.758614 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.758636 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.758647 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" 
volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.758658 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.758668 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.758677 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.758687 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.758698 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.758716 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" 
seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.758758 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.758776 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.758785 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.758794 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.758803 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.758813 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: 
I1129 07:06:01.758824 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.758883 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.758902 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.758952 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.758968 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.758979 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.758988 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.759023 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.759034 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.759043 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.759052 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.759064 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.759086 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.759106 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.759121 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.759136 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.759151 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.759163 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.759175 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.759185 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.759198 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.759209 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.759220 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.759232 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.759244 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" 
seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.759256 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.759277 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.759292 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.759303 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.759316 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.759334 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.759344 4731 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.759354 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.759364 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.759373 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.759407 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.759418 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.759429 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.759439 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.759452 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.759462 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.759471 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.759479 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.759489 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.759497 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.759507 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.759516 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.759525 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.759536 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.759546 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" 
volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.759555 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.759565 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.759593 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.759604 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.759619 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.759629 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" 
volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.759642 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.759652 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.759661 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.759695 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.759706 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.759715 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" 
volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.759725 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.759735 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.759756 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.760543 4731 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.760597 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.760612 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.760623 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.760635 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.760646 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.760657 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.760669 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.760682 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" 
volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.760694 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.760709 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.760721 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.760734 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.760748 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.760761 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.760772 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.760785 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.760798 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.760812 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.760828 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.760839 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" 
volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.760851 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.760861 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.760870 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.760881 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.760891 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.760901 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" 
volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.760910 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.760921 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.760931 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.760940 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.760950 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.760959 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" 
volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.760968 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.760985 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.760995 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.761004 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.761015 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.761025 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" 
seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.761034 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.761043 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.761053 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.761065 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.761076 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.761085 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Nov 29 07:06:01 crc 
kubenswrapper[4731]: I1129 07:06:01.761108 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.761121 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.761135 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.761146 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.761158 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.761168 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.761177 4731 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.761188 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.761197 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.761211 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.761221 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.761231 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.761242 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.761251 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.761261 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.761272 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.761315 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.761329 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.761340 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" 
volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.761350 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.761358 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.761372 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.761395 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.761410 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.761423 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.761433 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.761442 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.761452 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.761464 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.761479 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.761491 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" 
volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.761504 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.761516 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.761528 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.761537 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.761546 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.761558 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" 
volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.761594 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.761616 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.761630 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.761642 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.761653 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.761663 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.761677 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.761688 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.761709 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.761720 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.761732 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.761746 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" 
volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.761758 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.761770 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.761782 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.761795 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.761807 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.761819 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.761832 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.761845 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.761857 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.761869 4731 reconstruct.go:97] "Volume reconstruction finished" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.761878 4731 reconciler.go:26] "Reconciler: start to sync state" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.778524 4731 manager.go:324] Recovery completed Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.789648 4731 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.792350 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.792417 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 
07:06:01.792435 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.794314 4731 cpu_manager.go:225] "Starting CPU manager" policy="none" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.794460 4731 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.794499 4731 state_mem.go:36] "Initialized new in-memory state store" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.803978 4731 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.805463 4731 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.805510 4731 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.805544 4731 kubelet.go:2335] "Starting kubelet main sync loop" Nov 29 07:06:01 crc kubenswrapper[4731]: E1129 07:06:01.805622 4731 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 29 07:06:01 crc kubenswrapper[4731]: W1129 07:06:01.807003 4731 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.57:6443: connect: connection refused Nov 29 07:06:01 crc kubenswrapper[4731]: E1129 07:06:01.807079 4731 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.57:6443: connect: connection refused" logger="UnhandledError" 
Nov 29 07:06:01 crc kubenswrapper[4731]: E1129 07:06:01.833326 4731 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.860298 4731 policy_none.go:49] "None policy: Start" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.863388 4731 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.863433 4731 state_mem.go:35] "Initializing new in-memory state store" Nov 29 07:06:01 crc kubenswrapper[4731]: E1129 07:06:01.906531 4731 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.915394 4731 manager.go:334] "Starting Device Plugin manager" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.915500 4731 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.915515 4731 server.go:79] "Starting device plugin registration server" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.916151 4731 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.916171 4731 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.916338 4731 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.916457 4731 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Nov 29 07:06:01 crc kubenswrapper[4731]: I1129 07:06:01.916465 4731 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 29 07:06:01 crc kubenswrapper[4731]: E1129 07:06:01.925628 4731 eviction_manager.go:285] "Eviction 
manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 29 07:06:01 crc kubenswrapper[4731]: E1129 07:06:01.938035 4731 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.57:6443: connect: connection refused" interval="400ms" Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.017229 4731 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.018706 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.018750 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.018761 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.018785 4731 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 29 07:06:02 crc kubenswrapper[4731]: E1129 07:06:02.019344 4731 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.57:6443: connect: connection refused" node="crc" Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.107142 4731 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.107358 4731 kubelet_node_status.go:401] "Setting node annotation 
to enable volume controller attach/detach" Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.109396 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.109438 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.109450 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.109691 4731 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.109982 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.110061 4731 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.110591 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.110616 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.110627 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.110785 4731 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.111008 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.111057 4731 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.111759 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.111812 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.111823 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.112160 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.112183 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.112192 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.112256 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.112284 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.112297 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.112468 4731 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:06:02 crc kubenswrapper[4731]: 
I1129 07:06:02.112626 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.112664 4731 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.113180 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.113213 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.113226 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.113338 4731 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.113424 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.113454 4731 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.113721 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.113753 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.113772 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.114068 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.114097 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.114159 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.114252 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.114288 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.114303 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.114354 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.114379 4731 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.115221 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.115240 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.115253 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.166761 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.166830 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.166880 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 29 07:06:02 crc 
kubenswrapper[4731]: I1129 07:06:02.166905 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.166926 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.166947 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.167034 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.167097 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.167241 4731 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.167329 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.167367 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.167398 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.167483 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.167527 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" 
(UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.167663 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.219524 4731 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.221265 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.221321 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.221332 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.221372 4731 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 29 07:06:02 crc kubenswrapper[4731]: E1129 07:06:02.222074 4731 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.57:6443: connect: connection refused" node="crc" Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.270149 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.270245 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.270282 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.270312 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.270345 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.270369 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.270391 4731 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.270412 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.270423 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.270477 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.270517 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.270442 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") 
" pod="openshift-etcd/etcd-crc" Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.270485 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.270594 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.270502 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.270616 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.270616 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.270425 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: 
\"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.270641 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.270626 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.270650 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.270683 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.270708 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 29 07:06:02 crc 
kubenswrapper[4731]: I1129 07:06:02.270703 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.270727 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.270748 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.270775 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.270791 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.270813 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.270937 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 29 07:06:02 crc kubenswrapper[4731]: E1129 07:06:02.339609 4731 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.57:6443: connect: connection refused" interval="800ms" Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.457008 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.477210 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:06:02 crc kubenswrapper[4731]: W1129 07:06:02.490054 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-1050c9d9cf04e33296774fd9750a688bd0636fe89cc0fc7c9e59826a70b01ae9 WatchSource:0}: Error finding container 1050c9d9cf04e33296774fd9750a688bd0636fe89cc0fc7c9e59826a70b01ae9: Status 404 returned error can't find the container with id 1050c9d9cf04e33296774fd9750a688bd0636fe89cc0fc7c9e59826a70b01ae9 Nov 29 07:06:02 crc kubenswrapper[4731]: W1129 07:06:02.497633 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-f8bc1763b57f01355ed7173e91f8ad5d8302ad67ac207aef6514f8665bf6ad92 WatchSource:0}: Error finding container f8bc1763b57f01355ed7173e91f8ad5d8302ad67ac207aef6514f8665bf6ad92: Status 404 returned error can't find the container with id f8bc1763b57f01355ed7173e91f8ad5d8302ad67ac207aef6514f8665bf6ad92 Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.508215 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 29 07:06:02 crc kubenswrapper[4731]: W1129 07:06:02.525974 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-33b297aefc06de1bf91ca77b7dd5ff094966f2af62cd4f460fb31c313004216e WatchSource:0}: Error finding container 33b297aefc06de1bf91ca77b7dd5ff094966f2af62cd4f460fb31c313004216e: Status 404 returned error can't find the container with id 33b297aefc06de1bf91ca77b7dd5ff094966f2af62cd4f460fb31c313004216e Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.539896 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.546381 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 29 07:06:02 crc kubenswrapper[4731]: W1129 07:06:02.547789 4731 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.57:6443: connect: connection refused Nov 29 07:06:02 crc kubenswrapper[4731]: E1129 07:06:02.547920 4731 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.57:6443: connect: connection refused" logger="UnhandledError" Nov 29 07:06:02 crc kubenswrapper[4731]: W1129 07:06:02.551513 4731 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.57:6443: connect: connection refused Nov 29 07:06:02 crc kubenswrapper[4731]: E1129 07:06:02.551592 4731 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.57:6443: connect: connection refused" logger="UnhandledError" Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.622790 4731 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.719405 4731 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.57:6443: connect: connection refused Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.733410 4731 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 23:25:50.50074413 +0000 UTC Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.741430 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.741553 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.741566 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.741631 4731 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 29 07:06:02 crc kubenswrapper[4731]: 
E1129 07:06:02.742427 4731 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.57:6443: connect: connection refused" node="crc" Nov 29 07:06:02 crc kubenswrapper[4731]: W1129 07:06:02.751801 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-c9b5850e54c3c3a3fdbc1507bb25f1e174a7207d3fe1faac3d303eb2811c0cd7 WatchSource:0}: Error finding container c9b5850e54c3c3a3fdbc1507bb25f1e174a7207d3fe1faac3d303eb2811c0cd7: Status 404 returned error can't find the container with id c9b5850e54c3c3a3fdbc1507bb25f1e174a7207d3fe1faac3d303eb2811c0cd7 Nov 29 07:06:02 crc kubenswrapper[4731]: W1129 07:06:02.759204 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-7be39ccb9dfd222638b492595552927dc0c3d04d89aec55796a4a013f4444bc7 WatchSource:0}: Error finding container 7be39ccb9dfd222638b492595552927dc0c3d04d89aec55796a4a013f4444bc7: Status 404 returned error can't find the container with id 7be39ccb9dfd222638b492595552927dc0c3d04d89aec55796a4a013f4444bc7 Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.810630 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f8bc1763b57f01355ed7173e91f8ad5d8302ad67ac207aef6514f8665bf6ad92"} Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.816057 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"1050c9d9cf04e33296774fd9750a688bd0636fe89cc0fc7c9e59826a70b01ae9"} Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.817344 4731 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"7be39ccb9dfd222638b492595552927dc0c3d04d89aec55796a4a013f4444bc7"} Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.820737 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"c9b5850e54c3c3a3fdbc1507bb25f1e174a7207d3fe1faac3d303eb2811c0cd7"} Nov 29 07:06:02 crc kubenswrapper[4731]: I1129 07:06:02.823246 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"33b297aefc06de1bf91ca77b7dd5ff094966f2af62cd4f460fb31c313004216e"} Nov 29 07:06:02 crc kubenswrapper[4731]: W1129 07:06:02.862633 4731 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.57:6443: connect: connection refused Nov 29 07:06:02 crc kubenswrapper[4731]: E1129 07:06:02.862781 4731 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.57:6443: connect: connection refused" logger="UnhandledError" Nov 29 07:06:03 crc kubenswrapper[4731]: E1129 07:06:03.141216 4731 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.57:6443: connect: connection refused" interval="1.6s" Nov 29 07:06:03 crc 
kubenswrapper[4731]: W1129 07:06:03.276341 4731 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.57:6443: connect: connection refused Nov 29 07:06:03 crc kubenswrapper[4731]: E1129 07:06:03.276464 4731 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.57:6443: connect: connection refused" logger="UnhandledError" Nov 29 07:06:03 crc kubenswrapper[4731]: I1129 07:06:03.542726 4731 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:06:03 crc kubenswrapper[4731]: I1129 07:06:03.545249 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:03 crc kubenswrapper[4731]: I1129 07:06:03.545296 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:03 crc kubenswrapper[4731]: I1129 07:06:03.545311 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:03 crc kubenswrapper[4731]: I1129 07:06:03.545341 4731 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 29 07:06:03 crc kubenswrapper[4731]: E1129 07:06:03.545970 4731 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.57:6443: connect: connection refused" node="crc" Nov 29 07:06:03 crc kubenswrapper[4731]: I1129 07:06:03.719276 4731 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.57:6443: connect: connection refused Nov 29 07:06:03 crc kubenswrapper[4731]: I1129 07:06:03.734793 4731 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 04:37:20.130337662 +0000 UTC Nov 29 07:06:03 crc kubenswrapper[4731]: I1129 07:06:03.734882 4731 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 933h31m16.395458398s for next certificate rotation Nov 29 07:06:03 crc kubenswrapper[4731]: I1129 07:06:03.829361 4731 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="f2e329eab7a80200257ef2856155c442452b453c8c9cfa15f790d4688ca74573" exitCode=0 Nov 29 07:06:03 crc kubenswrapper[4731]: I1129 07:06:03.829536 4731 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:06:03 crc kubenswrapper[4731]: I1129 07:06:03.829517 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"f2e329eab7a80200257ef2856155c442452b453c8c9cfa15f790d4688ca74573"} Nov 29 07:06:03 crc kubenswrapper[4731]: I1129 07:06:03.830601 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:03 crc kubenswrapper[4731]: I1129 07:06:03.830647 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:03 crc kubenswrapper[4731]: I1129 07:06:03.830661 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:03 crc kubenswrapper[4731]: I1129 07:06:03.833884 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"6a567cf0e6a828fe031138f9f7874d01fe6e034cb814a3c2116c4918ddab71c2"} Nov 29 07:06:03 crc kubenswrapper[4731]: I1129 07:06:03.833931 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"6101f1cec3dc38ec587fb109d42b09c353cf5ed6d89c9a2b799eb456b821cf58"} Nov 29 07:06:03 crc kubenswrapper[4731]: I1129 07:06:03.836293 4731 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="34b7d71882ecd45557e02dfe61f572da923732714cd4480848340fbe72dabcd8" exitCode=0 Nov 29 07:06:03 crc kubenswrapper[4731]: I1129 07:06:03.836354 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"34b7d71882ecd45557e02dfe61f572da923732714cd4480848340fbe72dabcd8"} Nov 29 07:06:03 crc kubenswrapper[4731]: I1129 07:06:03.836429 4731 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:06:03 crc kubenswrapper[4731]: I1129 07:06:03.837205 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:03 crc kubenswrapper[4731]: I1129 07:06:03.837232 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:03 crc kubenswrapper[4731]: I1129 07:06:03.837250 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:03 crc kubenswrapper[4731]: I1129 07:06:03.838305 4731 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" 
containerID="cec7888f2055543d9210eb88271cf858bb1c6c9daafcf8d5eebe5ae66b140be3" exitCode=0 Nov 29 07:06:03 crc kubenswrapper[4731]: I1129 07:06:03.838380 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"cec7888f2055543d9210eb88271cf858bb1c6c9daafcf8d5eebe5ae66b140be3"} Nov 29 07:06:03 crc kubenswrapper[4731]: I1129 07:06:03.838543 4731 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:06:03 crc kubenswrapper[4731]: I1129 07:06:03.838805 4731 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:06:03 crc kubenswrapper[4731]: I1129 07:06:03.839715 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:03 crc kubenswrapper[4731]: I1129 07:06:03.839743 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:03 crc kubenswrapper[4731]: I1129 07:06:03.839769 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:03 crc kubenswrapper[4731]: I1129 07:06:03.839780 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:03 crc kubenswrapper[4731]: I1129 07:06:03.839747 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:03 crc kubenswrapper[4731]: I1129 07:06:03.839891 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:03 crc kubenswrapper[4731]: I1129 07:06:03.841439 4731 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="5a56ab2b9d47cd0dcd632f98defa5f4bcb711032701ea4ff28701daa43a2dca9" exitCode=0 
Nov 29 07:06:03 crc kubenswrapper[4731]: I1129 07:06:03.841506 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"5a56ab2b9d47cd0dcd632f98defa5f4bcb711032701ea4ff28701daa43a2dca9"} Nov 29 07:06:03 crc kubenswrapper[4731]: I1129 07:06:03.842235 4731 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:06:03 crc kubenswrapper[4731]: I1129 07:06:03.843690 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:03 crc kubenswrapper[4731]: I1129 07:06:03.843729 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:03 crc kubenswrapper[4731]: I1129 07:06:03.843745 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:04 crc kubenswrapper[4731]: E1129 07:06:04.599208 4731 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.129.56.57:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187c686abfe924d1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-29 07:06:01.717941457 +0000 UTC m=+0.608302560,LastTimestamp:2025-11-29 07:06:01.717941457 +0000 UTC m=+0.608302560,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 29 07:06:04 crc kubenswrapper[4731]: I1129 07:06:04.753724 4731 csi_plugin.go:884] Failed to contact API server when 
waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.57:6443: connect: connection refused Nov 29 07:06:04 crc kubenswrapper[4731]: E1129 07:06:04.755247 4731 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.57:6443: connect: connection refused" interval="3.2s" Nov 29 07:06:04 crc kubenswrapper[4731]: W1129 07:06:04.799113 4731 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.57:6443: connect: connection refused Nov 29 07:06:04 crc kubenswrapper[4731]: E1129 07:06:04.799253 4731 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.57:6443: connect: connection refused" logger="UnhandledError" Nov 29 07:06:04 crc kubenswrapper[4731]: I1129 07:06:04.848803 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b86bdecb59f656fef2662f8fc78d427a4a9816fc211eedca957f20b3103f63a4"} Nov 29 07:06:04 crc kubenswrapper[4731]: I1129 07:06:04.848905 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"5af4e6864674fc088ce1e153b29b66072dc3aae9b7c8c2c5dd3862a505dbac6a"} Nov 29 07:06:04 crc kubenswrapper[4731]: I1129 07:06:04.849059 4731 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:06:04 crc kubenswrapper[4731]: I1129 07:06:04.850487 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:04 crc kubenswrapper[4731]: I1129 07:06:04.850547 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:04 crc kubenswrapper[4731]: I1129 07:06:04.850563 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:04 crc kubenswrapper[4731]: I1129 07:06:04.854809 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b19f3d5d17a1f3552b659627294f3a47d667e2da2d3ee71e44fc876614f2f5a9"} Nov 29 07:06:04 crc kubenswrapper[4731]: I1129 07:06:04.854907 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c0b6a87bb787bc87ec1a2b21918e754bda6b0f197bfdae20b010e8464cb9e70d"} Nov 29 07:06:04 crc kubenswrapper[4731]: I1129 07:06:04.858853 4731 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="6323b45d0e45e84e5c419dded429001a3c2a3bfb950d1069c12a840b07c5d581" exitCode=0 Nov 29 07:06:04 crc kubenswrapper[4731]: I1129 07:06:04.858976 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"6323b45d0e45e84e5c419dded429001a3c2a3bfb950d1069c12a840b07c5d581"} Nov 29 07:06:04 crc kubenswrapper[4731]: I1129 07:06:04.859079 4731 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:06:04 crc kubenswrapper[4731]: 
I1129 07:06:04.862799 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:04 crc kubenswrapper[4731]: I1129 07:06:04.862840 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:04 crc kubenswrapper[4731]: I1129 07:06:04.862851 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:04 crc kubenswrapper[4731]: I1129 07:06:04.865393 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"e65aa9e951747e7bd3ae1dd6212a34576cd4aa03de1753d6d3f193d4c95ecead"} Nov 29 07:06:04 crc kubenswrapper[4731]: I1129 07:06:04.865545 4731 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:06:04 crc kubenswrapper[4731]: I1129 07:06:04.867349 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:04 crc kubenswrapper[4731]: I1129 07:06:04.867387 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:04 crc kubenswrapper[4731]: I1129 07:06:04.867398 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:04 crc kubenswrapper[4731]: I1129 07:06:04.872188 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"7fe7e74083569ac159e34aecb62fd9a2bc89cb67c25d104efa3ecd93b71742b5"} Nov 29 07:06:04 crc kubenswrapper[4731]: I1129 07:06:04.872262 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"4a859abc8925062e0b6f06edef1a87524357b5115db3c780653a4d378af6ba04"} Nov 29 07:06:04 crc kubenswrapper[4731]: W1129 07:06:04.888846 4731 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.57:6443: connect: connection refused Nov 29 07:06:04 crc kubenswrapper[4731]: E1129 07:06:04.888980 4731 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.57:6443: connect: connection refused" logger="UnhandledError" Nov 29 07:06:05 crc kubenswrapper[4731]: I1129 07:06:05.097300 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 29 07:06:05 crc kubenswrapper[4731]: I1129 07:06:05.148436 4731 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:06:05 crc kubenswrapper[4731]: I1129 07:06:05.179750 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:05 crc kubenswrapper[4731]: I1129 07:06:05.179806 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:05 crc kubenswrapper[4731]: I1129 07:06:05.179818 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:05 crc kubenswrapper[4731]: I1129 07:06:05.179850 4731 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 29 07:06:05 crc kubenswrapper[4731]: E1129 07:06:05.180675 4731 kubelet_node_status.go:99] "Unable to 
register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.57:6443: connect: connection refused" node="crc" Nov 29 07:06:05 crc kubenswrapper[4731]: W1129 07:06:05.698942 4731 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.57:6443: connect: connection refused Nov 29 07:06:05 crc kubenswrapper[4731]: E1129 07:06:05.699099 4731 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.57:6443: connect: connection refused" logger="UnhandledError" Nov 29 07:06:05 crc kubenswrapper[4731]: I1129 07:06:05.719265 4731 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.57:6443: connect: connection refused Nov 29 07:06:05 crc kubenswrapper[4731]: I1129 07:06:05.939725 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"1f488e951e0b022155c26f1b4b73363e8e5b82ab6f14e03f9812fe279ff21d36"} Nov 29 07:06:05 crc kubenswrapper[4731]: I1129 07:06:05.939804 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"1277bda1e922e493f1c36b64a4584458ab41cee4b1930d2d6409a585be9e3b38"} Nov 29 07:06:05 crc kubenswrapper[4731]: I1129 07:06:05.939818 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"1938ff140a7af83ef53896ed4482dd3cf0a5429675ca28291de0915b8da27e81"} Nov 29 07:06:05 crc kubenswrapper[4731]: I1129 07:06:05.939970 4731 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:06:05 crc kubenswrapper[4731]: I1129 07:06:05.940977 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:05 crc kubenswrapper[4731]: I1129 07:06:05.941017 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:05 crc kubenswrapper[4731]: I1129 07:06:05.941027 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:05 crc kubenswrapper[4731]: I1129 07:06:05.942274 4731 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="084da6a5ab5f96212c90ee40656376d78e2d377cfd9a7f0b874f085fead3cbfc" exitCode=0 Nov 29 07:06:05 crc kubenswrapper[4731]: I1129 07:06:05.942366 4731 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:06:05 crc kubenswrapper[4731]: I1129 07:06:05.942368 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"084da6a5ab5f96212c90ee40656376d78e2d377cfd9a7f0b874f085fead3cbfc"} Nov 29 07:06:05 crc kubenswrapper[4731]: I1129 07:06:05.943389 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:05 crc kubenswrapper[4731]: I1129 07:06:05.943428 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:05 crc kubenswrapper[4731]: I1129 07:06:05.943443 
4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:05 crc kubenswrapper[4731]: I1129 07:06:05.947903 4731 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:06:05 crc kubenswrapper[4731]: I1129 07:06:05.948479 4731 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:06:05 crc kubenswrapper[4731]: I1129 07:06:05.948855 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"0e50d4319120f4c6445252762298822db75d04cad45eff91b9ee9e82335e0f6a"} Nov 29 07:06:05 crc kubenswrapper[4731]: I1129 07:06:05.948956 4731 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:06:05 crc kubenswrapper[4731]: I1129 07:06:05.949756 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:05 crc kubenswrapper[4731]: I1129 07:06:05.949784 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:05 crc kubenswrapper[4731]: I1129 07:06:05.949810 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:05 crc kubenswrapper[4731]: I1129 07:06:05.950375 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:05 crc kubenswrapper[4731]: I1129 07:06:05.950407 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:05 crc kubenswrapper[4731]: I1129 07:06:05.950417 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:05 crc 
kubenswrapper[4731]: I1129 07:06:05.951007 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:05 crc kubenswrapper[4731]: I1129 07:06:05.951028 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:05 crc kubenswrapper[4731]: I1129 07:06:05.951036 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:06 crc kubenswrapper[4731]: W1129 07:06:06.332043 4731 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.57:6443: connect: connection refused Nov 29 07:06:06 crc kubenswrapper[4731]: E1129 07:06:06.332194 4731 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.57:6443: connect: connection refused" logger="UnhandledError" Nov 29 07:06:06 crc kubenswrapper[4731]: I1129 07:06:06.719799 4731 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.57:6443: connect: connection refused Nov 29 07:06:06 crc kubenswrapper[4731]: I1129 07:06:06.953254 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"1b45105b8ffd0958e82e91f4a252fd55648e7df7c3e0adaabfef3fac21b40d89"} Nov 29 07:06:06 crc kubenswrapper[4731]: I1129 07:06:06.953323 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"1d4a87ec8979104dcc066b1336256b9940cbccd2aeb5eaba7e9046110386b43c"} Nov 29 07:06:06 crc kubenswrapper[4731]: I1129 07:06:06.953379 4731 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:06:06 crc kubenswrapper[4731]: I1129 07:06:06.953415 4731 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 29 07:06:06 crc kubenswrapper[4731]: I1129 07:06:06.953435 4731 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:06:06 crc kubenswrapper[4731]: I1129 07:06:06.953483 4731 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:06:06 crc kubenswrapper[4731]: I1129 07:06:06.953919 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 29 07:06:06 crc kubenswrapper[4731]: I1129 07:06:06.956036 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:06 crc kubenswrapper[4731]: I1129 07:06:06.956120 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:06 crc kubenswrapper[4731]: I1129 07:06:06.956141 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:06 crc kubenswrapper[4731]: I1129 07:06:06.956115 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:06 crc kubenswrapper[4731]: I1129 07:06:06.956187 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:06 crc kubenswrapper[4731]: I1129 07:06:06.956201 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Nov 29 07:06:06 crc kubenswrapper[4731]: I1129 07:06:06.956366 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:06 crc kubenswrapper[4731]: I1129 07:06:06.956389 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:06 crc kubenswrapper[4731]: I1129 07:06:06.957427 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:07 crc kubenswrapper[4731]: I1129 07:06:07.403898 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:06:07 crc kubenswrapper[4731]: I1129 07:06:07.989169 4731 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 29 07:06:07 crc kubenswrapper[4731]: I1129 07:06:07.989237 4731 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:06:07 crc kubenswrapper[4731]: I1129 07:06:07.989832 4731 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:06:07 crc kubenswrapper[4731]: I1129 07:06:07.990138 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"930e25b3b5d987a7655d880335384b628a769849dd65efe24d54c9478e62ff59"} Nov 29 07:06:07 crc kubenswrapper[4731]: I1129 07:06:07.990190 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"a42a8bbcb40e859357977aedd6999a512573c8d8e76789eed2d7a9e25603b292"} Nov 29 07:06:07 crc kubenswrapper[4731]: I1129 07:06:07.990205 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"9f69b0a37a46439ddeb91cbf2b55a55f7a357cee5a791f0313caa80273f7d974"} Nov 29 07:06:07 crc kubenswrapper[4731]: I1129 07:06:07.990301 4731 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:06:07 crc kubenswrapper[4731]: I1129 07:06:07.991116 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:07 crc kubenswrapper[4731]: I1129 07:06:07.991147 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:07 crc kubenswrapper[4731]: I1129 07:06:07.991156 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:07 crc kubenswrapper[4731]: I1129 07:06:07.991986 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:07 crc kubenswrapper[4731]: I1129 07:06:07.992012 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:07 crc kubenswrapper[4731]: I1129 07:06:07.992023 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:07 crc kubenswrapper[4731]: I1129 07:06:07.992424 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:07 crc kubenswrapper[4731]: I1129 07:06:07.992450 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:07 crc kubenswrapper[4731]: I1129 07:06:07.992460 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:08 crc kubenswrapper[4731]: I1129 07:06:08.097728 4731 patch_prober.go:28] interesting pod/kube-controller-manager-crc 
container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 29 07:06:08 crc kubenswrapper[4731]: I1129 07:06:08.097845 4731 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 29 07:06:08 crc kubenswrapper[4731]: I1129 07:06:08.335009 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 29 07:06:08 crc kubenswrapper[4731]: I1129 07:06:08.335299 4731 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:06:08 crc kubenswrapper[4731]: I1129 07:06:08.336828 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:08 crc kubenswrapper[4731]: I1129 07:06:08.336886 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:08 crc kubenswrapper[4731]: I1129 07:06:08.336899 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:08 crc kubenswrapper[4731]: I1129 07:06:08.359041 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 29 07:06:08 crc kubenswrapper[4731]: I1129 07:06:08.381922 4731 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 
29 07:06:08 crc kubenswrapper[4731]: I1129 07:06:08.383493 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:08 crc kubenswrapper[4731]: I1129 07:06:08.383557 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:08 crc kubenswrapper[4731]: I1129 07:06:08.383596 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:08 crc kubenswrapper[4731]: I1129 07:06:08.383642 4731 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 29 07:06:08 crc kubenswrapper[4731]: I1129 07:06:08.414803 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:06:08 crc kubenswrapper[4731]: I1129 07:06:08.874135 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 29 07:06:08 crc kubenswrapper[4731]: I1129 07:06:08.991686 4731 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:06:08 crc kubenswrapper[4731]: I1129 07:06:08.991750 4731 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:06:08 crc kubenswrapper[4731]: I1129 07:06:08.991686 4731 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:06:08 crc kubenswrapper[4731]: I1129 07:06:08.992956 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:08 crc kubenswrapper[4731]: I1129 07:06:08.992956 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:08 crc kubenswrapper[4731]: I1129 07:06:08.993029 4731 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:08 crc kubenswrapper[4731]: I1129 07:06:08.993046 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:08 crc kubenswrapper[4731]: I1129 07:06:08.992995 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:08 crc kubenswrapper[4731]: I1129 07:06:08.993097 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:08 crc kubenswrapper[4731]: I1129 07:06:08.993383 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:08 crc kubenswrapper[4731]: I1129 07:06:08.993420 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:08 crc kubenswrapper[4731]: I1129 07:06:08.993436 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:09 crc kubenswrapper[4731]: I1129 07:06:09.994450 4731 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 29 07:06:09 crc kubenswrapper[4731]: I1129 07:06:09.994605 4731 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:06:09 crc kubenswrapper[4731]: I1129 07:06:09.996535 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:09 crc kubenswrapper[4731]: I1129 07:06:09.996644 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:09 crc kubenswrapper[4731]: I1129 07:06:09.996664 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:10 crc kubenswrapper[4731]: I1129 07:06:10.181302 4731 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:06:10 crc kubenswrapper[4731]: I1129 07:06:10.181605 4731 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:06:10 crc kubenswrapper[4731]: I1129 07:06:10.183082 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:10 crc kubenswrapper[4731]: I1129 07:06:10.183128 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:10 crc kubenswrapper[4731]: I1129 07:06:10.183142 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:11 crc kubenswrapper[4731]: I1129 07:06:11.704210 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 29 07:06:11 crc kubenswrapper[4731]: I1129 07:06:11.704413 4731 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:06:11 crc kubenswrapper[4731]: I1129 07:06:11.705863 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:11 crc kubenswrapper[4731]: I1129 07:06:11.705906 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:11 crc kubenswrapper[4731]: I1129 07:06:11.705916 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:11 crc kubenswrapper[4731]: E1129 07:06:11.925907 4731 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 29 07:06:12 crc kubenswrapper[4731]: I1129 07:06:12.024709 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-etcd/etcd-crc" Nov 29 07:06:12 crc kubenswrapper[4731]: I1129 07:06:12.024930 4731 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:06:12 crc kubenswrapper[4731]: I1129 07:06:12.026365 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:12 crc kubenswrapper[4731]: I1129 07:06:12.026422 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:12 crc kubenswrapper[4731]: I1129 07:06:12.026436 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:16 crc kubenswrapper[4731]: I1129 07:06:16.520366 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Nov 29 07:06:16 crc kubenswrapper[4731]: I1129 07:06:16.520643 4731 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:06:16 crc kubenswrapper[4731]: I1129 07:06:16.522068 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:16 crc kubenswrapper[4731]: I1129 07:06:16.522114 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:16 crc kubenswrapper[4731]: I1129 07:06:16.522124 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:17 crc kubenswrapper[4731]: I1129 07:06:17.170420 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Nov 29 07:06:17 crc kubenswrapper[4731]: I1129 07:06:17.170783 4731 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:06:17 crc kubenswrapper[4731]: I1129 07:06:17.172442 4731 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:17 crc kubenswrapper[4731]: I1129 07:06:17.172495 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:17 crc kubenswrapper[4731]: I1129 07:06:17.172510 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:17 crc kubenswrapper[4731]: I1129 07:06:17.186440 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Nov 29 07:06:17 crc kubenswrapper[4731]: I1129 07:06:17.404328 4731 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="Get \"https://192.168.126.11:6443/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 29 07:06:17 crc kubenswrapper[4731]: I1129 07:06:17.404462 4731 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 29 07:06:17 crc kubenswrapper[4731]: I1129 07:06:17.742142 4731 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Nov 29 07:06:17 crc kubenswrapper[4731]: E1129 07:06:17.957042 4731 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Nov 29 07:06:18 crc kubenswrapper[4731]: I1129 07:06:18.015340 4731 kubelet_node_status.go:401] 
"Setting node annotation to enable volume controller attach/detach" Nov 29 07:06:18 crc kubenswrapper[4731]: I1129 07:06:18.017442 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:18 crc kubenswrapper[4731]: I1129 07:06:18.017552 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:18 crc kubenswrapper[4731]: I1129 07:06:18.017590 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:18 crc kubenswrapper[4731]: I1129 07:06:18.098374 4731 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 29 07:06:18 crc kubenswrapper[4731]: I1129 07:06:18.098485 4731 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 29 07:06:18 crc kubenswrapper[4731]: E1129 07:06:18.384635 4731 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc" Nov 29 07:06:18 crc kubenswrapper[4731]: W1129 07:06:18.735115 4731 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Nov 29 
07:06:18 crc kubenswrapper[4731]: I1129 07:06:18.735248 4731 trace.go:236] Trace[1229523584]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (29-Nov-2025 07:06:08.733) (total time: 10001ms): Nov 29 07:06:18 crc kubenswrapper[4731]: Trace[1229523584]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (07:06:18.735) Nov 29 07:06:18 crc kubenswrapper[4731]: Trace[1229523584]: [10.001412046s] [10.001412046s] END Nov 29 07:06:18 crc kubenswrapper[4731]: E1129 07:06:18.735278 4731 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Nov 29 07:06:18 crc kubenswrapper[4731]: W1129 07:06:18.780935 4731 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout Nov 29 07:06:18 crc kubenswrapper[4731]: I1129 07:06:18.781075 4731 trace.go:236] Trace[1219078894]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (29-Nov-2025 07:06:08.779) (total time: 10001ms): Nov 29 07:06:18 crc kubenswrapper[4731]: Trace[1219078894]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (07:06:18.780) Nov 29 07:06:18 crc kubenswrapper[4731]: Trace[1219078894]: [10.001675693s] [10.001675693s] END Nov 29 07:06:18 crc kubenswrapper[4731]: E1129 07:06:18.781104 4731 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list 
*v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Nov 29 07:06:20 crc kubenswrapper[4731]: I1129 07:06:20.208397 4731 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]","reason":"Forbidden","details":{},"code":403} Nov 29 07:06:20 crc kubenswrapper[4731]: I1129 07:06:20.208499 4731 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Nov 29 07:06:21 crc kubenswrapper[4731]: I1129 07:06:21.709546 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 29 07:06:21 crc kubenswrapper[4731]: I1129 07:06:21.710677 4731 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:06:21 crc kubenswrapper[4731]: I1129 07:06:21.712026 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:21 crc kubenswrapper[4731]: I1129 07:06:21.712145 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:21 crc kubenswrapper[4731]: I1129 07:06:21.712239 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Nov 29 07:06:21 crc kubenswrapper[4731]: E1129 07:06:21.926689 4731 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 29 07:06:22 crc kubenswrapper[4731]: I1129 07:06:22.413097 4731 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Nov 29 07:06:22 crc kubenswrapper[4731]: [+]log ok Nov 29 07:06:22 crc kubenswrapper[4731]: [+]etcd ok Nov 29 07:06:22 crc kubenswrapper[4731]: [+]poststarthook/openshift.io-api-request-count-filter ok Nov 29 07:06:22 crc kubenswrapper[4731]: [+]poststarthook/openshift.io-startkubeinformers ok Nov 29 07:06:22 crc kubenswrapper[4731]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Nov 29 07:06:22 crc kubenswrapper[4731]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Nov 29 07:06:22 crc kubenswrapper[4731]: [+]poststarthook/start-apiserver-admission-initializer ok Nov 29 07:06:22 crc kubenswrapper[4731]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Nov 29 07:06:22 crc kubenswrapper[4731]: [+]poststarthook/generic-apiserver-start-informers ok Nov 29 07:06:22 crc kubenswrapper[4731]: [+]poststarthook/priority-and-fairness-config-consumer ok Nov 29 07:06:22 crc kubenswrapper[4731]: [+]poststarthook/priority-and-fairness-filter ok Nov 29 07:06:22 crc kubenswrapper[4731]: [+]poststarthook/storage-object-count-tracker-hook ok Nov 29 07:06:22 crc kubenswrapper[4731]: [+]poststarthook/start-apiextensions-informers ok Nov 29 07:06:22 crc kubenswrapper[4731]: [-]poststarthook/start-apiextensions-controllers failed: reason withheld Nov 29 07:06:22 crc kubenswrapper[4731]: [-]poststarthook/crd-informer-synced failed: reason withheld Nov 29 07:06:22 crc kubenswrapper[4731]: [+]poststarthook/start-system-namespaces-controller ok Nov 29 
07:06:22 crc kubenswrapper[4731]: [+]poststarthook/start-cluster-authentication-info-controller ok Nov 29 07:06:22 crc kubenswrapper[4731]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Nov 29 07:06:22 crc kubenswrapper[4731]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Nov 29 07:06:22 crc kubenswrapper[4731]: [+]poststarthook/start-legacy-token-tracking-controller ok Nov 29 07:06:22 crc kubenswrapper[4731]: [+]poststarthook/start-service-ip-repair-controllers ok Nov 29 07:06:22 crc kubenswrapper[4731]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Nov 29 07:06:22 crc kubenswrapper[4731]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Nov 29 07:06:22 crc kubenswrapper[4731]: [+]poststarthook/priority-and-fairness-config-producer ok Nov 29 07:06:22 crc kubenswrapper[4731]: [+]poststarthook/bootstrap-controller ok Nov 29 07:06:22 crc kubenswrapper[4731]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Nov 29 07:06:22 crc kubenswrapper[4731]: [+]poststarthook/start-kube-aggregator-informers ok Nov 29 07:06:22 crc kubenswrapper[4731]: [+]poststarthook/apiservice-status-local-available-controller ok Nov 29 07:06:22 crc kubenswrapper[4731]: [+]poststarthook/apiservice-status-remote-available-controller ok Nov 29 07:06:22 crc kubenswrapper[4731]: [+]poststarthook/apiservice-registration-controller ok Nov 29 07:06:22 crc kubenswrapper[4731]: [+]poststarthook/apiservice-wait-for-first-sync ok Nov 29 07:06:22 crc kubenswrapper[4731]: [+]poststarthook/apiservice-discovery-controller ok Nov 29 07:06:22 crc kubenswrapper[4731]: [+]poststarthook/kube-apiserver-autoregistration ok Nov 29 07:06:22 crc kubenswrapper[4731]: [+]autoregister-completion ok Nov 29 07:06:22 crc kubenswrapper[4731]: [+]poststarthook/apiservice-openapi-controller ok Nov 29 07:06:22 crc kubenswrapper[4731]: [+]poststarthook/apiservice-openapiv3-controller ok Nov 29 07:06:22 crc kubenswrapper[4731]: livez check 
failed Nov 29 07:06:22 crc kubenswrapper[4731]: I1129 07:06:22.413237 4731 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 29 07:06:24 crc kubenswrapper[4731]: I1129 07:06:24.784876 4731 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:06:24 crc kubenswrapper[4731]: I1129 07:06:24.786506 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:24 crc kubenswrapper[4731]: I1129 07:06:24.786613 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:24 crc kubenswrapper[4731]: I1129 07:06:24.786631 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:24 crc kubenswrapper[4731]: I1129 07:06:24.786676 4731 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 29 07:06:24 crc kubenswrapper[4731]: E1129 07:06:24.791903 4731 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.194275 4731 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.194331 4731 trace.go:236] Trace[1082710712]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (29-Nov-2025 07:06:10.343) (total time: 14850ms): Nov 29 07:06:25 crc kubenswrapper[4731]: Trace[1082710712]: ---"Objects listed" error: 14850ms (07:06:25.194) Nov 29 07:06:25 crc kubenswrapper[4731]: Trace[1082710712]: [14.850587373s] [14.850587373s] END Nov 29 07:06:25 crc kubenswrapper[4731]: 
I1129 07:06:25.194365 4731 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.196074 4731 trace.go:236] Trace[86158476]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (29-Nov-2025 07:06:11.515) (total time: 13680ms): Nov 29 07:06:25 crc kubenswrapper[4731]: Trace[86158476]: ---"Objects listed" error: 13680ms (07:06:25.196) Nov 29 07:06:25 crc kubenswrapper[4731]: Trace[86158476]: [13.68066459s] [13.68066459s] END Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.196095 4731 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.295496 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.300340 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.348485 4731 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": EOF" start-of-body= Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.348679 4731 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": EOF" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.349162 4731 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure 
output="Get \"https://192.168.126.11:17697/healthz\": EOF" start-of-body= Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.349246 4731 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": EOF" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.746666 4731 apiserver.go:52] "Watching apiserver" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.750108 4731 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.750613 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb"] Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.751127 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.751315 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:06:25 crc kubenswrapper[4731]: E1129 07:06:25.751454 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.751689 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.751761 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.751831 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:06:25 crc kubenswrapper[4731]: E1129 07:06:25.751829 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:06:25 crc kubenswrapper[4731]: E1129 07:06:25.751897 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.752039 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.754643 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.754787 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.754716 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.755064 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.755136 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.755276 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.756233 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.756623 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.757313 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.787428 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.806389 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.832448 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.840413 4731 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.846439 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"174eea76-5d67-4e10-9a17-b4efc32676b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a567cf0e6a828fe031138f9f7874d01fe6e034cb814a3c2116c4918ddab71c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6101f1cec3dc38ec587fb109d42b09c353cf5ed6d89c9a2b799eb456b821cf58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5af4e6864674fc088ce1e153b29b66072dc3aae9b7c8c2c5dd3862a505dbac6a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86bdecb59f656fef2662f8fc78d427a4a9816fc211eedca957f20b3103f63a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.861807 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.877622 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.894833 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.902251 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.902315 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod 
\"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.902349 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.902377 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.902410 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.902443 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.902468 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.902504 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.902816 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.902857 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.902908 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.902949 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.902945 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: 
"kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.902979 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.903075 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.903115 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.903142 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.903172 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: 
\"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.903199 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.903225 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.903250 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.903278 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.903305 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.903330 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.903354 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.903376 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.903400 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.903428 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.903453 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 29 07:06:25 crc 
kubenswrapper[4731]: I1129 07:06:25.903479 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.903507 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.903532 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.903559 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.903615 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.903644 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod 
\"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.903669 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.903717 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.903752 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.903788 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.903816 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.903842 4731 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.903866 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.903892 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.903921 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.903944 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.903968 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod 
\"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.903995 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.904019 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.904047 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.904073 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.904104 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.904130 4731 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.904153 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.904180 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.904213 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.904236 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.904260 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: 
\"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.904284 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.904307 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.904333 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.904358 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.904382 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.904431 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.904459 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.904488 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.904515 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.904544 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.904593 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: 
\"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.904622 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.904652 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.904681 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.904819 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.904858 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.904889 4731 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.904915 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.904963 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.904987 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.905010 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.905035 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod 
\"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.905065 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.905094 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.905119 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.905193 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.905220 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.905245 4731 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.905270 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.905293 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.905322 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.905346 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.905370 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod 
\"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.905396 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.905421 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.905445 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.905468 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.905493 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.905515 4731 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.905542 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.905589 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.905615 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.905641 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.905669 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.905694 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.905717 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.905745 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.905770 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.905798 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.905823 4731 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.905849 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.905871 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.905898 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.905921 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.905946 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod 
\"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.905979 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.906007 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.906031 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.906058 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.906088 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.906115 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" 
(UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.906139 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.906163 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.906193 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.906218 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.906244 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 
07:06:25.906271 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.906296 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.906320 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.906345 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.906370 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.906400 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") 
pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.906423 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.906449 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.906475 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.907511 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.907606 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.907639 4731 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.907673 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.907715 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.907748 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.907783 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.907815 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: 
\"43509403-f426-496e-be36-56cef71462f5\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.907856 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.907884 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.907916 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.907953 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.907985 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.908065 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.908093 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.908120 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.908148 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.908191 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.908239 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " 
Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.908269 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.908298 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.908326 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.908355 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.908381 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.908406 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.908434 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.908462 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.908488 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.908514 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.908545 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 
07:06:25.908591 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.908617 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.908641 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.908671 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.908701 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.908732 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.917559 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.903159 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.923624 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.923995 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.924117 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.924442 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.924676 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.924688 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.903178 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.924997 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.903412 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.903426 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.903435 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.903481 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.903550 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.903649 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.903675 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.903707 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.903722 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.903842 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.904010 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.904072 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.904067 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.904251 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.904329 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.904378 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.904504 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.904660 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.904698 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.904855 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.904876 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.904968 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.905240 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.905290 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.905354 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.905534 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.905553 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.905705 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.905822 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.905937 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.906517 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.906887 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.907110 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.907158 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.907371 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.907470 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.907649 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.907709 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.907917 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.908189 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.908501 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: E1129 07:06:25.908777 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:06:26.408733791 +0000 UTC m=+25.299094894 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.908774 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.908972 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.909638 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.909767 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.909892 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.909943 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.909983 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.910280 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.910305 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.910316 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.910697 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.910844 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.910859 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.910859 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.910904 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.910918 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.911068 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.911098 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.911336 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.911405 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.910957 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.915502 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.915728 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.915978 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.916120 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.916615 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.917038 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.917180 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.917410 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.917805 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.918675 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.918782 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.919032 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.919040 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.919527 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.919720 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.920000 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.920067 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.920375 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.920726 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.920806 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.921299 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.921478 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.922159 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.922737 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.925359 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.925661 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.926141 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.926151 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.926335 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.926513 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.926536 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.926597 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.926637 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 
29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.926673 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.926706 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.926738 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.926768 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.926782 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.926800 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.926830 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.926861 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.926912 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.926940 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.926973 4731 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.927003 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.927027 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.927055 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.927079 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.927086 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod 
"b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.927105 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.927132 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.927155 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.927181 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.927201 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: 
\"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.927220 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.927237 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.927283 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.927296 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.927317 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.927337 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.927356 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.927416 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.927443 4731 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.927469 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.927499 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.927534 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.927556 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " 
pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.927608 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.927618 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.927635 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.927662 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.927702 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.927723 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.927748 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.927773 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.927797 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 29 07:06:25 crc 
kubenswrapper[4731]: I1129 07:06:25.927909 4731 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.927923 4731 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.927937 4731 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.927951 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.927964 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.927986 4731 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.928034 4731 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.928507 4731 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.928919 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.929200 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.930215 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.903159 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.930603 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.930857 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.931208 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.931637 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.931635 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.931794 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.932128 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.932176 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.932214 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.932507 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.932677 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.932873 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.933076 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.933115 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.933320 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.933764 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.934892 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.935022 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.935192 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.935299 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.935615 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.935677 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.935512 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.935944 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.936133 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.936336 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.936481 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.936620 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.936844 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.937297 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.937383 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.937815 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.939040 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.939480 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.939898 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.940238 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.940515 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:06:25 crc kubenswrapper[4731]: I1129 07:06:25.941094 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.332165 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.332366 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.332771 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.332970 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.333453 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.333908 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.333931 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.333944 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.334173 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.334382 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.334457 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.334619 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.334931 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.335450 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.333545 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.333632 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.336031 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.336127 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.336189 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:06:26 crc kubenswrapper[4731]: E1129 07:06:26.336418 4731 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 29 07:06:26 crc kubenswrapper[4731]: E1129 07:06:26.336584 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-29 07:06:26.836548867 +0000 UTC m=+25.726909970 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.337017 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.337081 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.337106 4731 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.337123 4731 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.337140 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.337154 4731 
reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.337173 4731 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.337184 4731 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.337189 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.337195 4731 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.337254 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.337274 4731 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc 
kubenswrapper[4731]: I1129 07:06:26.337288 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.337299 4731 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.337310 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.337322 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.337332 4731 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.337343 4731 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.337397 4731 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.337411 4731 reconciler_common.go:293] "Volume 
detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.337425 4731 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.337438 4731 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.337193 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.337207 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.337474 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.336439 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.337742 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.338281 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.338417 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.338481 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.338625 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.338922 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.339005 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.339081 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.339442 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.339525 4731 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: E1129 07:06:26.339957 4731 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.339997 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: E1129 07:06:26.340101 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-29 07:06:26.840079259 +0000 UTC m=+25.730440362 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.340203 4731 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.340224 4731 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.340244 4731 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.340261 4731 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.340275 4731 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.340289 4731 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.340304 4731 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.340319 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.340335 4731 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.340350 4731 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.340366 4731 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.340380 4731 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.340395 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.340410 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: 
\"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.340424 4731 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.340442 4731 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.340458 4731 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.340474 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.340490 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.340505 4731 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.340520 4731 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" 
DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.340537 4731 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.340558 4731 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.340651 4731 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.340670 4731 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.340686 4731 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.340700 4731 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.340713 4731 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.340728 4731 
reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.340741 4731 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.340756 4731 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.340772 4731 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.340788 4731 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.340802 4731 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.340818 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.340833 4731 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.340846 4731 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.340860 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.340874 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.340887 4731 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.340899 4731 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.340913 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.340927 4731 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on 
node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.340940 4731 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.340955 4731 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.340970 4731 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.340984 4731 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.340999 4731 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.341013 4731 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.340929 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). 
InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.341045 4731 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.341059 4731 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.341075 4731 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.341091 4731 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.341107 4731 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.341120 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.341134 4731 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Nov 29 
07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.341147 4731 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.341160 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.341173 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.341186 4731 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.341198 4731 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.341210 4731 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.341224 4731 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.341236 4731 
reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.341248 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.341261 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.341274 4731 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.341286 4731 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.341300 4731 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.341315 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.341281 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.341547 4731 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.341681 4731 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.341759 4731 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.341778 4731 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.341813 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.341830 4731 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.341857 4731 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.341880 4731 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.341892 4731 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.341905 4731 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.341917 4731 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.341944 4731 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.341965 4731 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.341981 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: 
\"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.341995 4731 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.342008 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.342023 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.342036 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.342050 4731 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.342062 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.342075 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" 
DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.342088 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.342101 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.342114 4731 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.342125 4731 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.342138 4731 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.342153 4731 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.342164 4731 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.342176 4731 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" 
(UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.342189 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.342205 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.342255 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.342271 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.342286 4731 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.342300 4731 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.342313 4731 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.342326 4731 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.342345 4731 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.342358 4731 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.342372 4731 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.342386 4731 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.342401 4731 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.342414 4731 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 
07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.342427 4731 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.342441 4731 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.342455 4731 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.342468 4731 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.342481 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.342496 4731 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.342510 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.342525 4731 reconciler_common.go:293] "Volume detached 
for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.342539 4731 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.342553 4731 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.342637 4731 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.342653 4731 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.342668 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.342684 4731 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.342698 4731 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.342716 4731 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.342732 4731 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.342747 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.342760 4731 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.342775 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.342788 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.342802 4731 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.342816 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.343206 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.346081 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.346315 4731 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.347126 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.349413 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.351520 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.352151 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.357336 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.363936 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.364230 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.364583 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.364583 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.365496 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.366492 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.375058 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.375290 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 29 07:06:26 crc kubenswrapper[4731]: E1129 07:06:26.380725 4731 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 29 07:06:26 crc kubenswrapper[4731]: E1129 07:06:26.380753 4731 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 29 07:06:26 crc kubenswrapper[4731]: E1129 07:06:26.380770 4731 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:06:26 crc kubenswrapper[4731]: E1129 07:06:26.380847 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-29 07:06:26.880821317 +0000 UTC m=+25.771182420 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.380913 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.381324 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.382954 4731 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="1f488e951e0b022155c26f1b4b73363e8e5b82ab6f14e03f9812fe279ff21d36" exitCode=255 Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.383030 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"1f488e951e0b022155c26f1b4b73363e8e5b82ab6f14e03f9812fe279ff21d36"} Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.394410 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.401143 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.402949 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.404275 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.406999 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:06:26 crc kubenswrapper[4731]: E1129 07:06:26.415193 4731 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 29 07:06:26 crc kubenswrapper[4731]: E1129 07:06:26.430060 4731 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 29 07:06:26 crc kubenswrapper[4731]: E1129 07:06:26.430129 4731 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 29 07:06:26 crc kubenswrapper[4731]: E1129 07:06:26.430152 4731 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:06:26 crc kubenswrapper[4731]: E1129 07:06:26.430257 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-29 07:06:26.930215156 +0000 UTC m=+25.820576259 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.438898 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.445989 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.446089 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.446153 4731 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.446165 4731 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.446177 4731 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.446187 4731 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.446199 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.446210 4731 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.446219 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.446229 4731 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.446239 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.446249 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.446260 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.446268 4731 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.446279 4731 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.446289 4731 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.446301 4731 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.446311 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.446321 4731 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.446331 4731 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.446341 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.446353 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.446365 4731 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.446376 4731 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.446387 4731 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.446398 4731 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.446408 4731 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.446418 4731 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.446430 4731 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.446440 4731 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.446450 4731 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 
07:06:26.446494 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 29 07:06:26 crc kubenswrapper[4731]: E1129 07:06:26.446609 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:06:27.44658515 +0000 UTC m=+26.336946253 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.458024 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.458263 4731 scope.go:117] "RemoveContainer" containerID="1f488e951e0b022155c26f1b4b73363e8e5b82ab6f14e03f9812fe279ff21d36" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.470029 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"174eea76-5d67-4e10-9a17-b4efc32676b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a567cf0e6a828fe031138f9f7874d01fe6e034cb814a3c2116c4918ddab71c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6101f1cec3dc38ec587fb109d42b09c353cf5ed6d89c9a2b799eb456b821cf58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5af4e6864674fc088ce1e153b29b66072dc3aae9b7c8c2c5dd3862a505dbac6a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86bdecb59f656fef2662f8fc78d427a4a9816fc211eedca957f20b3103f63a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.490522 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.509589 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.532931 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.549737 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.560525 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.574665 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.667135 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 29 07:06:26 crc kubenswrapper[4731]: W1129 07:06:26.678670 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-b12013cb2963ce40c26069962e3213dfd0996885e7763cdd4cf12d72349b864a WatchSource:0}: Error finding container b12013cb2963ce40c26069962e3213dfd0996885e7763cdd4cf12d72349b864a: Status 404 returned error can't find the container with id b12013cb2963ce40c26069962e3213dfd0996885e7763cdd4cf12d72349b864a Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.679510 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.849217 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.849282 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:06:26 crc kubenswrapper[4731]: E1129 07:06:26.849402 4731 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 29 07:06:26 crc kubenswrapper[4731]: E1129 
07:06:26.849456 4731 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 29 07:06:26 crc kubenswrapper[4731]: E1129 07:06:26.849474 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-29 07:06:27.849455904 +0000 UTC m=+26.739817007 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 29 07:06:26 crc kubenswrapper[4731]: E1129 07:06:26.849612 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-29 07:06:27.849588548 +0000 UTC m=+26.739949651 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.950437 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:06:26 crc kubenswrapper[4731]: I1129 07:06:26.950527 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:06:26 crc kubenswrapper[4731]: E1129 07:06:26.950666 4731 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 29 07:06:26 crc kubenswrapper[4731]: E1129 07:06:26.950701 4731 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 29 07:06:26 crc kubenswrapper[4731]: E1129 07:06:26.950725 4731 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:06:26 crc kubenswrapper[4731]: E1129 07:06:26.950862 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-29 07:06:27.950824066 +0000 UTC m=+26.841185169 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:06:26 crc kubenswrapper[4731]: E1129 07:06:26.950860 4731 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 29 07:06:26 crc kubenswrapper[4731]: E1129 07:06:26.950917 4731 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 29 07:06:26 crc kubenswrapper[4731]: E1129 07:06:26.950935 4731 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:06:26 crc kubenswrapper[4731]: E1129 07:06:26.951032 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2025-11-29 07:06:27.951002042 +0000 UTC m=+26.841363315 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:06:26 crc kubenswrapper[4731]: W1129 07:06:26.977157 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-4cb438e1c24b32aad592708798844bfd9d20138e18008d54d6ebe3236a5e9158 WatchSource:0}: Error finding container 4cb438e1c24b32aad592708798844bfd9d20138e18008d54d6ebe3236a5e9158: Status 404 returned error can't find the container with id 4cb438e1c24b32aad592708798844bfd9d20138e18008d54d6ebe3236a5e9158 Nov 29 07:06:27 crc kubenswrapper[4731]: I1129 07:06:27.385757 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"ea9d7960c59f71d1f4141843ab22ff9910f442c6a92fd496e9923a11fd19578b"} Nov 29 07:06:27 crc kubenswrapper[4731]: I1129 07:06:27.385816 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"b12013cb2963ce40c26069962e3213dfd0996885e7763cdd4cf12d72349b864a"} Nov 29 07:06:27 crc kubenswrapper[4731]: I1129 07:06:27.387486 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"6830f693b439b32e14b4e825fc51d03dddc4834afc7f2fb9c403ebdc706acb7a"} Nov 29 07:06:27 crc kubenswrapper[4731]: I1129 07:06:27.387523 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"88a9d421f482d0657483d20775c84db06466cec17e9db8aafd49d192ba7b6656"} Nov 29 07:06:27 crc kubenswrapper[4731]: I1129 07:06:27.399632 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 29 07:06:27 crc kubenswrapper[4731]: I1129 07:06:27.408086 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ea4dcd3b0a9fda14855ec9e54caa59fbbd8953c958ad8d0abc6ad50d8a9143e5"} Nov 29 07:06:27 crc kubenswrapper[4731]: I1129 07:06:27.408257 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:06:27 crc kubenswrapper[4731]: I1129 07:06:27.409733 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"4cb438e1c24b32aad592708798844bfd9d20138e18008d54d6ebe3236a5e9158"} Nov 29 07:06:27 crc kubenswrapper[4731]: I1129 07:06:27.410553 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:06:27 crc kubenswrapper[4731]: I1129 07:06:27.411655 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5198aca-b43e-4b59-8ec1-15b9d1cd4f4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0b6a87bb787bc87ec1a2b21918e754bda6b0f197bfdae20b010e8464cb9e70d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":
\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1938ff140a7af83ef53896ed4482dd3cf0a5429675ca28291de0915b8da27e81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b19f3d5d17a1f3552b659627294f3a47d667e2da2d3ee71e44fc876614f2f5a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f488e951e0b022155c26f1b4b73363e8e5b82ab6f14e03f9812fe279ff21d36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserv
er-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f488e951e0b022155c26f1b4b73363e8e5b82ab6f14e03f9812fe279ff21d36\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1129 07:06:25.309042 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1031981952/tls.crt::/tmp/serving-cert-1031981952/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764399966\\\\\\\\\\\\\\\" (2025-11-29 07:06:06 +0000 UTC to 2025-12-29 07:06:07 +0000 UTC (now=2025-11-29 07:06:25.308983691 +0000 UTC))\\\\\\\"\\\\nI1129 07:06:25.309145 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1129 07:06:25.309167 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1129 07:06:25.309190 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764399977\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764399977\\\\\\\\\\\\\\\" (2025-11-29 06:06:17 +0000 UTC to 2026-11-29 06:06:17 +0000 UTC (now=2025-11-29 07:06:25.309162626 +0000 UTC))\\\\\\\"\\\\nI1129 07:06:25.309217 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1129 07:06:25.309274 1 genericapiserver.go:683] [graceful-termination] 
waiting for shutdown to be initiated\\\\nI1129 07:06:25.309309 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1031981952/tls.crt::/tmp/serving-cert-1031981952/tls.key\\\\\\\"\\\\nI1129 07:06:25.309020 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1129 07:06:25.309457 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1129 07:06:25.309507 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1129 07:06:25.311335 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1277bda1e922e493f1c36b64a4584458ab41cee4b1930d2d6409a585be9e3b38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b7d71882ecd45557e02dfe61f572da923732714cd4480848340fbe72dabcd8\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34b7d71882ecd45557e02dfe61f572da923732714cd4480848340fbe72dabcd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 07:06:27 crc kubenswrapper[4731]: I1129 07:06:27.493606 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"174eea76-5d67-4e10-9a17-b4efc32676b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a567cf0e6a828fe031138f9f7874d01fe6e034cb814a3c2116c4918ddab71c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6101f1cec3dc38ec587fb109d42b09c353cf5ed6d89c9a2b799eb456b821cf58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5af4e6864674fc088ce1e153b29b66072dc3aae9b7c8c2c5dd3862a505dbac6a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86bdecb59f656fef2662f8fc78d427a4a9816fc211eedca957f20b3103f63a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 07:06:27 crc kubenswrapper[4731]: I1129 07:06:27.494086 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:06:27 crc kubenswrapper[4731]: E1129 07:06:27.494292 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:06:29.494242605 +0000 UTC m=+28.384603848 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:06:27 crc kubenswrapper[4731]: I1129 07:06:27.529745 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea9d7960c59f71d1f4141843ab22ff9910f442c6a92fd496e9923a11fd19578b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes
\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 07:06:27 crc kubenswrapper[4731]: I1129 07:06:27.543735 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 07:06:27 crc kubenswrapper[4731]: I1129 07:06:27.556823 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 07:06:27 crc kubenswrapper[4731]: I1129 07:06:27.643061 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 07:06:27 crc kubenswrapper[4731]: I1129 07:06:27.659001 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Nov 29 07:06:27 crc kubenswrapper[4731]: I1129 07:06:27.671067 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 07:06:27 crc kubenswrapper[4731]: I1129 07:06:27.693578 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 07:06:27 crc kubenswrapper[4731]: I1129 07:06:27.749106 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 07:06:27 crc kubenswrapper[4731]: I1129 07:06:27.777020 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 07:06:27 crc kubenswrapper[4731]: I1129 07:06:27.791712 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Nov 29 07:06:27 crc kubenswrapper[4731]: I1129 07:06:27.806248 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:06:27 crc kubenswrapper[4731]: I1129 07:06:27.806351 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:06:27 crc kubenswrapper[4731]: E1129 07:06:27.806422 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:06:27 crc kubenswrapper[4731]: I1129 07:06:27.806490 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:06:27 crc kubenswrapper[4731]: E1129 07:06:27.806526 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:06:27 crc kubenswrapper[4731]: I1129 07:06:27.806596 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 07:06:27 crc kubenswrapper[4731]: E1129 07:06:27.806701 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:06:27 crc kubenswrapper[4731]: I1129 07:06:27.810600 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Nov 29 07:06:27 crc kubenswrapper[4731]: I1129 07:06:27.811231 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Nov 29 07:06:27 crc kubenswrapper[4731]: I1129 07:06:27.812110 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Nov 29 07:06:27 crc kubenswrapper[4731]: I1129 07:06:27.812948 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Nov 29 07:06:27 crc kubenswrapper[4731]: I1129 07:06:27.814716 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Nov 29 07:06:27 crc kubenswrapper[4731]: I1129 07:06:27.815296 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Nov 29 07:06:27 crc kubenswrapper[4731]: I1129 07:06:27.815990 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Nov 29 07:06:27 crc kubenswrapper[4731]: I1129 07:06:27.816999 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Nov 29 07:06:27 crc kubenswrapper[4731]: I1129 07:06:27.817802 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Nov 29 07:06:27 crc kubenswrapper[4731]: I1129 07:06:27.818813 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Nov 29 07:06:27 crc kubenswrapper[4731]: I1129 07:06:27.819393 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Nov 29 07:06:27 crc kubenswrapper[4731]: I1129 07:06:27.821010 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Nov 29 07:06:27 crc kubenswrapper[4731]: I1129 07:06:27.822009 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Nov 29 07:06:27 crc kubenswrapper[4731]: I1129 07:06:27.822662 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Nov 29 07:06:27 crc kubenswrapper[4731]: I1129 07:06:27.823966 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5198aca-b43e-4b59-8ec1-15b9d1cd4f4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0b6a87bb787bc87ec1a2b21918e754bda6b0f197bfdae20b010e8464cb9e70d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\
\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1938ff140a7af83ef53896ed4482dd3cf0a5429675ca28291de0915b8da27e81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b19f3d5d17a1f3552b659627294f3a47d667e2da2d3ee71e44fc876614f2f5a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea4dcd3b0a9fda14855ec9e54caa59fbbd8953c958ad8d0abc6ad50d8a9143e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserve
r-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f488e951e0b022155c26f1b4b73363e8e5b82ab6f14e03f9812fe279ff21d36\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1129 07:06:25.309042 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1031981952/tls.crt::/tmp/serving-cert-1031981952/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764399966\\\\\\\\\\\\\\\" (2025-11-29 07:06:06 +0000 UTC to 2025-12-29 07:06:07 +0000 UTC (now=2025-11-29 07:06:25.308983691 +0000 UTC))\\\\\\\"\\\\nI1129 07:06:25.309145 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1129 07:06:25.309167 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1129 07:06:25.309190 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764399977\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764399977\\\\\\\\\\\\\\\" (2025-11-29 06:06:17 +0000 UTC to 2026-11-29 06:06:17 +0000 UTC (now=2025-11-29 07:06:25.309162626 +0000 UTC))\\\\\\\"\\\\nI1129 07:06:25.309217 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1129 07:06:25.309274 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1129 07:06:25.309309 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1031981952/tls.crt::/tmp/serving-cert-1031981952/tls.key\\\\\\\"\\\\nI1129 07:06:25.309020 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1129 07:06:25.309457 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1129 07:06:25.309507 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1129 07:06:25.311335 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1277bda1e922e493f1c36b64a4584458ab41cee4b1930d2d6409a585be9e3b38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b7d71882ecd45557e02dfe61f
572da923732714cd4480848340fbe72dabcd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34b7d71882ecd45557e02dfe61f572da923732714cd4480848340fbe72dabcd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 07:06:27 crc kubenswrapper[4731]: I1129 07:06:27.824120 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Nov 29 07:06:27 crc kubenswrapper[4731]: I1129 07:06:27.824694 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Nov 29 07:06:27 crc kubenswrapper[4731]: I1129 07:06:27.825807 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" 
path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Nov 29 07:06:27 crc kubenswrapper[4731]: I1129 07:06:27.826249 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Nov 29 07:06:27 crc kubenswrapper[4731]: I1129 07:06:27.826872 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Nov 29 07:06:27 crc kubenswrapper[4731]: I1129 07:06:27.828087 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Nov 29 07:06:27 crc kubenswrapper[4731]: I1129 07:06:27.828527 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Nov 29 07:06:27 crc kubenswrapper[4731]: I1129 07:06:27.832790 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Nov 29 07:06:27 crc kubenswrapper[4731]: I1129 07:06:27.833234 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Nov 29 07:06:27 crc kubenswrapper[4731]: I1129 07:06:27.834382 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Nov 29 07:06:27 crc kubenswrapper[4731]: I1129 07:06:27.834987 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" 
path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Nov 29 07:06:27 crc kubenswrapper[4731]: I1129 07:06:27.835690 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Nov 29 07:06:27 crc kubenswrapper[4731]: I1129 07:06:27.837090 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Nov 29 07:06:27 crc kubenswrapper[4731]: I1129 07:06:27.837591 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Nov 29 07:06:27 crc kubenswrapper[4731]: I1129 07:06:27.837861 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"174eea76-5d67-4e10-9a17-b4efc32676b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a567cf0e6a828fe031138f9f7874d01fe6e034cb814a3c2116c4918ddab71c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6101f1cec3dc38ec587fb109d42b09c353cf5ed6d89c9a2b799eb456b821cf58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5af4e6864674fc088ce1e153b29b66072dc3aae9b7c8c2c5dd3862a505dbac6a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86bdecb59f656fef2662f8fc78d427a4a9816fc211eedca957f20b3103f63a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 07:06:27 crc kubenswrapper[4731]: I1129 07:06:27.838534 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Nov 29 07:06:27 crc kubenswrapper[4731]: I1129 07:06:27.839069 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Nov 29 07:06:27 crc kubenswrapper[4731]: I1129 07:06:27.839995 4731 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Nov 29 07:06:27 crc kubenswrapper[4731]: I1129 07:06:27.840123 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Nov 29 07:06:27 crc kubenswrapper[4731]: I1129 07:06:27.841840 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Nov 29 07:06:27 crc kubenswrapper[4731]: I1129 07:06:27.842742 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Nov 29 07:06:27 crc kubenswrapper[4731]: I1129 07:06:27.843192 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Nov 29 07:06:27 crc kubenswrapper[4731]: I1129 07:06:27.844827 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Nov 29 07:06:27 crc kubenswrapper[4731]: I1129 07:06:27.845489 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Nov 29 07:06:27 crc kubenswrapper[4731]: I1129 07:06:27.846505 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Nov 29 07:06:27 crc kubenswrapper[4731]: I1129 07:06:27.847231 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Nov 29 07:06:27 crc kubenswrapper[4731]: I1129 07:06:27.848285 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Nov 29 07:06:27 crc kubenswrapper[4731]: I1129 07:06:27.848791 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Nov 29 07:06:27 crc kubenswrapper[4731]: I1129 07:06:27.849902 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Nov 29 07:06:27 crc kubenswrapper[4731]: I1129 07:06:27.850496 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Nov 29 07:06:27 crc kubenswrapper[4731]: I1129 07:06:27.851222 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea9d7960c59f71d1f4141843ab22ff9910f442c6a92fd496e9923a11fd19578b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCou
nt\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 07:06:27 crc kubenswrapper[4731]: I1129 07:06:27.851495 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Nov 29 07:06:27 crc kubenswrapper[4731]: I1129 07:06:27.852173 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Nov 29 07:06:27 crc kubenswrapper[4731]: I1129 07:06:27.853077 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Nov 29 07:06:27 crc kubenswrapper[4731]: I1129 07:06:27.853630 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Nov 29 07:06:27 crc kubenswrapper[4731]: I1129 07:06:27.854692 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Nov 29 07:06:27 crc kubenswrapper[4731]: I1129 07:06:27.855223 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Nov 29 07:06:27 crc kubenswrapper[4731]: I1129 07:06:27.856084 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Nov 29 07:06:27 crc kubenswrapper[4731]: I1129 07:06:27.856549 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Nov 29 07:06:27 crc kubenswrapper[4731]: I1129 07:06:27.857398 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Nov 29 07:06:27 crc kubenswrapper[4731]: I1129 07:06:27.858358 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Nov 29 07:06:27 crc kubenswrapper[4731]: I1129 07:06:27.859407 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Nov 29 07:06:27 crc kubenswrapper[4731]: I1129 07:06:27.949809 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:06:27 crc kubenswrapper[4731]: I1129 07:06:27.949886 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:06:27 crc kubenswrapper[4731]: E1129 07:06:27.949988 4731 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 29 07:06:27 crc kubenswrapper[4731]: E1129 07:06:27.950021 4731 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 29 07:06:27 crc kubenswrapper[4731]: E1129 07:06:27.950084 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-29 07:06:29.950058121 +0000 UTC m=+28.840419224 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 29 07:06:27 crc kubenswrapper[4731]: E1129 07:06:27.950127 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2025-11-29 07:06:29.950108773 +0000 UTC m=+28.840469876 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 29 07:06:28 crc kubenswrapper[4731]: I1129 07:06:28.050591 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:06:28 crc kubenswrapper[4731]: I1129 07:06:28.050650 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:06:28 crc kubenswrapper[4731]: E1129 07:06:28.050842 4731 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 29 07:06:28 crc kubenswrapper[4731]: E1129 07:06:28.050864 4731 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 29 07:06:28 crc kubenswrapper[4731]: E1129 07:06:28.050901 4731 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod 
openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:06:28 crc kubenswrapper[4731]: E1129 07:06:28.050973 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-29 07:06:30.05095536 +0000 UTC m=+28.941316463 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:06:28 crc kubenswrapper[4731]: E1129 07:06:28.050985 4731 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 29 07:06:28 crc kubenswrapper[4731]: E1129 07:06:28.051039 4731 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 29 07:06:28 crc kubenswrapper[4731]: E1129 07:06:28.051055 4731 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:06:28 crc kubenswrapper[4731]: E1129 07:06:28.051143 4731 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-29 07:06:30.051116125 +0000 UTC m=+28.941477408 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:06:28 crc kubenswrapper[4731]: I1129 07:06:28.364876 4731 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Nov 29 07:06:28 crc kubenswrapper[4731]: I1129 07:06:28.522522 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"b7e77534f53f3d0ba9457f1e42e081f8fa9ddeff238db875686b42dd35fca480"} Nov 29 07:06:28 crc kubenswrapper[4731]: I1129 07:06:28.537620 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:06:28 crc kubenswrapper[4731]: I1129 07:06:28.956095 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5198aca-b43e-4b59-8ec1-15b9d1cd4f4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0b6a87bb787bc87ec1a2b21918e754bda6b0f197bfdae20b010e8464cb9e70d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\
\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1938ff140a7af83ef53896ed4482dd3cf0a5429675ca28291de0915b8da27e81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b19f3d5d17a1f3552b659627294f3a47d667e2da2d3ee71e44fc876614f2f5a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea4dcd3b0a9fda14855ec9e54caa59fbbd8953c958ad8d0abc6ad50d8a9143e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserve
r-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f488e951e0b022155c26f1b4b73363e8e5b82ab6f14e03f9812fe279ff21d36\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1129 07:06:25.309042 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1031981952/tls.crt::/tmp/serving-cert-1031981952/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764399966\\\\\\\\\\\\\\\" (2025-11-29 07:06:06 +0000 UTC to 2025-12-29 07:06:07 +0000 UTC (now=2025-11-29 07:06:25.308983691 +0000 UTC))\\\\\\\"\\\\nI1129 07:06:25.309145 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1129 07:06:25.309167 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1129 07:06:25.309190 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764399977\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764399977\\\\\\\\\\\\\\\" (2025-11-29 06:06:17 +0000 UTC to 2026-11-29 06:06:17 +0000 UTC (now=2025-11-29 07:06:25.309162626 +0000 UTC))\\\\\\\"\\\\nI1129 07:06:25.309217 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1129 07:06:25.309274 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1129 07:06:25.309309 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1031981952/tls.crt::/tmp/serving-cert-1031981952/tls.key\\\\\\\"\\\\nI1129 07:06:25.309020 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1129 07:06:25.309457 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1129 07:06:25.309507 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1129 07:06:25.311335 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1277bda1e922e493f1c36b64a4584458ab41cee4b1930d2d6409a585be9e3b38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b7d71882ecd45557e02dfe61f
572da923732714cd4480848340fbe72dabcd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34b7d71882ecd45557e02dfe61f572da923732714cd4480848340fbe72dabcd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:28Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:29 crc kubenswrapper[4731]: I1129 07:06:29.470658 4731 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Nov 29 07:06:29 crc kubenswrapper[4731]: I1129 07:06:29.481816 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"174eea76-5d67-4e10-9a17-b4efc32676b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a567cf0e6a828fe031138f9f7874d01fe6e034cb814a3c2116c4918ddab71c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6101f1cec3dc38ec587fb109d42b09c353cf5ed6d89c9a2b799eb456b821cf58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5af4e6864674fc088ce1e153b29b66072dc3aae9b7c8c2c5dd3862a505dbac6a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86bdecb59f656fef2662f8fc78d427a4a9816fc211eedca957f20b3103f63a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:29Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:29 crc kubenswrapper[4731]: I1129 07:06:29.501828 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:06:29 crc kubenswrapper[4731]: E1129 07:06:29.502038 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:06:33.502021956 +0000 UTC m=+32.392383059 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:06:29 crc kubenswrapper[4731]: I1129 07:06:29.702638 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea9d7960c59f71d1f4141843ab22ff9910f442c6a92fd496e9923a11fd19578b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes
\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:29Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:29 crc kubenswrapper[4731]: I1129 07:06:29.759019 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:29Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:29 crc kubenswrapper[4731]: I1129 07:06:29.808633 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:06:29 crc kubenswrapper[4731]: E1129 07:06:29.808931 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:06:29 crc kubenswrapper[4731]: I1129 07:06:29.809081 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:06:29 crc kubenswrapper[4731]: E1129 07:06:29.809210 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:06:29 crc kubenswrapper[4731]: I1129 07:06:29.809338 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:06:29 crc kubenswrapper[4731]: E1129 07:06:29.809462 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:06:30 crc kubenswrapper[4731]: I1129 07:06:30.005345 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:06:30 crc kubenswrapper[4731]: I1129 07:06:30.005432 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:06:30 crc kubenswrapper[4731]: E1129 07:06:30.005531 4731 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 29 07:06:30 crc kubenswrapper[4731]: E1129 07:06:30.005536 4731 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 29 07:06:30 crc kubenswrapper[4731]: E1129 07:06:30.005628 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-29 07:06:34.005606304 +0000 UTC m=+32.895967407 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 29 07:06:30 crc kubenswrapper[4731]: E1129 07:06:30.005648 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-29 07:06:34.005640075 +0000 UTC m=+32.896001178 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 29 07:06:30 crc kubenswrapper[4731]: I1129 07:06:30.013780 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:30Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:30 crc kubenswrapper[4731]: I1129 07:06:30.071156 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:30Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:30 crc kubenswrapper[4731]: I1129 07:06:30.091470 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:30Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:30 crc kubenswrapper[4731]: I1129 07:06:30.106420 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e77534f53f3d0ba9457f1e42e081f8fa9ddeff238db875686b42dd35fca480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6830f693b439b32e14b4e825fc51d03dddc4834afc7f2fb9c403ebdc706acb7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:30Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:30 crc kubenswrapper[4731]: I1129 07:06:30.106530 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:06:30 crc kubenswrapper[4731]: I1129 07:06:30.106606 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:06:30 crc kubenswrapper[4731]: E1129 07:06:30.106785 
4731 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 29 07:06:30 crc kubenswrapper[4731]: E1129 07:06:30.106806 4731 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 29 07:06:30 crc kubenswrapper[4731]: E1129 07:06:30.106820 4731 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:06:30 crc kubenswrapper[4731]: E1129 07:06:30.106877 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-29 07:06:34.106856753 +0000 UTC m=+32.997217856 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:06:30 crc kubenswrapper[4731]: E1129 07:06:30.107006 4731 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 29 07:06:30 crc kubenswrapper[4731]: E1129 07:06:30.107035 4731 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 29 07:06:30 crc kubenswrapper[4731]: E1129 07:06:30.107051 4731 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:06:30 crc kubenswrapper[4731]: E1129 07:06:30.107123 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-29 07:06:34.10709815 +0000 UTC m=+32.997459503 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:06:30 crc kubenswrapper[4731]: I1129 07:06:30.131226 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:30Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:30 crc kubenswrapper[4731]: I1129 07:06:30.152961 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:30Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:30 crc kubenswrapper[4731]: I1129 07:06:30.181432 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e77534f53f3d0ba9457f1e42e081f8fa9ddeff238db875686b42dd35fca480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6830f693b439b32e14b4e825fc51d03dddc4834afc7f2fb9c403ebdc706acb7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:30Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:30 crc kubenswrapper[4731]: I1129 07:06:30.207156 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with 
unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:30Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:30 crc kubenswrapper[4731]: I1129 07:06:30.506600 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:30Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:30 crc kubenswrapper[4731]: I1129 07:06:30.561804 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"da8cd4c4826d5a431134fff8cd78b168bfededf913cc47058413e38aadd335ec"} Nov 29 07:06:30 crc kubenswrapper[4731]: I1129 07:06:30.645528 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"174eea76-5d67-4e10-9a17-b4efc32676b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a567cf0e6a828fe031138f9f7874d01fe6e034cb814a3c2116c4918ddab71c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6101f1cec3dc38ec587fb109d42b09c353cf5ed6d89c9a2b799eb456b821cf58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5af4e6864674fc088ce1e153b29b66072dc3aae9b7c8c2c5dd3862a505dbac6a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86bdecb59f656fef2662f8fc78d427a4a9816fc211eedca957f20b3103f63a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:30Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:30 crc kubenswrapper[4731]: I1129 07:06:30.950459 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea9d7960c59f71d1f4141843ab22ff9910f442c6a92fd496e9923a11fd19578b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:30Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:30 crc kubenswrapper[4731]: I1129 07:06:30.959926 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-n6mtz"] Nov 29 07:06:30 crc kubenswrapper[4731]: I1129 07:06:30.960341 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-n6mtz" Nov 29 07:06:30 crc kubenswrapper[4731]: I1129 07:06:30.961045 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bbjx\" (UniqueName: \"kubernetes.io/projected/f9dc0aca-1039-4a30-a83e-48bd320d0eae-kube-api-access-2bbjx\") pod \"node-resolver-n6mtz\" (UID: \"f9dc0aca-1039-4a30-a83e-48bd320d0eae\") " pod="openshift-dns/node-resolver-n6mtz" Nov 29 07:06:30 crc kubenswrapper[4731]: I1129 07:06:30.961121 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/f9dc0aca-1039-4a30-a83e-48bd320d0eae-hosts-file\") pod \"node-resolver-n6mtz\" (UID: \"f9dc0aca-1039-4a30-a83e-48bd320d0eae\") " pod="openshift-dns/node-resolver-n6mtz" Nov 29 07:06:31 crc kubenswrapper[4731]: I1129 07:06:31.110058 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Nov 29 07:06:31 crc kubenswrapper[4731]: I1129 07:06:31.110084 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bbjx\" (UniqueName: \"kubernetes.io/projected/f9dc0aca-1039-4a30-a83e-48bd320d0eae-kube-api-access-2bbjx\") pod \"node-resolver-n6mtz\" (UID: \"f9dc0aca-1039-4a30-a83e-48bd320d0eae\") " pod="openshift-dns/node-resolver-n6mtz" Nov 29 07:06:31 crc kubenswrapper[4731]: I1129 07:06:31.110215 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: 
\"kubernetes.io/host-path/f9dc0aca-1039-4a30-a83e-48bd320d0eae-hosts-file\") pod \"node-resolver-n6mtz\" (UID: \"f9dc0aca-1039-4a30-a83e-48bd320d0eae\") " pod="openshift-dns/node-resolver-n6mtz" Nov 29 07:06:31 crc kubenswrapper[4731]: I1129 07:06:31.110278 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/f9dc0aca-1039-4a30-a83e-48bd320d0eae-hosts-file\") pod \"node-resolver-n6mtz\" (UID: \"f9dc0aca-1039-4a30-a83e-48bd320d0eae\") " pod="openshift-dns/node-resolver-n6mtz" Nov 29 07:06:31 crc kubenswrapper[4731]: W1129 07:06:31.110829 4731 reflector.go:561] object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": failed to list *v1.Secret: secrets "node-resolver-dockercfg-kz9s7" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-dns": no relationship found between node 'crc' and this object Nov 29 07:06:31 crc kubenswrapper[4731]: E1129 07:06:31.110891 4731 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"node-resolver-dockercfg-kz9s7\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"node-resolver-dockercfg-kz9s7\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-dns\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 29 07:06:31 crc kubenswrapper[4731]: I1129 07:06:31.111247 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Nov 29 07:06:31 crc kubenswrapper[4731]: I1129 07:06:31.507337 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bbjx\" (UniqueName: \"kubernetes.io/projected/f9dc0aca-1039-4a30-a83e-48bd320d0eae-kube-api-access-2bbjx\") pod \"node-resolver-n6mtz\" (UID: \"f9dc0aca-1039-4a30-a83e-48bd320d0eae\") " pod="openshift-dns/node-resolver-n6mtz" Nov 29 07:06:31 crc 
kubenswrapper[4731]: I1129 07:06:31.523501 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5198aca-b43e-4b59-8ec1-15b9d1cd4f4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0b6a87bb787bc87ec1a2b21918e754bda6b0f197bfdae20b010e8464cb9e70d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1938ff140a7af83ef53896ed4482dd3cf0a5429675ca28291de0915b8da27e81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b19f3d5d17a1f3552b659627294f3a47d667e2da2d3ee71e44fc876614f2f5a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea4dcd3b0a9fda14855ec9e54caa59fbbd8953c958ad8d0abc6ad50d8a9143e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f488e951e0b022155c26f1b4b73363e8e5b82ab6f14e03f9812fe279ff21d36\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1129 07:06:25.309042 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1031981952/tls.crt::/tmp/serving-cert-1031981952/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764399966\\\\\\\\\\\\\\\" (2025-11-29 07:06:06 +0000 UTC to 2025-12-29 
07:06:07 +0000 UTC (now=2025-11-29 07:06:25.308983691 +0000 UTC))\\\\\\\"\\\\nI1129 07:06:25.309145 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1129 07:06:25.309167 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1129 07:06:25.309190 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764399977\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764399977\\\\\\\\\\\\\\\" (2025-11-29 06:06:17 +0000 UTC to 2026-11-29 06:06:17 +0000 UTC (now=2025-11-29 07:06:25.309162626 +0000 UTC))\\\\\\\"\\\\nI1129 07:06:25.309217 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1129 07:06:25.309274 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1129 07:06:25.309309 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1031981952/tls.crt::/tmp/serving-cert-1031981952/tls.key\\\\\\\"\\\\nI1129 07:06:25.309020 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1129 07:06:25.309457 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1129 07:06:25.309507 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1129 07:06:25.311335 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1277bda1e922e493f1c36b64a4584458ab41cee4b1930d2d6409a585be9e3b38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b7d71882ecd45557e02dfe61f572da923732714cd4480848340fbe72dabcd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34b7d71882ecd45557e02dfe61f572da923732714cd4480848340fbe72dabcd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:31 crc kubenswrapper[4731]: I1129 07:06:31.779745 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:31 crc kubenswrapper[4731]: I1129 07:06:31.792329 4731 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:06:31 crc kubenswrapper[4731]: I1129 07:06:31.795085 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:31 crc kubenswrapper[4731]: I1129 07:06:31.795151 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:31 crc kubenswrapper[4731]: I1129 07:06:31.795169 
4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:31 crc kubenswrapper[4731]: I1129 07:06:31.795300 4731 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 29 07:06:31 crc kubenswrapper[4731]: I1129 07:06:31.806245 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:06:31 crc kubenswrapper[4731]: I1129 07:06:31.806284 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:06:31 crc kubenswrapper[4731]: E1129 07:06:31.806438 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:06:31 crc kubenswrapper[4731]: I1129 07:06:31.806721 4731 kubelet_node_status.go:115] "Node was previously registered" node="crc" Nov 29 07:06:31 crc kubenswrapper[4731]: E1129 07:06:31.806928 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:06:31 crc kubenswrapper[4731]: I1129 07:06:31.807058 4731 kubelet_node_status.go:79] "Successfully registered node" node="crc" Nov 29 07:06:31 crc kubenswrapper[4731]: I1129 07:06:31.807086 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:06:31 crc kubenswrapper[4731]: E1129 07:06:31.807145 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:06:31 crc kubenswrapper[4731]: I1129 07:06:31.808508 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:31 crc kubenswrapper[4731]: I1129 07:06:31.808544 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:31 crc kubenswrapper[4731]: I1129 07:06:31.808555 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:31 crc kubenswrapper[4731]: I1129 07:06:31.808591 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:31 crc kubenswrapper[4731]: I1129 07:06:31.808607 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:31Z","lastTransitionTime":"2025-11-29T07:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:06:31 crc kubenswrapper[4731]: I1129 07:06:31.824178 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:31 crc kubenswrapper[4731]: E1129 07:06:31.832204 4731 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:06:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:06:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:06:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:06:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5aaf0a18-6c01-4835-aaaa-2edfd1f90942\\\",\\\"systemUUID\\\":\\\"f3d115a6-d015-4b84-85ef-26fa0172b441\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:31 crc kubenswrapper[4731]: I1129 07:06:31.836706 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:31 crc kubenswrapper[4731]: I1129 07:06:31.836747 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:31 crc kubenswrapper[4731]: I1129 07:06:31.836757 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:31 crc kubenswrapper[4731]: I1129 07:06:31.836777 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:31 crc kubenswrapper[4731]: I1129 07:06:31.836789 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:31Z","lastTransitionTime":"2025-11-29T07:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:31 crc kubenswrapper[4731]: I1129 07:06:31.840738 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:31 crc kubenswrapper[4731]: E1129 07:06:31.851100 4731 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:06:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:06:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:06:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:06:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5aaf0a18-6c01-4835-aaaa-2edfd1f90942\\\",\\\"systemUUID\\\":\\\"f3d115a6-d015-4b84-85ef-26fa0172b441\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:31 crc kubenswrapper[4731]: I1129 07:06:31.855531 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:31 crc kubenswrapper[4731]: I1129 07:06:31.855604 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:31 crc kubenswrapper[4731]: I1129 07:06:31.855618 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:31 crc kubenswrapper[4731]: I1129 07:06:31.855644 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:31 crc kubenswrapper[4731]: I1129 07:06:31.855661 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:31Z","lastTransitionTime":"2025-11-29T07:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:31 crc kubenswrapper[4731]: I1129 07:06:31.858275 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e77534f53f3d0ba9457f1e42e081f8fa9ddeff238db875686b42dd35fca480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6830f693b439b32e14b4e825fc51d03dddc4834afc7f2fb9c403ebdc706acb7a\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:31 crc kubenswrapper[4731]: E1129 07:06:31.873360 4731 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:06:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:06:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:06:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:06:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5aaf0a18-6c01-4835-aaaa-2edfd1f90942\\\",\\\"systemUUID\\\":\\\"f3d115a6-d015-4b84-85ef-26fa0172b441\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:31 crc kubenswrapper[4731]: I1129 07:06:31.874680 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da8cd4c4826d5a431134fff8cd78b168bfededf913cc47058413e38aadd335ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\
":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:31 crc kubenswrapper[4731]: I1129 07:06:31.878608 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:31 crc kubenswrapper[4731]: I1129 07:06:31.878649 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:31 crc kubenswrapper[4731]: I1129 07:06:31.878671 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:31 crc kubenswrapper[4731]: I1129 07:06:31.878690 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:31 crc kubenswrapper[4731]: I1129 07:06:31.878703 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:31Z","lastTransitionTime":"2025-11-29T07:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:31 crc kubenswrapper[4731]: I1129 07:06:31.893554 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5198aca-b43e-4b59-8ec1-15b9d1cd4f4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0b6a87bb787bc87ec1a2b21918e754bda6b0f197bfdae20b010e8464cb9e70d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1938ff140a7af83ef53896ed4482dd3cf0a5429675ca28291de0915b8da27e81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b19f3d5d17a1f3552b659627294f3a47d667e2da2d3ee71e44fc876614f2f5a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea4dcd3b0a9fda14855ec9e54caa59fbbd8953c958ad8d0abc6ad50d8a9143e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f488e951e0b022155c26f1b4b73363e8e5b82ab6f14e03f9812fe279ff21d36\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1129 07:06:25.309042 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1031981952/tls.crt::/tmp/serving-cert-1031981952/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764399966\\\\\\\\\\\\\\\" (2025-11-29 07:06:06 +0000 UTC to 2025-12-29 
07:06:07 +0000 UTC (now=2025-11-29 07:06:25.308983691 +0000 UTC))\\\\\\\"\\\\nI1129 07:06:25.309145 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1129 07:06:25.309167 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1129 07:06:25.309190 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764399977\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764399977\\\\\\\\\\\\\\\" (2025-11-29 06:06:17 +0000 UTC to 2026-11-29 06:06:17 +0000 UTC (now=2025-11-29 07:06:25.309162626 +0000 UTC))\\\\\\\"\\\\nI1129 07:06:25.309217 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1129 07:06:25.309274 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1129 07:06:25.309309 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1031981952/tls.crt::/tmp/serving-cert-1031981952/tls.key\\\\\\\"\\\\nI1129 07:06:25.309020 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1129 07:06:25.309457 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1129 07:06:25.309507 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1129 07:06:25.311335 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1277bda1e922e493f1c36b64a4584458ab41cee4b1930d2d6409a585be9e3b38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b7d71882ecd45557e02dfe61f572da923732714cd4480848340fbe72dabcd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34b7d71882ecd45557e02dfe61f572da923732714cd4480848340fbe72dabcd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:31 crc kubenswrapper[4731]: E1129 07:06:31.894707 4731 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:06:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:06:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:31Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:06:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:06:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5aaf0a18-6c01-4835-aaaa-2edfd1f90942\\\",\\\"systemUUID\\\":\\\"f3d115a6-d015-4b84-85ef-26fa0172b441\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:31 crc kubenswrapper[4731]: I1129 07:06:31.901050 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:31 crc kubenswrapper[4731]: I1129 07:06:31.901114 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:31 crc kubenswrapper[4731]: I1129 07:06:31.901126 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:31 crc kubenswrapper[4731]: I1129 07:06:31.901144 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:31 crc kubenswrapper[4731]: I1129 07:06:31.901155 4731 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:31Z","lastTransitionTime":"2025-11-29T07:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:06:31 crc kubenswrapper[4731]: I1129 07:06:31.913163 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"174eea76-5d67-4e10-9a17-b4efc32676b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a567cf0e6a828fe031138f9f7874d01fe6e034cb814a3c2116c4918ddab71c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6101f1cec3dc38ec587fb109d42b09c353cf5ed6d89c9a2b799eb456b821cf58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5af4e6864674fc088ce1e153b29b66072dc3aae9b7c8c2c5dd3862a505dbac6a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://b86bdecb59f656fef2662f8fc78d427a4a9816fc211eedca957f20b3103f63a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:31 crc kubenswrapper[4731]: E1129 07:06:31.916430 4731 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:06:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:06:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:06:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:06:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5aaf0a18-6c01-4835-aaaa-2edfd1f90942\\\",\\\"systemUUID\\\":\\\"f3d115a6-d015-4b84-85ef-26fa0172b441\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:31 crc kubenswrapper[4731]: E1129 07:06:31.916541 4731 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 29 07:06:31 crc kubenswrapper[4731]: I1129 07:06:31.918625 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:31 crc kubenswrapper[4731]: I1129 07:06:31.918681 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:31 crc kubenswrapper[4731]: I1129 07:06:31.918694 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:31 crc kubenswrapper[4731]: I1129 07:06:31.918715 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:31 crc kubenswrapper[4731]: I1129 07:06:31.918731 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:31Z","lastTransitionTime":"2025-11-29T07:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:31 crc kubenswrapper[4731]: I1129 07:06:31.934896 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea9d7960c59f71d1f4141843ab22ff9910f442c6a92fd496e9923a11fd19578b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:31 crc kubenswrapper[4731]: I1129 07:06:31.948931 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n6mtz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dc0aca-1039-4a30-a83e-48bd320d0eae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2bbjx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n6mtz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:31 crc kubenswrapper[4731]: I1129 07:06:31.967646 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5198aca-b43e-4b59-8ec1-15b9d1cd4f4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0b6a87bb787bc87ec1a2b21918e754bda6b0f197bfdae20b010e8464cb9e70d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1938ff140a7af83ef53896ed4482dd3cf0a5429675ca28291de0915b8da27e81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b19f3d5d17a1f3552b659627294f3a47d667e2da2d3ee71e44fc876614f2f5a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea4dcd3b0a9fda14855ec9e54caa59fbbd8953c958ad8d0abc6ad50d8a9143e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f488e951e0b022155c26f1b4b73363e8e5b82ab6f14e03f9812fe279ff21d36\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1129 07:06:25.309042 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1031981952/tls.crt::/tmp/serving-cert-1031981952/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764399966\\\\\\\\\\\\\\\" (2025-11-29 07:06:06 +0000 UTC to 2025-12-29 07:06:07 +0000 UTC (now=2025-11-29 07:06:25.308983691 +0000 UTC))\\\\\\\"\\\\nI1129 07:06:25.309145 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1129 07:06:25.309167 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1129 07:06:25.309190 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764399977\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764399977\\\\\\\\\\\\\\\" (2025-11-29 06:06:17 +0000 UTC to 2026-11-29 06:06:17 +0000 UTC (now=2025-11-29 07:06:25.309162626 +0000 UTC))\\\\\\\"\\\\nI1129 07:06:25.309217 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1129 07:06:25.309274 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1129 07:06:25.309309 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1031981952/tls.crt::/tmp/serving-cert-1031981952/tls.key\\\\\\\"\\\\nI1129 07:06:25.309020 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1129 07:06:25.309457 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1129 07:06:25.309507 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1129 07:06:25.311335 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1277bda1e922e493f1c36b64a4584458ab41cee4b1930d2d6409a585be9e3b38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b7d71882ecd45557e02dfe61f
572da923732714cd4480848340fbe72dabcd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34b7d71882ecd45557e02dfe61f572da923732714cd4480848340fbe72dabcd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:31 crc kubenswrapper[4731]: I1129 07:06:31.986766 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"174eea76-5d67-4e10-9a17-b4efc32676b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a567cf0e6a828fe031138f9f7874d01fe6e034cb814a3c2116c4918ddab71c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6101f1cec3dc38ec587fb109d42b09c353cf5ed6d89c9a2b799eb456b821cf58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5af4e6864674fc088ce1e153b29b66072dc3aae9b7c8c2c5dd3862a505dbac6a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86bdecb59f656fef2662f8fc78d427a4a9816fc211eedca957f20b3103f63a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.008776 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea9d7960c59f71d1f4141843ab22ff9910f442c6a92fd496e9923a11fd19578b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.020885 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n6mtz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dc0aca-1039-4a30-a83e-48bd320d0eae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2bbjx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n6mtz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.021100 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.021135 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.021143 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.021160 4731 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.021169 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:32Z","lastTransitionTime":"2025-11-29T07:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.036914 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.052076 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.071102 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.077040 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-5rsbt"] Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.077442 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-rscr8"] Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.077637 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-5rsbt" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.077747 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-7sc4p"] Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.078090 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.078618 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-x4t5j"] Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.078762 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-7sc4p" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.079431 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.080449 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Nov 29 07:06:32 crc kubenswrapper[4731]: W1129 07:06:32.081675 4731 reflector.go:561] object-"openshift-ovn-kubernetes"/"env-overrides": failed to list *v1.ConfigMap: configmaps "env-overrides" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Nov 29 07:06:32 crc kubenswrapper[4731]: E1129 07:06:32.081745 4731 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"env-overrides\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"env-overrides\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.081837 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.082114 4731 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-config-operator"/"kube-root-ca.crt" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.082247 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.083097 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.083312 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.083343 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.083612 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.083724 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.083911 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.084230 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.084261 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.084483 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.084588 4731 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.084636 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.084771 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.085697 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.086432 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.090042 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e77534f53f3d0ba9457f1e42e081f8fa9ddeff238db875686b42dd35fca480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6830f693b439b32e14b4e825fc51d03dddc4834afc7f2fb9c403ebdc706acb7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.113138 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da8cd4c4826d5a431134fff8cd78b168bfededf913cc47058413e38aadd335ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-29T07:06:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.123377 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.123442 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.123454 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.123485 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.123499 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:32Z","lastTransitionTime":"2025-11-29T07:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.134477 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"174eea76-5d67-4e10-9a17-b4efc32676b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a567cf0e6a828fe031138f9f7874d01fe6e034cb814a3c2116c4918ddab71c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6101f1cec3d
c38ec587fb109d42b09c353cf5ed6d89c9a2b799eb456b821cf58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5af4e6864674fc088ce1e153b29b66072dc3aae9b7c8c2c5dd3862a505dbac6a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86bdecb59f656fef2662f8fc78d427a4a9816fc211eedca957f20b3103f63a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.159360 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea9d7960c59f71d1f4141843ab22ff9910f442c6a92fd496e9923a11fd19578b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.169290 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7d4585c4-ac4a-4268-b25e-47509c17cfe2-etc-openvswitch\") pod \"ovnkube-node-x4t5j\" (UID: \"7d4585c4-ac4a-4268-b25e-47509c17cfe2\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.169360 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7d4585c4-ac4a-4268-b25e-47509c17cfe2-run-openvswitch\") pod \"ovnkube-node-x4t5j\" (UID: \"7d4585c4-ac4a-4268-b25e-47509c17cfe2\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.169382 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7d4585c4-ac4a-4268-b25e-47509c17cfe2-host-cni-bin\") pod \"ovnkube-node-x4t5j\" (UID: \"7d4585c4-ac4a-4268-b25e-47509c17cfe2\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.169401 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8-system-cni-dir\") pod \"multus-5rsbt\" (UID: \"5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8\") " pod="openshift-multus/multus-5rsbt" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.169419 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8-host-var-lib-kubelet\") pod 
\"multus-5rsbt\" (UID: \"5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8\") " pod="openshift-multus/multus-5rsbt" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.169440 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8-host-run-netns\") pod \"multus-5rsbt\" (UID: \"5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8\") " pod="openshift-multus/multus-5rsbt" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.169516 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8-host-run-k8s-cni-cncf-io\") pod \"multus-5rsbt\" (UID: \"5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8\") " pod="openshift-multus/multus-5rsbt" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.169637 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/7d4585c4-ac4a-4268-b25e-47509c17cfe2-run-systemd\") pod \"ovnkube-node-x4t5j\" (UID: \"7d4585c4-ac4a-4268-b25e-47509c17cfe2\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.169673 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/7d4585c4-ac4a-4268-b25e-47509c17cfe2-node-log\") pod \"ovnkube-node-x4t5j\" (UID: \"7d4585c4-ac4a-4268-b25e-47509c17cfe2\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.169697 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8-os-release\") pod \"multus-5rsbt\" (UID: 
\"5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8\") " pod="openshift-multus/multus-5rsbt" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.169719 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8-hostroot\") pod \"multus-5rsbt\" (UID: \"5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8\") " pod="openshift-multus/multus-5rsbt" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.169744 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7d4585c4-ac4a-4268-b25e-47509c17cfe2-ovnkube-config\") pod \"ovnkube-node-x4t5j\" (UID: \"7d4585c4-ac4a-4268-b25e-47509c17cfe2\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.169772 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7d4585c4-ac4a-4268-b25e-47509c17cfe2-env-overrides\") pod \"ovnkube-node-x4t5j\" (UID: \"7d4585c4-ac4a-4268-b25e-47509c17cfe2\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.169799 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7d4585c4-ac4a-4268-b25e-47509c17cfe2-host-run-netns\") pod \"ovnkube-node-x4t5j\" (UID: \"7d4585c4-ac4a-4268-b25e-47509c17cfe2\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.169825 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8-multus-socket-dir-parent\") pod \"multus-5rsbt\" (UID: 
\"5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8\") " pod="openshift-multus/multus-5rsbt" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.169845 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8-multus-conf-dir\") pod \"multus-5rsbt\" (UID: \"5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8\") " pod="openshift-multus/multus-5rsbt" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.169892 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8-host-var-lib-cni-bin\") pod \"multus-5rsbt\" (UID: \"5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8\") " pod="openshift-multus/multus-5rsbt" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.169915 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8-etc-kubernetes\") pod \"multus-5rsbt\" (UID: \"5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8\") " pod="openshift-multus/multus-5rsbt" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.169937 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/7d4585c4-ac4a-4268-b25e-47509c17cfe2-host-slash\") pod \"ovnkube-node-x4t5j\" (UID: \"7d4585c4-ac4a-4268-b25e-47509c17cfe2\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.169962 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7d4585c4-ac4a-4268-b25e-47509c17cfe2-host-run-ovn-kubernetes\") pod \"ovnkube-node-x4t5j\" (UID: 
\"7d4585c4-ac4a-4268-b25e-47509c17cfe2\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.169986 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8-host-run-multus-certs\") pod \"multus-5rsbt\" (UID: \"5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8\") " pod="openshift-multus/multus-5rsbt" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.170010 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/7d4585c4-ac4a-4268-b25e-47509c17cfe2-ovnkube-script-lib\") pod \"ovnkube-node-x4t5j\" (UID: \"7d4585c4-ac4a-4268-b25e-47509c17cfe2\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.170036 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8-host-var-lib-cni-multus\") pod \"multus-5rsbt\" (UID: \"5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8\") " pod="openshift-multus/multus-5rsbt" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.170061 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7d4585c4-ac4a-4268-b25e-47509c17cfe2-var-lib-openvswitch\") pod \"ovnkube-node-x4t5j\" (UID: \"7d4585c4-ac4a-4268-b25e-47509c17cfe2\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.170153 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/7d4585c4-ac4a-4268-b25e-47509c17cfe2-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-x4t5j\" (UID: \"7d4585c4-ac4a-4268-b25e-47509c17cfe2\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.170240 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8-multus-daemon-config\") pod \"multus-5rsbt\" (UID: \"5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8\") " pod="openshift-multus/multus-5rsbt" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.170264 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/7d4585c4-ac4a-4268-b25e-47509c17cfe2-host-kubelet\") pod \"ovnkube-node-x4t5j\" (UID: \"7d4585c4-ac4a-4268-b25e-47509c17cfe2\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.170279 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7d4585c4-ac4a-4268-b25e-47509c17cfe2-host-cni-netd\") pod \"ovnkube-node-x4t5j\" (UID: \"7d4585c4-ac4a-4268-b25e-47509c17cfe2\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.170297 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnvzl\" (UniqueName: \"kubernetes.io/projected/7d4585c4-ac4a-4268-b25e-47509c17cfe2-kube-api-access-rnvzl\") pod \"ovnkube-node-x4t5j\" (UID: \"7d4585c4-ac4a-4268-b25e-47509c17cfe2\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.170325 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-9z2qj\" (UniqueName: \"kubernetes.io/projected/5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8-kube-api-access-9z2qj\") pod \"multus-5rsbt\" (UID: \"5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8\") " pod="openshift-multus/multus-5rsbt" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.170348 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/7d4585c4-ac4a-4268-b25e-47509c17cfe2-systemd-units\") pod \"ovnkube-node-x4t5j\" (UID: \"7d4585c4-ac4a-4268-b25e-47509c17cfe2\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.170377 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7d4585c4-ac4a-4268-b25e-47509c17cfe2-ovn-node-metrics-cert\") pod \"ovnkube-node-x4t5j\" (UID: \"7d4585c4-ac4a-4268-b25e-47509c17cfe2\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.170394 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8-multus-cni-dir\") pod \"multus-5rsbt\" (UID: \"5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8\") " pod="openshift-multus/multus-5rsbt" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.170413 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/7d4585c4-ac4a-4268-b25e-47509c17cfe2-run-ovn\") pod \"ovnkube-node-x4t5j\" (UID: \"7d4585c4-ac4a-4268-b25e-47509c17cfe2\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.170435 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"log-socket\" (UniqueName: \"kubernetes.io/host-path/7d4585c4-ac4a-4268-b25e-47509c17cfe2-log-socket\") pod \"ovnkube-node-x4t5j\" (UID: \"7d4585c4-ac4a-4268-b25e-47509c17cfe2\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.170451 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8-cnibin\") pod \"multus-5rsbt\" (UID: \"5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8\") " pod="openshift-multus/multus-5rsbt" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.170468 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8-cni-binary-copy\") pod \"multus-5rsbt\" (UID: \"5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8\") " pod="openshift-multus/multus-5rsbt" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.173982 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n6mtz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dc0aca-1039-4a30-a83e-48bd320d0eae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2bbjx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n6mtz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.189434 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5rsbt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9z2qj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5rsbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.327285 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7d4585c4-ac4a-4268-b25e-47509c17cfe2-ovnkube-config\") pod \"ovnkube-node-x4t5j\" (UID: \"7d4585c4-ac4a-4268-b25e-47509c17cfe2\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.327334 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7d4585c4-ac4a-4268-b25e-47509c17cfe2-env-overrides\") pod \"ovnkube-node-x4t5j\" (UID: \"7d4585c4-ac4a-4268-b25e-47509c17cfe2\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.327359 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8-multus-conf-dir\") pod \"multus-5rsbt\" (UID: \"5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8\") " pod="openshift-multus/multus-5rsbt" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.327394 4731 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7d4585c4-ac4a-4268-b25e-47509c17cfe2-host-run-netns\") pod \"ovnkube-node-x4t5j\" (UID: \"7d4585c4-ac4a-4268-b25e-47509c17cfe2\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.327411 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8-multus-socket-dir-parent\") pod \"multus-5rsbt\" (UID: \"5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8\") " pod="openshift-multus/multus-5rsbt" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.327443 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shf4s\" (UniqueName: \"kubernetes.io/projected/2302dbb7-38db-4752-a5d0-2d055da3aec3-kube-api-access-shf4s\") pod \"machine-config-daemon-rscr8\" (UID: \"2302dbb7-38db-4752-a5d0-2d055da3aec3\") " pod="openshift-machine-config-operator/machine-config-daemon-rscr8" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.327480 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/7d4585c4-ac4a-4268-b25e-47509c17cfe2-host-slash\") pod \"ovnkube-node-x4t5j\" (UID: \"7d4585c4-ac4a-4268-b25e-47509c17cfe2\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.327508 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7d4585c4-ac4a-4268-b25e-47509c17cfe2-host-run-ovn-kubernetes\") pod \"ovnkube-node-x4t5j\" (UID: \"7d4585c4-ac4a-4268-b25e-47509c17cfe2\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.327526 4731 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8-host-var-lib-cni-bin\") pod \"multus-5rsbt\" (UID: \"5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8\") " pod="openshift-multus/multus-5rsbt" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.327541 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8-etc-kubernetes\") pod \"multus-5rsbt\" (UID: \"5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8\") " pod="openshift-multus/multus-5rsbt" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.327559 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/7d4585c4-ac4a-4268-b25e-47509c17cfe2-ovnkube-script-lib\") pod \"ovnkube-node-x4t5j\" (UID: \"7d4585c4-ac4a-4268-b25e-47509c17cfe2\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.327597 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8-host-run-multus-certs\") pod \"multus-5rsbt\" (UID: \"5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8\") " pod="openshift-multus/multus-5rsbt" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.327621 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7d4585c4-ac4a-4268-b25e-47509c17cfe2-var-lib-openvswitch\") pod \"ovnkube-node-x4t5j\" (UID: \"7d4585c4-ac4a-4268-b25e-47509c17cfe2\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.327635 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: 
\"kubernetes.io/host-path/5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8-host-var-lib-cni-multus\") pod \"multus-5rsbt\" (UID: \"5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8\") " pod="openshift-multus/multus-5rsbt" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.327653 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/46a65d85-a3f6-4c1f-8a87-799ccfb861c7-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-7sc4p\" (UID: \"46a65d85-a3f6-4c1f-8a87-799ccfb861c7\") " pod="openshift-multus/multus-additional-cni-plugins-7sc4p" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.327671 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7d4585c4-ac4a-4268-b25e-47509c17cfe2-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-x4t5j\" (UID: \"7d4585c4-ac4a-4268-b25e-47509c17cfe2\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.327694 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8-multus-daemon-config\") pod \"multus-5rsbt\" (UID: \"5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8\") " pod="openshift-multus/multus-5rsbt" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.327710 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnvzl\" (UniqueName: \"kubernetes.io/projected/7d4585c4-ac4a-4268-b25e-47509c17cfe2-kube-api-access-rnvzl\") pod \"ovnkube-node-x4t5j\" (UID: \"7d4585c4-ac4a-4268-b25e-47509c17cfe2\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.327727 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-9z2qj\" (UniqueName: \"kubernetes.io/projected/5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8-kube-api-access-9z2qj\") pod \"multus-5rsbt\" (UID: \"5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8\") " pod="openshift-multus/multus-5rsbt" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.327746 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/46a65d85-a3f6-4c1f-8a87-799ccfb861c7-system-cni-dir\") pod \"multus-additional-cni-plugins-7sc4p\" (UID: \"46a65d85-a3f6-4c1f-8a87-799ccfb861c7\") " pod="openshift-multus/multus-additional-cni-plugins-7sc4p" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.327768 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/46a65d85-a3f6-4c1f-8a87-799ccfb861c7-os-release\") pod \"multus-additional-cni-plugins-7sc4p\" (UID: \"46a65d85-a3f6-4c1f-8a87-799ccfb861c7\") " pod="openshift-multus/multus-additional-cni-plugins-7sc4p" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.327787 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/7d4585c4-ac4a-4268-b25e-47509c17cfe2-host-kubelet\") pod \"ovnkube-node-x4t5j\" (UID: \"7d4585c4-ac4a-4268-b25e-47509c17cfe2\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.327807 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7d4585c4-ac4a-4268-b25e-47509c17cfe2-host-cni-netd\") pod \"ovnkube-node-x4t5j\" (UID: \"7d4585c4-ac4a-4268-b25e-47509c17cfe2\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.327827 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"systemd-units\" (UniqueName: \"kubernetes.io/host-path/7d4585c4-ac4a-4268-b25e-47509c17cfe2-systemd-units\") pod \"ovnkube-node-x4t5j\" (UID: \"7d4585c4-ac4a-4268-b25e-47509c17cfe2\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.327844 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgbzm\" (UniqueName: \"kubernetes.io/projected/46a65d85-a3f6-4c1f-8a87-799ccfb861c7-kube-api-access-xgbzm\") pod \"multus-additional-cni-plugins-7sc4p\" (UID: \"46a65d85-a3f6-4c1f-8a87-799ccfb861c7\") " pod="openshift-multus/multus-additional-cni-plugins-7sc4p" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.327864 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/46a65d85-a3f6-4c1f-8a87-799ccfb861c7-cnibin\") pod \"multus-additional-cni-plugins-7sc4p\" (UID: \"46a65d85-a3f6-4c1f-8a87-799ccfb861c7\") " pod="openshift-multus/multus-additional-cni-plugins-7sc4p" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.327881 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2302dbb7-38db-4752-a5d0-2d055da3aec3-proxy-tls\") pod \"machine-config-daemon-rscr8\" (UID: \"2302dbb7-38db-4752-a5d0-2d055da3aec3\") " pod="openshift-machine-config-operator/machine-config-daemon-rscr8" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.327898 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/7d4585c4-ac4a-4268-b25e-47509c17cfe2-run-ovn\") pod \"ovnkube-node-x4t5j\" (UID: \"7d4585c4-ac4a-4268-b25e-47509c17cfe2\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.327915 4731 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7d4585c4-ac4a-4268-b25e-47509c17cfe2-ovn-node-metrics-cert\") pod \"ovnkube-node-x4t5j\" (UID: \"7d4585c4-ac4a-4268-b25e-47509c17cfe2\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.327934 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8-multus-cni-dir\") pod \"multus-5rsbt\" (UID: \"5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8\") " pod="openshift-multus/multus-5rsbt" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.327958 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/7d4585c4-ac4a-4268-b25e-47509c17cfe2-log-socket\") pod \"ovnkube-node-x4t5j\" (UID: \"7d4585c4-ac4a-4268-b25e-47509c17cfe2\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.327976 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8-cnibin\") pod \"multus-5rsbt\" (UID: \"5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8\") " pod="openshift-multus/multus-5rsbt" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.328025 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8-cni-binary-copy\") pod \"multus-5rsbt\" (UID: \"5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8\") " pod="openshift-multus/multus-5rsbt" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.328061 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/7d4585c4-ac4a-4268-b25e-47509c17cfe2-run-openvswitch\") pod \"ovnkube-node-x4t5j\" (UID: \"7d4585c4-ac4a-4268-b25e-47509c17cfe2\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.328085 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7d4585c4-ac4a-4268-b25e-47509c17cfe2-host-cni-bin\") pod \"ovnkube-node-x4t5j\" (UID: \"7d4585c4-ac4a-4268-b25e-47509c17cfe2\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.328105 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8-system-cni-dir\") pod \"multus-5rsbt\" (UID: \"5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8\") " pod="openshift-multus/multus-5rsbt" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.328123 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8-host-var-lib-kubelet\") pod \"multus-5rsbt\" (UID: \"5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8\") " pod="openshift-multus/multus-5rsbt" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.328150 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7d4585c4-ac4a-4268-b25e-47509c17cfe2-etc-openvswitch\") pod \"ovnkube-node-x4t5j\" (UID: \"7d4585c4-ac4a-4268-b25e-47509c17cfe2\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.328170 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/2302dbb7-38db-4752-a5d0-2d055da3aec3-mcd-auth-proxy-config\") pod \"machine-config-daemon-rscr8\" (UID: \"2302dbb7-38db-4752-a5d0-2d055da3aec3\") " pod="openshift-machine-config-operator/machine-config-daemon-rscr8" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.328189 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/46a65d85-a3f6-4c1f-8a87-799ccfb861c7-tuning-conf-dir\") pod \"multus-additional-cni-plugins-7sc4p\" (UID: \"46a65d85-a3f6-4c1f-8a87-799ccfb861c7\") " pod="openshift-multus/multus-additional-cni-plugins-7sc4p" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.328213 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8-host-run-k8s-cni-cncf-io\") pod \"multus-5rsbt\" (UID: \"5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8\") " pod="openshift-multus/multus-5rsbt" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.328230 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8-host-run-netns\") pod \"multus-5rsbt\" (UID: \"5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8\") " pod="openshift-multus/multus-5rsbt" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.328248 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/46a65d85-a3f6-4c1f-8a87-799ccfb861c7-cni-binary-copy\") pod \"multus-additional-cni-plugins-7sc4p\" (UID: \"46a65d85-a3f6-4c1f-8a87-799ccfb861c7\") " pod="openshift-multus/multus-additional-cni-plugins-7sc4p" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.328265 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"node-log\" (UniqueName: \"kubernetes.io/host-path/7d4585c4-ac4a-4268-b25e-47509c17cfe2-node-log\") pod \"ovnkube-node-x4t5j\" (UID: \"7d4585c4-ac4a-4268-b25e-47509c17cfe2\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.328284 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8-os-release\") pod \"multus-5rsbt\" (UID: \"5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8\") " pod="openshift-multus/multus-5rsbt" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.328305 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8-hostroot\") pod \"multus-5rsbt\" (UID: \"5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8\") " pod="openshift-multus/multus-5rsbt" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.328323 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/2302dbb7-38db-4752-a5d0-2d055da3aec3-rootfs\") pod \"machine-config-daemon-rscr8\" (UID: \"2302dbb7-38db-4752-a5d0-2d055da3aec3\") " pod="openshift-machine-config-operator/machine-config-daemon-rscr8" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.328354 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/7d4585c4-ac4a-4268-b25e-47509c17cfe2-run-systemd\") pod \"ovnkube-node-x4t5j\" (UID: \"7d4585c4-ac4a-4268-b25e-47509c17cfe2\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.328434 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/7d4585c4-ac4a-4268-b25e-47509c17cfe2-run-systemd\") pod \"ovnkube-node-x4t5j\" 
(UID: \"7d4585c4-ac4a-4268-b25e-47509c17cfe2\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.328501 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8-host-run-multus-certs\") pod \"multus-5rsbt\" (UID: \"5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8\") " pod="openshift-multus/multus-5rsbt" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.328529 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7d4585c4-ac4a-4268-b25e-47509c17cfe2-var-lib-openvswitch\") pod \"ovnkube-node-x4t5j\" (UID: \"7d4585c4-ac4a-4268-b25e-47509c17cfe2\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.328552 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8-host-var-lib-cni-multus\") pod \"multus-5rsbt\" (UID: \"5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8\") " pod="openshift-multus/multus-5rsbt" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.328614 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7d4585c4-ac4a-4268-b25e-47509c17cfe2-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-x4t5j\" (UID: \"7d4585c4-ac4a-4268-b25e-47509c17cfe2\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.328656 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/7d4585c4-ac4a-4268-b25e-47509c17cfe2-ovnkube-script-lib\") pod \"ovnkube-node-x4t5j\" (UID: \"7d4585c4-ac4a-4268-b25e-47509c17cfe2\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.329271 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7d4585c4-ac4a-4268-b25e-47509c17cfe2-ovnkube-config\") pod \"ovnkube-node-x4t5j\" (UID: \"7d4585c4-ac4a-4268-b25e-47509c17cfe2\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.329339 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8-multus-conf-dir\") pod \"multus-5rsbt\" (UID: \"5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8\") " pod="openshift-multus/multus-5rsbt" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.329342 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8-multus-daemon-config\") pod \"multus-5rsbt\" (UID: \"5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8\") " pod="openshift-multus/multus-5rsbt" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.329367 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7d4585c4-ac4a-4268-b25e-47509c17cfe2-host-run-netns\") pod \"ovnkube-node-x4t5j\" (UID: \"7d4585c4-ac4a-4268-b25e-47509c17cfe2\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.329409 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8-multus-socket-dir-parent\") pod \"multus-5rsbt\" (UID: \"5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8\") " pod="openshift-multus/multus-5rsbt" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.329467 4731 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/7d4585c4-ac4a-4268-b25e-47509c17cfe2-host-slash\") pod \"ovnkube-node-x4t5j\" (UID: \"7d4585c4-ac4a-4268-b25e-47509c17cfe2\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.329490 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7d4585c4-ac4a-4268-b25e-47509c17cfe2-host-run-ovn-kubernetes\") pod \"ovnkube-node-x4t5j\" (UID: \"7d4585c4-ac4a-4268-b25e-47509c17cfe2\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.329544 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8-host-var-lib-cni-bin\") pod \"multus-5rsbt\" (UID: \"5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8\") " pod="openshift-multus/multus-5rsbt" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.329580 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8-etc-kubernetes\") pod \"multus-5rsbt\" (UID: \"5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8\") " pod="openshift-multus/multus-5rsbt" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.329765 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7d4585c4-ac4a-4268-b25e-47509c17cfe2-run-openvswitch\") pod \"ovnkube-node-x4t5j\" (UID: \"7d4585c4-ac4a-4268-b25e-47509c17cfe2\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.329811 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/7d4585c4-ac4a-4268-b25e-47509c17cfe2-host-cni-bin\") pod \"ovnkube-node-x4t5j\" (UID: \"7d4585c4-ac4a-4268-b25e-47509c17cfe2\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.329969 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8-system-cni-dir\") pod \"multus-5rsbt\" (UID: \"5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8\") " pod="openshift-multus/multus-5rsbt" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.329977 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8-cni-binary-copy\") pod \"multus-5rsbt\" (UID: \"5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8\") " pod="openshift-multus/multus-5rsbt" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.330008 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8-host-var-lib-kubelet\") pod \"multus-5rsbt\" (UID: \"5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8\") " pod="openshift-multus/multus-5rsbt" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.330044 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7d4585c4-ac4a-4268-b25e-47509c17cfe2-etc-openvswitch\") pod \"ovnkube-node-x4t5j\" (UID: \"7d4585c4-ac4a-4268-b25e-47509c17cfe2\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.330103 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8-host-run-k8s-cni-cncf-io\") pod \"multus-5rsbt\" (UID: 
\"5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8\") " pod="openshift-multus/multus-5rsbt" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.330139 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8-host-run-netns\") pod \"multus-5rsbt\" (UID: \"5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8\") " pod="openshift-multus/multus-5rsbt" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.330229 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/7d4585c4-ac4a-4268-b25e-47509c17cfe2-node-log\") pod \"ovnkube-node-x4t5j\" (UID: \"7d4585c4-ac4a-4268-b25e-47509c17cfe2\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.330261 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/7d4585c4-ac4a-4268-b25e-47509c17cfe2-host-kubelet\") pod \"ovnkube-node-x4t5j\" (UID: \"7d4585c4-ac4a-4268-b25e-47509c17cfe2\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.330296 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7d4585c4-ac4a-4268-b25e-47509c17cfe2-host-cni-netd\") pod \"ovnkube-node-x4t5j\" (UID: \"7d4585c4-ac4a-4268-b25e-47509c17cfe2\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.330296 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8-os-release\") pod \"multus-5rsbt\" (UID: \"5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8\") " pod="openshift-multus/multus-5rsbt" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.330330 4731 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8-hostroot\") pod \"multus-5rsbt\" (UID: \"5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8\") " pod="openshift-multus/multus-5rsbt" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.330343 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/7d4585c4-ac4a-4268-b25e-47509c17cfe2-systemd-units\") pod \"ovnkube-node-x4t5j\" (UID: \"7d4585c4-ac4a-4268-b25e-47509c17cfe2\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.330405 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/7d4585c4-ac4a-4268-b25e-47509c17cfe2-run-ovn\") pod \"ovnkube-node-x4t5j\" (UID: \"7d4585c4-ac4a-4268-b25e-47509c17cfe2\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.330708 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/7d4585c4-ac4a-4268-b25e-47509c17cfe2-log-socket\") pod \"ovnkube-node-x4t5j\" (UID: \"7d4585c4-ac4a-4268-b25e-47509c17cfe2\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.330896 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8-multus-cni-dir\") pod \"multus-5rsbt\" (UID: \"5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8\") " pod="openshift-multus/multus-5rsbt" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.330886 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8-cnibin\") pod \"multus-5rsbt\" (UID: 
\"5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8\") " pod="openshift-multus/multus-5rsbt" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.336277 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.336361 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.336380 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.336386 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.336405 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.336426 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:32Z","lastTransitionTime":"2025-11-29T07:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.338819 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7d4585c4-ac4a-4268-b25e-47509c17cfe2-ovn-node-metrics-cert\") pod \"ovnkube-node-x4t5j\" (UID: \"7d4585c4-ac4a-4268-b25e-47509c17cfe2\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.344857 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-n6mtz" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.361028 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sc4p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46a65d85-a3f6-4c1f-8a87-799ccfb861c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7sc4p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:32 crc kubenswrapper[4731]: W1129 07:06:32.361654 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf9dc0aca_1039_4a30_a83e_48bd320d0eae.slice/crio-b582aa897fa63a11c5ea3f3237bcc10f308f46a94803da01313eac61f9edefcc WatchSource:0}: Error finding container b582aa897fa63a11c5ea3f3237bcc10f308f46a94803da01313eac61f9edefcc: Status 404 returned error can't find the container with id b582aa897fa63a11c5ea3f3237bcc10f308f46a94803da01313eac61f9edefcc Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.398291 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9z2qj\" (UniqueName: \"kubernetes.io/projected/5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8-kube-api-access-9z2qj\") pod \"multus-5rsbt\" (UID: \"5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8\") " pod="openshift-multus/multus-5rsbt" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.454005 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2302dbb7-38db-4752-a5d0-2d055da3aec3-proxy-tls\") pod \"machine-config-daemon-rscr8\" (UID: \"2302dbb7-38db-4752-a5d0-2d055da3aec3\") " pod="openshift-machine-config-operator/machine-config-daemon-rscr8" Nov 29 07:06:32 crc 
kubenswrapper[4731]: I1129 07:06:32.454064 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2302dbb7-38db-4752-a5d0-2d055da3aec3-mcd-auth-proxy-config\") pod \"machine-config-daemon-rscr8\" (UID: \"2302dbb7-38db-4752-a5d0-2d055da3aec3\") " pod="openshift-machine-config-operator/machine-config-daemon-rscr8" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.454085 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/46a65d85-a3f6-4c1f-8a87-799ccfb861c7-cni-binary-copy\") pod \"multus-additional-cni-plugins-7sc4p\" (UID: \"46a65d85-a3f6-4c1f-8a87-799ccfb861c7\") " pod="openshift-multus/multus-additional-cni-plugins-7sc4p" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.454100 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/46a65d85-a3f6-4c1f-8a87-799ccfb861c7-tuning-conf-dir\") pod \"multus-additional-cni-plugins-7sc4p\" (UID: \"46a65d85-a3f6-4c1f-8a87-799ccfb861c7\") " pod="openshift-multus/multus-additional-cni-plugins-7sc4p" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.454128 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/2302dbb7-38db-4752-a5d0-2d055da3aec3-rootfs\") pod \"machine-config-daemon-rscr8\" (UID: \"2302dbb7-38db-4752-a5d0-2d055da3aec3\") " pod="openshift-machine-config-operator/machine-config-daemon-rscr8" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.454166 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shf4s\" (UniqueName: \"kubernetes.io/projected/2302dbb7-38db-4752-a5d0-2d055da3aec3-kube-api-access-shf4s\") pod \"machine-config-daemon-rscr8\" (UID: \"2302dbb7-38db-4752-a5d0-2d055da3aec3\") " 
pod="openshift-machine-config-operator/machine-config-daemon-rscr8" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.454187 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/46a65d85-a3f6-4c1f-8a87-799ccfb861c7-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-7sc4p\" (UID: \"46a65d85-a3f6-4c1f-8a87-799ccfb861c7\") " pod="openshift-multus/multus-additional-cni-plugins-7sc4p" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.454211 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/46a65d85-a3f6-4c1f-8a87-799ccfb861c7-system-cni-dir\") pod \"multus-additional-cni-plugins-7sc4p\" (UID: \"46a65d85-a3f6-4c1f-8a87-799ccfb861c7\") " pod="openshift-multus/multus-additional-cni-plugins-7sc4p" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.454230 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/46a65d85-a3f6-4c1f-8a87-799ccfb861c7-os-release\") pod \"multus-additional-cni-plugins-7sc4p\" (UID: \"46a65d85-a3f6-4c1f-8a87-799ccfb861c7\") " pod="openshift-multus/multus-additional-cni-plugins-7sc4p" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.454252 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgbzm\" (UniqueName: \"kubernetes.io/projected/46a65d85-a3f6-4c1f-8a87-799ccfb861c7-kube-api-access-xgbzm\") pod \"multus-additional-cni-plugins-7sc4p\" (UID: \"46a65d85-a3f6-4c1f-8a87-799ccfb861c7\") " pod="openshift-multus/multus-additional-cni-plugins-7sc4p" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.454269 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/46a65d85-a3f6-4c1f-8a87-799ccfb861c7-cnibin\") pod 
\"multus-additional-cni-plugins-7sc4p\" (UID: \"46a65d85-a3f6-4c1f-8a87-799ccfb861c7\") " pod="openshift-multus/multus-additional-cni-plugins-7sc4p" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.454350 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/46a65d85-a3f6-4c1f-8a87-799ccfb861c7-cnibin\") pod \"multus-additional-cni-plugins-7sc4p\" (UID: \"46a65d85-a3f6-4c1f-8a87-799ccfb861c7\") " pod="openshift-multus/multus-additional-cni-plugins-7sc4p" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.454396 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/2302dbb7-38db-4752-a5d0-2d055da3aec3-rootfs\") pod \"machine-config-daemon-rscr8\" (UID: \"2302dbb7-38db-4752-a5d0-2d055da3aec3\") " pod="openshift-machine-config-operator/machine-config-daemon-rscr8" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.454749 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/46a65d85-a3f6-4c1f-8a87-799ccfb861c7-tuning-conf-dir\") pod \"multus-additional-cni-plugins-7sc4p\" (UID: \"46a65d85-a3f6-4c1f-8a87-799ccfb861c7\") " pod="openshift-multus/multus-additional-cni-plugins-7sc4p" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.455127 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/46a65d85-a3f6-4c1f-8a87-799ccfb861c7-cni-binary-copy\") pod \"multus-additional-cni-plugins-7sc4p\" (UID: \"46a65d85-a3f6-4c1f-8a87-799ccfb861c7\") " pod="openshift-multus/multus-additional-cni-plugins-7sc4p" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.455281 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/46a65d85-a3f6-4c1f-8a87-799ccfb861c7-os-release\") pod 
\"multus-additional-cni-plugins-7sc4p\" (UID: \"46a65d85-a3f6-4c1f-8a87-799ccfb861c7\") " pod="openshift-multus/multus-additional-cni-plugins-7sc4p" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.455340 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/46a65d85-a3f6-4c1f-8a87-799ccfb861c7-system-cni-dir\") pod \"multus-additional-cni-plugins-7sc4p\" (UID: \"46a65d85-a3f6-4c1f-8a87-799ccfb861c7\") " pod="openshift-multus/multus-additional-cni-plugins-7sc4p" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.455401 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/46a65d85-a3f6-4c1f-8a87-799ccfb861c7-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-7sc4p\" (UID: \"46a65d85-a3f6-4c1f-8a87-799ccfb861c7\") " pod="openshift-multus/multus-additional-cni-plugins-7sc4p" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.455590 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2302dbb7-38db-4752-a5d0-2d055da3aec3-mcd-auth-proxy-config\") pod \"machine-config-daemon-rscr8\" (UID: \"2302dbb7-38db-4752-a5d0-2d055da3aec3\") " pod="openshift-machine-config-operator/machine-config-daemon-rscr8" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.459603 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2302dbb7-38db-4752-a5d0-2d055da3aec3-proxy-tls\") pod \"machine-config-daemon-rscr8\" (UID: \"2302dbb7-38db-4752-a5d0-2d055da3aec3\") " pod="openshift-machine-config-operator/machine-config-daemon-rscr8" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.460780 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 
07:06:32.460816 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.460829 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.460848 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.460864 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:32Z","lastTransitionTime":"2025-11-29T07:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.564137 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.564191 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.564207 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.564228 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.564241 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:32Z","lastTransitionTime":"2025-11-29T07:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.667521 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.667580 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.667590 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.667609 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.667627 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:32Z","lastTransitionTime":"2025-11-29T07:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.691530 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-5rsbt" Nov 29 07:06:32 crc kubenswrapper[4731]: W1129 07:06:32.796908 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5b1c5c4b_163c_4f54_bf6a_9de0e7619fb8.slice/crio-7eb95ae65c6a10c34103b2f15f564c7029def03a2b694ce5ea4287d7a6752e14 WatchSource:0}: Error finding container 7eb95ae65c6a10c34103b2f15f564c7029def03a2b694ce5ea4287d7a6752e14: Status 404 returned error can't find the container with id 7eb95ae65c6a10c34103b2f15f564c7029def03a2b694ce5ea4287d7a6752e14 Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.797770 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnvzl\" (UniqueName: \"kubernetes.io/projected/7d4585c4-ac4a-4268-b25e-47509c17cfe2-kube-api-access-rnvzl\") pod \"ovnkube-node-x4t5j\" (UID: \"7d4585c4-ac4a-4268-b25e-47509c17cfe2\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.822238 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.822287 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.822297 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.822315 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.822328 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:32Z","lastTransitionTime":"2025-11-29T07:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.827464 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-n6mtz" event={"ID":"f9dc0aca-1039-4a30-a83e-48bd320d0eae","Type":"ContainerStarted","Data":"b582aa897fa63a11c5ea3f3237bcc10f308f46a94803da01313eac61f9edefcc"} Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.867368 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-5rsbt" event={"ID":"5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8","Type":"ContainerStarted","Data":"7eb95ae65c6a10c34103b2f15f564c7029def03a2b694ce5ea4287d7a6752e14"} Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.867967 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shf4s\" (UniqueName: \"kubernetes.io/projected/2302dbb7-38db-4752-a5d0-2d055da3aec3-kube-api-access-shf4s\") pod \"machine-config-daemon-rscr8\" (UID: \"2302dbb7-38db-4752-a5d0-2d055da3aec3\") " pod="openshift-machine-config-operator/machine-config-daemon-rscr8" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.925304 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.927352 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.927365 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.927380 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:32 crc kubenswrapper[4731]: I1129 07:06:32.927394 4731 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:32Z","lastTransitionTime":"2025-11-29T07:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:06:33 crc kubenswrapper[4731]: I1129 07:06:33.001592 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" Nov 29 07:06:33 crc kubenswrapper[4731]: W1129 07:06:33.014821 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2302dbb7_38db_4752_a5d0_2d055da3aec3.slice/crio-c051a7d701de1d489134f04cce6796c94ebf0b2918d543b65cc8e555dec88e4e WatchSource:0}: Error finding container c051a7d701de1d489134f04cce6796c94ebf0b2918d543b65cc8e555dec88e4e: Status 404 returned error can't find the container with id c051a7d701de1d489134f04cce6796c94ebf0b2918d543b65cc8e555dec88e4e Nov 29 07:06:33 crc kubenswrapper[4731]: I1129 07:06:33.031245 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:33 crc kubenswrapper[4731]: I1129 07:06:33.031323 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:33 crc kubenswrapper[4731]: I1129 07:06:33.031345 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:33 crc kubenswrapper[4731]: I1129 07:06:33.031366 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:33 crc kubenswrapper[4731]: I1129 07:06:33.031384 4731 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:33Z","lastTransitionTime":"2025-11-29T07:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:06:33 crc kubenswrapper[4731]: I1129 07:06:33.083875 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:33 crc kubenswrapper[4731]: I1129 07:06:33.090470 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgbzm\" (UniqueName: \"kubernetes.io/projected/46a65d85-a3f6-4c1f-8a87-799ccfb861c7-kube-api-access-xgbzm\") pod \"multus-additional-cni-plugins-7sc4p\" (UID: \"46a65d85-a3f6-4c1f-8a87-799ccfb861c7\") " pod="openshift-multus/multus-additional-cni-plugins-7sc4p" Nov 29 07:06:33 crc kubenswrapper[4731]: I1129 07:06:33.095111 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Nov 29 07:06:33 crc kubenswrapper[4731]: I1129 07:06:33.100130 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7d4585c4-ac4a-4268-b25e-47509c17cfe2-env-overrides\") pod \"ovnkube-node-x4t5j\" (UID: \"7d4585c4-ac4a-4268-b25e-47509c17cfe2\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" Nov 29 07:06:33 crc kubenswrapper[4731]: I1129 07:06:33.135022 4731 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:33 crc kubenswrapper[4731]: I1129 07:06:33.135080 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:33 crc kubenswrapper[4731]: I1129 07:06:33.135091 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:33 crc kubenswrapper[4731]: I1129 07:06:33.135115 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:33 crc kubenswrapper[4731]: I1129 07:06:33.135127 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:33Z","lastTransitionTime":"2025-11-29T07:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:33 crc kubenswrapper[4731]: I1129 07:06:33.237516 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:33 crc kubenswrapper[4731]: I1129 07:06:33.237579 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:33 crc kubenswrapper[4731]: I1129 07:06:33.237594 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:33 crc kubenswrapper[4731]: I1129 07:06:33.237615 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:33 crc kubenswrapper[4731]: I1129 07:06:33.237631 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:33Z","lastTransitionTime":"2025-11-29T07:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:33 crc kubenswrapper[4731]: I1129 07:06:33.238325 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da8cd4c4826d5a431134fff8cd78b168bfededf913cc47058413e38aadd335ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:33Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:33 crc kubenswrapper[4731]: I1129 07:06:33.317776 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-7sc4p" Nov 29 07:06:33 crc kubenswrapper[4731]: I1129 07:06:33.326433 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" Nov 29 07:06:33 crc kubenswrapper[4731]: I1129 07:06:33.351309 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:33 crc kubenswrapper[4731]: I1129 07:06:33.351363 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:33 crc kubenswrapper[4731]: I1129 07:06:33.351378 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:33 crc kubenswrapper[4731]: I1129 07:06:33.351398 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:33 crc kubenswrapper[4731]: I1129 07:06:33.351411 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:33Z","lastTransitionTime":"2025-11-29T07:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:33 crc kubenswrapper[4731]: W1129 07:06:33.369634 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d4585c4_ac4a_4268_b25e_47509c17cfe2.slice/crio-069e5a4a808e0afe3fbc3ba3fd78e91a237e6f0e24c0fe2ad992a6c2a40bc7c2 WatchSource:0}: Error finding container 069e5a4a808e0afe3fbc3ba3fd78e91a237e6f0e24c0fe2ad992a6c2a40bc7c2: Status 404 returned error can't find the container with id 069e5a4a808e0afe3fbc3ba3fd78e91a237e6f0e24c0fe2ad992a6c2a40bc7c2 Nov 29 07:06:33 crc kubenswrapper[4731]: I1129 07:06:33.459033 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5198aca-b43e-4b59-8ec1-15b9d1cd4f4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0b6a87bb787bc87ec1a2b21918e754bda6b0f197bfdae20b010e8464cb9e70d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1938ff140a7af83ef53896ed4482dd3cf0a5429675ca28291de0915b8da27e81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b19f3d5d17a1f3552b659627294f3a47d667e2da2d3ee71e44fc876614f2f5a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea4dcd3b0a9fda14855ec9e54caa59fbbd8953c958ad8d0abc6ad50d8a9143e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f488e951e0b022155c26f1b4b73363e8e5b82ab6f14e03f9812fe279ff21d36\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1129 07:06:25.309042 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1031981952/tls.crt::/tmp/serving-cert-1031981952/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764399966\\\\\\\\\\\\\\\" (2025-11-29 07:06:06 +0000 UTC to 2025-12-29 
07:06:07 +0000 UTC (now=2025-11-29 07:06:25.308983691 +0000 UTC))\\\\\\\"\\\\nI1129 07:06:25.309145 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1129 07:06:25.309167 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1129 07:06:25.309190 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764399977\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764399977\\\\\\\\\\\\\\\" (2025-11-29 06:06:17 +0000 UTC to 2026-11-29 06:06:17 +0000 UTC (now=2025-11-29 07:06:25.309162626 +0000 UTC))\\\\\\\"\\\\nI1129 07:06:25.309217 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1129 07:06:25.309274 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1129 07:06:25.309309 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1031981952/tls.crt::/tmp/serving-cert-1031981952/tls.key\\\\\\\"\\\\nI1129 07:06:25.309020 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1129 07:06:25.309457 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1129 07:06:25.309507 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1129 07:06:25.311335 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1277bda1e922e493f1c36b64a4584458ab41cee4b1930d2d6409a585be9e3b38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b7d71882ecd45557e02dfe61f572da923732714cd4480848340fbe72dabcd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34b7d71882ecd45557e02dfe61f572da923732714cd4480848340fbe72dabcd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:33Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:33 crc kubenswrapper[4731]: I1129 07:06:33.460522 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:33 crc kubenswrapper[4731]: I1129 07:06:33.460578 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:33 crc kubenswrapper[4731]: I1129 07:06:33.460592 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:33 crc kubenswrapper[4731]: I1129 07:06:33.460613 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:33 crc kubenswrapper[4731]: I1129 07:06:33.460628 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:33Z","lastTransitionTime":"2025-11-29T07:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:33 crc kubenswrapper[4731]: I1129 07:06:33.481812 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2302dbb7-38db-4752-a5d0-2d055da3aec3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shf4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shf4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rscr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:33Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:33 crc kubenswrapper[4731]: I1129 07:06:33.526599 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d4585c4-ac4a-4268-b25e-47509c17cfe2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4t5j\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:33Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:33 crc kubenswrapper[4731]: I1129 07:06:33.548022 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:06:33 crc kubenswrapper[4731]: E1129 07:06:33.548431 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:06:41.548400879 +0000 UTC m=+40.438761992 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:06:33 crc kubenswrapper[4731]: I1129 07:06:33.551476 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:33Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:33 crc kubenswrapper[4731]: I1129 07:06:33.566623 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:33 crc kubenswrapper[4731]: I1129 07:06:33.566679 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:33 crc kubenswrapper[4731]: I1129 07:06:33.566693 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:33 crc kubenswrapper[4731]: I1129 07:06:33.566714 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:33 crc kubenswrapper[4731]: I1129 07:06:33.566728 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:33Z","lastTransitionTime":"2025-11-29T07:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:06:33 crc kubenswrapper[4731]: I1129 07:06:33.669042 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:33 crc kubenswrapper[4731]: I1129 07:06:33.669080 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:33 crc kubenswrapper[4731]: I1129 07:06:33.669089 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:33 crc kubenswrapper[4731]: I1129 07:06:33.669105 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:33 crc kubenswrapper[4731]: I1129 07:06:33.669115 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:33Z","lastTransitionTime":"2025-11-29T07:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:33 crc kubenswrapper[4731]: I1129 07:06:33.771505 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:33 crc kubenswrapper[4731]: I1129 07:06:33.771547 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:33 crc kubenswrapper[4731]: I1129 07:06:33.771556 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:33 crc kubenswrapper[4731]: I1129 07:06:33.771585 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:33 crc kubenswrapper[4731]: I1129 07:06:33.771597 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:33Z","lastTransitionTime":"2025-11-29T07:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:06:33 crc kubenswrapper[4731]: I1129 07:06:33.808881 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:06:33 crc kubenswrapper[4731]: E1129 07:06:33.809034 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:06:33 crc kubenswrapper[4731]: I1129 07:06:33.809102 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:06:33 crc kubenswrapper[4731]: E1129 07:06:33.809149 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:06:33 crc kubenswrapper[4731]: I1129 07:06:33.809192 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:06:33 crc kubenswrapper[4731]: E1129 07:06:33.809230 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:06:33 crc kubenswrapper[4731]: I1129 07:06:33.871086 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-5rsbt" event={"ID":"5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8","Type":"ContainerStarted","Data":"4026306c62b322aa02e08b8ebd6f9d5d1eaa75e5026c9b1fc4c8d3c1ff77e2a5"} Nov 29 07:06:33 crc kubenswrapper[4731]: I1129 07:06:33.873140 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:33 crc kubenswrapper[4731]: I1129 07:06:33.873165 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:33 crc kubenswrapper[4731]: I1129 07:06:33.873173 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:33 crc kubenswrapper[4731]: I1129 07:06:33.873185 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:33 crc kubenswrapper[4731]: I1129 07:06:33.873197 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:33Z","lastTransitionTime":"2025-11-29T07:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:33 crc kubenswrapper[4731]: I1129 07:06:33.873780 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-n6mtz" event={"ID":"f9dc0aca-1039-4a30-a83e-48bd320d0eae","Type":"ContainerStarted","Data":"f0b16629d98d0b9d5ede0c57e9befc86af08f3b501ef46c1e17b38a2ba4e366a"} Nov 29 07:06:33 crc kubenswrapper[4731]: I1129 07:06:33.874674 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7sc4p" event={"ID":"46a65d85-a3f6-4c1f-8a87-799ccfb861c7","Type":"ContainerStarted","Data":"0093dadd46dd9a0f948a5cb7986b6b22196839c26451efe53c63057215bc9780"} Nov 29 07:06:33 crc kubenswrapper[4731]: I1129 07:06:33.875811 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" event={"ID":"2302dbb7-38db-4752-a5d0-2d055da3aec3","Type":"ContainerStarted","Data":"b12c2e84a5d5a5e4b2dfdc3a2052706d4b969ccd45719e05be8f1dbec5555cd6"} Nov 29 07:06:33 crc kubenswrapper[4731]: I1129 07:06:33.875847 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" event={"ID":"2302dbb7-38db-4752-a5d0-2d055da3aec3","Type":"ContainerStarted","Data":"c2ffc93896b04d748f6dfda932202450e20bc7b298cc0d79c0c6499ead481d8c"} Nov 29 07:06:33 crc kubenswrapper[4731]: I1129 07:06:33.875858 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" event={"ID":"2302dbb7-38db-4752-a5d0-2d055da3aec3","Type":"ContainerStarted","Data":"c051a7d701de1d489134f04cce6796c94ebf0b2918d543b65cc8e555dec88e4e"} Nov 29 07:06:33 crc kubenswrapper[4731]: I1129 07:06:33.876996 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" 
event={"ID":"7d4585c4-ac4a-4268-b25e-47509c17cfe2","Type":"ContainerStarted","Data":"0b769a8a935c8eb447fe825cfb1b4c91be5abbe5992798442096dd3c00ce7c39"} Nov 29 07:06:33 crc kubenswrapper[4731]: I1129 07:06:33.877023 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" event={"ID":"7d4585c4-ac4a-4268-b25e-47509c17cfe2","Type":"ContainerStarted","Data":"069e5a4a808e0afe3fbc3ba3fd78e91a237e6f0e24c0fe2ad992a6c2a40bc7c2"} Nov 29 07:06:33 crc kubenswrapper[4731]: I1129 07:06:33.975720 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:33 crc kubenswrapper[4731]: I1129 07:06:33.975765 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:33 crc kubenswrapper[4731]: I1129 07:06:33.975776 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:33 crc kubenswrapper[4731]: I1129 07:06:33.975794 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:33 crc kubenswrapper[4731]: I1129 07:06:33.975807 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:33Z","lastTransitionTime":"2025-11-29T07:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:34 crc kubenswrapper[4731]: I1129 07:06:34.012683 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:33Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:34 crc kubenswrapper[4731]: I1129 07:06:34.034813 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e77534f53f3d0ba9457f1e42e081f8fa9ddeff238db875686b42dd35fca480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6830f693b439b32e14b4e825fc51d03dddc4834afc7f2fb9c403ebdc706acb7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:34Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:34 crc kubenswrapper[4731]: I1129 07:06:34.057672 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5198aca-b43e-4b59-8ec1-15b9d1cd4f4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0b6a87bb787bc87ec1a2b21918e754bda6b0f197bfdae20b010e8464cb9e70d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1938ff140a7af83ef53896ed4482dd3cf0a5429675ca28291de0915b8da27e81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b19f3d5d17a1f3552b659627294f3a47d667e2da2d3ee71e44fc876614f2f5a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea4dcd3b0a9fda14855ec9e54caa59fbbd8953c958ad8d0abc6ad50d8a9143e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f488e951e0b022155c26f1b4b73363e8e5b82ab6f14e03f9812fe279ff21d36\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1129 07:06:25.309042 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" 
certName=\\\\\\\"serving-cert::/tmp/serving-cert-1031981952/tls.crt::/tmp/serving-cert-1031981952/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764399966\\\\\\\\\\\\\\\" (2025-11-29 07:06:06 +0000 UTC to 2025-12-29 07:06:07 +0000 UTC (now=2025-11-29 07:06:25.308983691 +0000 UTC))\\\\\\\"\\\\nI1129 07:06:25.309145 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1129 07:06:25.309167 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1129 07:06:25.309190 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764399977\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764399977\\\\\\\\\\\\\\\" (2025-11-29 06:06:17 +0000 UTC to 2026-11-29 06:06:17 +0000 UTC (now=2025-11-29 07:06:25.309162626 +0000 UTC))\\\\\\\"\\\\nI1129 07:06:25.309217 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1129 07:06:25.309274 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1129 07:06:25.309309 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1031981952/tls.crt::/tmp/serving-cert-1031981952/tls.key\\\\\\\"\\\\nI1129 07:06:25.309020 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1129 07:06:25.309457 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1129 07:06:25.309507 1 tlsconfig.go:243] 
\\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1129 07:06:25.311335 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1277bda1e922e493f1c36b64a4584458ab41cee4b1930d2d6409a585be9e3b38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b7d71882ecd45557e02dfe61f572da923732714cd4480848340fbe72dabcd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34b7d71882ecd45557e02dfe61f572
da923732714cd4480848340fbe72dabcd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:34Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:34 crc kubenswrapper[4731]: I1129 07:06:34.060017 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:06:34 crc kubenswrapper[4731]: I1129 07:06:34.060106 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:06:34 crc kubenswrapper[4731]: E1129 07:06:34.060541 4731 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 29 07:06:34 crc 
kubenswrapper[4731]: E1129 07:06:34.061043 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-29 07:06:42.061020978 +0000 UTC m=+40.951382081 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 29 07:06:34 crc kubenswrapper[4731]: E1129 07:06:34.061331 4731 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 29 07:06:34 crc kubenswrapper[4731]: E1129 07:06:34.061377 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-29 07:06:42.061368438 +0000 UTC m=+40.951729541 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 29 07:06:34 crc kubenswrapper[4731]: I1129 07:06:34.073904 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2302dbb7-38db-4752-a5d0-2d055da3aec3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b12c2e84a5d5a5e4b2dfdc3a2052706d4b969ccd45719e05be8f1dbec5555cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shf4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2ffc93896b04d748f6dfda932202450e20bc7b298cc0d79c0c6499ead481d8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shf4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rscr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:34Z is after 2025-08-24T17:21:41Z" Nov 29 
07:06:34 crc kubenswrapper[4731]: I1129 07:06:34.092554 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:34 crc kubenswrapper[4731]: I1129 07:06:34.092616 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:34 crc kubenswrapper[4731]: I1129 07:06:34.092626 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:34 crc kubenswrapper[4731]: I1129 07:06:34.092642 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:34 crc kubenswrapper[4731]: I1129 07:06:34.092653 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:34Z","lastTransitionTime":"2025-11-29T07:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:34 crc kubenswrapper[4731]: I1129 07:06:34.127231 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d4585c4-ac4a-4268-b25e-47509c17cfe2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b769a8a935c8eb447fe825cfb1b4c91be5abbe5992798442096dd3c00ce7c39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.1
26.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4t5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:34Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:34 crc kubenswrapper[4731]: I1129 07:06:34.159952 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:34Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:34 crc kubenswrapper[4731]: I1129 07:06:34.169950 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:06:34 crc kubenswrapper[4731]: I1129 07:06:34.169988 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:06:34 crc kubenswrapper[4731]: E1129 07:06:34.170157 4731 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 29 07:06:34 crc kubenswrapper[4731]: E1129 07:06:34.170175 4731 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 29 07:06:34 crc kubenswrapper[4731]: E1129 07:06:34.170192 4731 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:06:34 crc kubenswrapper[4731]: E1129 07:06:34.170252 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-29 07:06:42.170233377 +0000 UTC m=+41.060594480 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:06:34 crc kubenswrapper[4731]: E1129 07:06:34.170755 4731 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 29 07:06:34 crc kubenswrapper[4731]: E1129 07:06:34.170789 4731 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 29 07:06:34 crc kubenswrapper[4731]: E1129 07:06:34.170804 4731 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:06:34 crc kubenswrapper[4731]: E1129 07:06:34.170847 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-29 07:06:42.170837004 +0000 UTC m=+41.061198107 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:06:34 crc kubenswrapper[4731]: I1129 07:06:34.179744 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:34Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:34 crc kubenswrapper[4731]: I1129 07:06:34.195483 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e77534f53f3d0ba9457f1e42e081f8fa9ddeff238db875686b42dd35fca480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6830f693b439b32e14b4e825fc51d03dddc4834afc7f2fb9c403ebdc706acb7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:34Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:34 crc kubenswrapper[4731]: I1129 07:06:34.218059 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"174eea76-5d67-4e10-9a17-b4efc32676b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a567cf0e6a828fe031138f9f7874d01fe6e034cb814a3c2116c4918ddab71c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6101f1cec3dc38ec587fb109d42b09c353cf5ed6d89c9a2b799eb456b821cf58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5af4e6864674fc088ce1e153b29b66072dc3aae9b7c8c2c5dd3862a505dbac6a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86bdecb59f656fef2662f8fc78d427a4a9816fc211eedca957f20b3103f63a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:34Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:34 crc kubenswrapper[4731]: I1129 07:06:34.243493 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea9d7960c59f71d1f4141843ab22ff9910f442c6a92fd496e9923a11fd19578b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:34Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:34 crc kubenswrapper[4731]: I1129 07:06:34.280865 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n6mtz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dc0aca-1039-4a30-a83e-48bd320d0eae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0b16629d98d0b9d5ede0c57e9befc86af08f3b501ef46c1e17b38a2ba4e366a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2bbjx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n6mtz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:34Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:34 crc kubenswrapper[4731]: I1129 07:06:34.312445 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5rsbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4026306c62b322aa02e08b8ebd6f9d5d1eaa75e50
26c9b1fc4c8d3c1ff77e2a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubern
etes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9z2qj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5rsbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:34Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:34 crc kubenswrapper[4731]: I1129 07:06:34.336704 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sc4p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46a65d85-a3f6-4c1f-8a87-799ccfb861c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{
\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7sc4p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:34Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:34 crc kubenswrapper[4731]: I1129 07:06:34.356672 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:34Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:34 crc kubenswrapper[4731]: I1129 07:06:34.376747 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da8cd4c4826d5a431134fff8cd78b168bfededf913cc47058413e38aadd335ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-29T07:06:34Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:34 crc kubenswrapper[4731]: I1129 07:06:34.721518 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:34 crc kubenswrapper[4731]: I1129 07:06:34.721643 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:34 crc kubenswrapper[4731]: I1129 07:06:34.721673 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:34 crc kubenswrapper[4731]: I1129 07:06:34.721711 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:34 crc kubenswrapper[4731]: I1129 07:06:34.721728 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:34Z","lastTransitionTime":"2025-11-29T07:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:34 crc kubenswrapper[4731]: I1129 07:06:34.825063 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:34 crc kubenswrapper[4731]: I1129 07:06:34.825116 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:34 crc kubenswrapper[4731]: I1129 07:06:34.825128 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:34 crc kubenswrapper[4731]: I1129 07:06:34.825147 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:34 crc kubenswrapper[4731]: I1129 07:06:34.825157 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:34Z","lastTransitionTime":"2025-11-29T07:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:34 crc kubenswrapper[4731]: I1129 07:06:34.929752 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:34 crc kubenswrapper[4731]: I1129 07:06:34.929786 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:34 crc kubenswrapper[4731]: I1129 07:06:34.929794 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:34 crc kubenswrapper[4731]: I1129 07:06:34.929811 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:34 crc kubenswrapper[4731]: I1129 07:06:34.929820 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:34Z","lastTransitionTime":"2025-11-29T07:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:35 crc kubenswrapper[4731]: I1129 07:06:34.976709 4731 generic.go:334] "Generic (PLEG): container finished" podID="7d4585c4-ac4a-4268-b25e-47509c17cfe2" containerID="0b769a8a935c8eb447fe825cfb1b4c91be5abbe5992798442096dd3c00ce7c39" exitCode=0 Nov 29 07:06:35 crc kubenswrapper[4731]: I1129 07:06:34.976798 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" event={"ID":"7d4585c4-ac4a-4268-b25e-47509c17cfe2","Type":"ContainerDied","Data":"0b769a8a935c8eb447fe825cfb1b4c91be5abbe5992798442096dd3c00ce7c39"} Nov 29 07:06:35 crc kubenswrapper[4731]: I1129 07:06:35.006020 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7sc4p" event={"ID":"46a65d85-a3f6-4c1f-8a87-799ccfb861c7","Type":"ContainerStarted","Data":"e803b8c855802ae1b136acaaf383df0a2f5efccc46c8605813d43a3c7c39a700"} Nov 29 07:06:35 crc kubenswrapper[4731]: I1129 07:06:35.072615 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:35 crc kubenswrapper[4731]: I1129 07:06:35.072657 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:35 crc kubenswrapper[4731]: I1129 07:06:35.072669 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:35 crc kubenswrapper[4731]: I1129 07:06:35.072704 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:35 crc kubenswrapper[4731]: I1129 07:06:35.072717 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:35Z","lastTransitionTime":"2025-11-29T07:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:06:35 crc kubenswrapper[4731]: I1129 07:06:35.185706 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2302dbb7-38db-4752-a5d0-2d055da3aec3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b12c2e84a5d5a5e4b2dfdc3a2052706d4b969ccd45719e05be8f1dbec5555cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\
\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shf4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2ffc93896b04d748f6dfda932202450e20bc7b298cc0d79c0c6499ead481d8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shf4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rscr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:35Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:35 crc kubenswrapper[4731]: I1129 07:06:35.189300 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:35 crc 
kubenswrapper[4731]: I1129 07:06:35.189329 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:35 crc kubenswrapper[4731]: I1129 07:06:35.189338 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:35 crc kubenswrapper[4731]: I1129 07:06:35.189355 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:35 crc kubenswrapper[4731]: I1129 07:06:35.189366 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:35Z","lastTransitionTime":"2025-11-29T07:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:06:35 crc kubenswrapper[4731]: I1129 07:06:35.293710 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:35 crc kubenswrapper[4731]: I1129 07:06:35.293777 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:35 crc kubenswrapper[4731]: I1129 07:06:35.293791 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:35 crc kubenswrapper[4731]: I1129 07:06:35.293836 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:35 crc kubenswrapper[4731]: I1129 07:06:35.293850 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:35Z","lastTransitionTime":"2025-11-29T07:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:06:35 crc kubenswrapper[4731]: I1129 07:06:35.379907 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d4585c4-ac4a-4268-b25e-47509c17cfe2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b769a8a935c8eb447fe825cfb1b4c91be5abbe5992798442096dd3c00ce7c39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b769a8a935c8eb447fe825cfb1b4c91be5abbe5992798442096dd3c00ce7c39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4t5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:35Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:35 crc kubenswrapper[4731]: I1129 07:06:35.399865 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:35 crc kubenswrapper[4731]: I1129 07:06:35.399910 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:35 crc kubenswrapper[4731]: I1129 07:06:35.399923 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:35 crc kubenswrapper[4731]: I1129 07:06:35.399941 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:35 crc kubenswrapper[4731]: I1129 07:06:35.399954 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:35Z","lastTransitionTime":"2025-11-29T07:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:35 crc kubenswrapper[4731]: I1129 07:06:35.428552 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5198aca-b43e-4b59-8ec1-15b9d1cd4f4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0b6a87bb787bc87ec1a2b21918e754bda6b0f197bfdae20b010e8464cb9e70d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1938ff140a7af83ef53896ed4482dd3cf0a5429675ca28291de0915b8da27e81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b19f3d5d17a1f3552b659627294f3a47d667e2da2d3ee71e44fc876614f2f5a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea4dcd3b0a9fda14855ec9e54caa59fbbd8953c958ad8d0abc6ad50d8a9143e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f488e951e0b022155c26f1b4b73363e8e5b82ab6f14e03f9812fe279ff21d36\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1129 07:06:25.309042 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1031981952/tls.crt::/tmp/serving-cert-1031981952/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764399966\\\\\\\\\\\\\\\" (2025-11-29 07:06:06 +0000 UTC to 2025-12-29 
07:06:07 +0000 UTC (now=2025-11-29 07:06:25.308983691 +0000 UTC))\\\\\\\"\\\\nI1129 07:06:25.309145 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1129 07:06:25.309167 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1129 07:06:25.309190 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764399977\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764399977\\\\\\\\\\\\\\\" (2025-11-29 06:06:17 +0000 UTC to 2026-11-29 06:06:17 +0000 UTC (now=2025-11-29 07:06:25.309162626 +0000 UTC))\\\\\\\"\\\\nI1129 07:06:25.309217 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1129 07:06:25.309274 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1129 07:06:25.309309 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1031981952/tls.crt::/tmp/serving-cert-1031981952/tls.key\\\\\\\"\\\\nI1129 07:06:25.309020 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1129 07:06:25.309457 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1129 07:06:25.309507 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1129 07:06:25.311335 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1277bda1e922e493f1c36b64a4584458ab41cee4b1930d2d6409a585be9e3b38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b7d71882ecd45557e02dfe61f572da923732714cd4480848340fbe72dabcd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34b7d71882ecd45557e02dfe61f572da923732714cd4480848340fbe72dabcd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:35Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:35 crc kubenswrapper[4731]: I1129 07:06:35.461361 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e77534f53f3d0ba9457f1e42e081f8fa9ddeff238db875686b42dd35fca480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6830f693b439b32e14b4e825fc51d03dddc4834afc7f2fb9c403ebdc706acb7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:35Z is after 2025-08-24T17:21:41Z" Nov 29 
07:06:35 crc kubenswrapper[4731]: I1129 07:06:35.478900 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:35Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:35 crc kubenswrapper[4731]: I1129 07:06:35.503354 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:35 crc kubenswrapper[4731]: I1129 07:06:35.503838 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:35 crc kubenswrapper[4731]: I1129 07:06:35.503848 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:35 crc kubenswrapper[4731]: I1129 07:06:35.503868 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:35 crc kubenswrapper[4731]: I1129 07:06:35.503879 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:35Z","lastTransitionTime":"2025-11-29T07:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:06:35 crc kubenswrapper[4731]: I1129 07:06:35.547788 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-8tvx8"] Nov 29 07:06:35 crc kubenswrapper[4731]: I1129 07:06:35.548242 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-8tvx8" Nov 29 07:06:35 crc kubenswrapper[4731]: I1129 07:06:35.550737 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:35Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:35 crc kubenswrapper[4731]: I1129 07:06:35.550874 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Nov 29 07:06:35 crc kubenswrapper[4731]: I1129 07:06:35.551734 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Nov 29 07:06:35 crc kubenswrapper[4731]: I1129 07:06:35.551793 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Nov 29 07:06:35 crc kubenswrapper[4731]: I1129 07:06:35.552258 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Nov 29 07:06:35 crc kubenswrapper[4731]: I1129 07:06:35.567468 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n6mtz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dc0aca-1039-4a30-a83e-48bd320d0eae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0b16629d98d0b9d5ede0c57e9befc86af08f3b501ef46c1e17b38a2ba4e366a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2bbjx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n6mtz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:35Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:35 crc kubenswrapper[4731]: I1129 07:06:35.585494 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5rsbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4026306c62b322aa02e08b8ebd6f9d5d1eaa75e5026c9b1fc4c8d3c1ff77e2a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9z2qj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5rsbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:35Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:35 crc kubenswrapper[4731]: I1129 07:06:35.600945 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/719caf85-c94c-4dc2-b28f-f5c4ec29e79e-host\") pod \"node-ca-8tvx8\" (UID: \"719caf85-c94c-4dc2-b28f-f5c4ec29e79e\") " pod="openshift-image-registry/node-ca-8tvx8" Nov 29 07:06:35 crc kubenswrapper[4731]: I1129 07:06:35.601022 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/719caf85-c94c-4dc2-b28f-f5c4ec29e79e-serviceca\") pod \"node-ca-8tvx8\" (UID: \"719caf85-c94c-4dc2-b28f-f5c4ec29e79e\") " pod="openshift-image-registry/node-ca-8tvx8" Nov 29 07:06:35 crc kubenswrapper[4731]: I1129 07:06:35.601049 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cfkp\" (UniqueName: \"kubernetes.io/projected/719caf85-c94c-4dc2-b28f-f5c4ec29e79e-kube-api-access-7cfkp\") pod \"node-ca-8tvx8\" (UID: \"719caf85-c94c-4dc2-b28f-f5c4ec29e79e\") " pod="openshift-image-registry/node-ca-8tvx8" Nov 29 07:06:35 crc kubenswrapper[4731]: I1129 07:06:35.609122 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:35 crc kubenswrapper[4731]: I1129 07:06:35.609172 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:35 crc 
kubenswrapper[4731]: I1129 07:06:35.609185 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:35 crc kubenswrapper[4731]: I1129 07:06:35.609203 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:35 crc kubenswrapper[4731]: I1129 07:06:35.609220 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:35Z","lastTransitionTime":"2025-11-29T07:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:06:35 crc kubenswrapper[4731]: I1129 07:06:35.617101 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sc4p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46a65d85-a3f6-4c1f-8a87-799ccfb861c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7sc4p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:35Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:35 crc kubenswrapper[4731]: I1129 07:06:35.633479 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"174eea76-5d67-4e10-9a17-b4efc32676b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a567cf0e6a828fe031138f9f7874d01fe6e034cb814a3c2116c4918ddab71c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6101f1cec3dc38ec587fb109d42b09c353cf5ed6d89c9a2b799eb456b821cf58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5af4e6864674fc088ce1e153b29b66072dc3aae9b7c8c2c5dd3862a505dbac6a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86bdecb59f656fef2662f8fc78d427a4a9816fc211eedca957f20b3103f63a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:35Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:35 crc kubenswrapper[4731]: I1129 07:06:35.651990 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea9d7960c59f71d1f4141843ab22ff9910f442c6a92fd496e9923a11fd19578b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:35Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:35 crc kubenswrapper[4731]: I1129 07:06:35.694115 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da8cd4c4826d5a431134fff8cd78b168bfededf913cc47058413e38aadd335ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:35Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:35 crc kubenswrapper[4731]: I1129 07:06:35.702085 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/719caf85-c94c-4dc2-b28f-f5c4ec29e79e-host\") pod \"node-ca-8tvx8\" (UID: \"719caf85-c94c-4dc2-b28f-f5c4ec29e79e\") " pod="openshift-image-registry/node-ca-8tvx8" Nov 29 07:06:35 crc kubenswrapper[4731]: I1129 07:06:35.702149 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/719caf85-c94c-4dc2-b28f-f5c4ec29e79e-serviceca\") pod \"node-ca-8tvx8\" (UID: \"719caf85-c94c-4dc2-b28f-f5c4ec29e79e\") " pod="openshift-image-registry/node-ca-8tvx8" Nov 29 07:06:35 crc kubenswrapper[4731]: I1129 07:06:35.702195 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7cfkp\" (UniqueName: \"kubernetes.io/projected/719caf85-c94c-4dc2-b28f-f5c4ec29e79e-kube-api-access-7cfkp\") pod \"node-ca-8tvx8\" (UID: \"719caf85-c94c-4dc2-b28f-f5c4ec29e79e\") " pod="openshift-image-registry/node-ca-8tvx8" Nov 29 07:06:35 crc kubenswrapper[4731]: I1129 07:06:35.702225 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/719caf85-c94c-4dc2-b28f-f5c4ec29e79e-host\") pod \"node-ca-8tvx8\" (UID: \"719caf85-c94c-4dc2-b28f-f5c4ec29e79e\") " pod="openshift-image-registry/node-ca-8tvx8" Nov 29 07:06:35 crc kubenswrapper[4731]: I1129 07:06:35.703331 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"serviceca\" (UniqueName: \"kubernetes.io/configmap/719caf85-c94c-4dc2-b28f-f5c4ec29e79e-serviceca\") pod \"node-ca-8tvx8\" (UID: \"719caf85-c94c-4dc2-b28f-f5c4ec29e79e\") " pod="openshift-image-registry/node-ca-8tvx8" Nov 29 07:06:35 crc kubenswrapper[4731]: I1129 07:06:35.711060 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:35 crc kubenswrapper[4731]: I1129 07:06:35.711095 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:35 crc kubenswrapper[4731]: I1129 07:06:35.711108 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:35 crc kubenswrapper[4731]: I1129 07:06:35.711127 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:35 crc kubenswrapper[4731]: I1129 07:06:35.711143 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:35Z","lastTransitionTime":"2025-11-29T07:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:35 crc kubenswrapper[4731]: I1129 07:06:35.711333 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:35Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:35 crc kubenswrapper[4731]: I1129 07:06:35.720700 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7cfkp\" (UniqueName: \"kubernetes.io/projected/719caf85-c94c-4dc2-b28f-f5c4ec29e79e-kube-api-access-7cfkp\") pod \"node-ca-8tvx8\" (UID: \"719caf85-c94c-4dc2-b28f-f5c4ec29e79e\") " pod="openshift-image-registry/node-ca-8tvx8" Nov 29 07:06:35 crc kubenswrapper[4731]: I1129 07:06:35.726546 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"174eea76-5d67-4e10-9a17-b4efc32676b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a567cf0e6a828fe031138f9f7874d01fe6e034cb814a3c2116c4918ddab71c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6101f1cec3dc38ec587fb109d42b09c353cf5ed6d89c9a2b799eb456b821cf58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5af4e6864674fc088ce1e153b29b66072dc3aae9b7c8c2c5dd3862a505dbac6a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86bdecb59f656fef2662f8fc78d427a4a9816fc211eedca957f20b3103f63a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:35Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:35 crc kubenswrapper[4731]: I1129 07:06:35.741801 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea9d7960c59f71d1f4141843ab22ff9910f442c6a92fd496e9923a11fd19578b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:35Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:35 crc kubenswrapper[4731]: I1129 07:06:35.753219 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n6mtz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dc0aca-1039-4a30-a83e-48bd320d0eae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0b16629d98d0b9d5ede0c57e9befc86af08f3b501ef46c1e17b38a2ba4e366a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2bbjx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n6mtz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:35Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:35 crc kubenswrapper[4731]: I1129 07:06:35.773548 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5rsbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4026306c62b322aa02e08b8ebd6f9d5d1eaa75e50
26c9b1fc4c8d3c1ff77e2a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubern
etes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9z2qj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5rsbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:35Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:35 crc kubenswrapper[4731]: I1129 07:06:35.808344 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:06:35 crc kubenswrapper[4731]: E1129 07:06:35.808471 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:06:35 crc kubenswrapper[4731]: I1129 07:06:35.808928 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:06:35 crc kubenswrapper[4731]: E1129 07:06:35.808990 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:06:35 crc kubenswrapper[4731]: I1129 07:06:35.809043 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:06:35 crc kubenswrapper[4731]: E1129 07:06:35.809099 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:06:35 crc kubenswrapper[4731]: I1129 07:06:35.862068 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:35 crc kubenswrapper[4731]: I1129 07:06:35.862120 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:35 crc kubenswrapper[4731]: I1129 07:06:35.862134 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:35 crc kubenswrapper[4731]: I1129 07:06:35.862154 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:35 crc kubenswrapper[4731]: I1129 07:06:35.862167 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:35Z","lastTransitionTime":"2025-11-29T07:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:35 crc kubenswrapper[4731]: I1129 07:06:35.865930 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sc4p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46a65d85-a3f6-4c1f-8a87-799ccfb861c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e803b8c855802ae1b136acaaf383df0a2f5efccc46c8605813d43a3c7c39a700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69
b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7sc4p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:35Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:35 crc kubenswrapper[4731]: I1129 07:06:35.869126 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-8tvx8" Nov 29 07:06:35 crc kubenswrapper[4731]: I1129 07:06:35.899954 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:35Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:35 crc kubenswrapper[4731]: W1129 07:06:35.900694 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod719caf85_c94c_4dc2_b28f_f5c4ec29e79e.slice/crio-fd21f52781d154fef74d2b3a67871e57136816550267aca407f227b6cf89d183 WatchSource:0}: Error 
finding container fd21f52781d154fef74d2b3a67871e57136816550267aca407f227b6cf89d183: Status 404 returned error can't find the container with id fd21f52781d154fef74d2b3a67871e57136816550267aca407f227b6cf89d183 Nov 29 07:06:35 crc kubenswrapper[4731]: I1129 07:06:35.919624 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da8cd4c4826d5a431134fff8cd78b168bfededf913cc47058413e38aadd335ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount
\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:35Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:35 crc kubenswrapper[4731]: I1129 07:06:35.943984 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5198aca-b43e-4b59-8ec1-15b9d1cd4f4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0b6a87bb787bc87ec1a2b21918e754bda6b0f197bfdae20b010e8464cb9e70d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1938ff140a7af83ef53896ed4482dd3cf0a5429675ca28291de0915b8da27e81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b19f3d5d17a1f3552b659627294f3a47d667e2da2d3ee71e44fc876614f2f5a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea4dcd3b0a9fda14855ec9e54caa59fbbd8953c958ad8d0abc6ad50d8a9143e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f488e951e0b022155c26f1b4b73363e8e5b82ab6f14e03f9812fe279ff21d36\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1129 07:06:25.309042 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1031981952/tls.crt::/tmp/serving-cert-1031981952/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764399966\\\\\\\\\\\\\\\" (2025-11-29 07:06:06 +0000 UTC to 2025-12-29 
07:06:07 +0000 UTC (now=2025-11-29 07:06:25.308983691 +0000 UTC))\\\\\\\"\\\\nI1129 07:06:25.309145 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1129 07:06:25.309167 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1129 07:06:25.309190 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764399977\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764399977\\\\\\\\\\\\\\\" (2025-11-29 06:06:17 +0000 UTC to 2026-11-29 06:06:17 +0000 UTC (now=2025-11-29 07:06:25.309162626 +0000 UTC))\\\\\\\"\\\\nI1129 07:06:25.309217 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1129 07:06:25.309274 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1129 07:06:25.309309 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1031981952/tls.crt::/tmp/serving-cert-1031981952/tls.key\\\\\\\"\\\\nI1129 07:06:25.309020 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1129 07:06:25.309457 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1129 07:06:25.309507 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1129 07:06:25.311335 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1277bda1e922e493f1c36b64a4584458ab41cee4b1930d2d6409a585be9e3b38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b7d71882ecd45557e02dfe61f572da923732714cd4480848340fbe72dabcd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34b7d71882ecd45557e02dfe61f572da923732714cd4480848340fbe72dabcd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:35Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:35 crc kubenswrapper[4731]: I1129 07:06:35.959924 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2302dbb7-38db-4752-a5d0-2d055da3aec3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b12c2e84a5d5a5e4b2dfdc3a2052706d4b969ccd45719e05be8f1dbec5555cd6\\\",\\\"image\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shf4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2ffc93896b04d748f6dfda932202450e20bc7b298cc0d79c0c6499ead481d8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shf4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-rscr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:35Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:35 crc kubenswrapper[4731]: I1129 07:06:35.965169 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:35 crc kubenswrapper[4731]: I1129 07:06:35.965223 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:35 crc kubenswrapper[4731]: I1129 07:06:35.965235 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:35 crc kubenswrapper[4731]: I1129 07:06:35.965262 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:35 crc kubenswrapper[4731]: I1129 07:06:35.965276 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:35Z","lastTransitionTime":"2025-11-29T07:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:35 crc kubenswrapper[4731]: I1129 07:06:35.983823 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d4585c4-ac4a-4268-b25e-47509c17cfe2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b769a8a935c8eb447fe825cfb1b4c91be5abbe5992798442096dd3c00ce7c39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b769a8a935c8eb447fe825cfb1b4c91be5abbe5992798442096dd3c00ce7c39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4t5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:35Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:35 crc kubenswrapper[4731]: I1129 07:06:35.997322 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8tvx8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"719caf85-c94c-4dc2-b28f-f5c4ec29e79e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cfkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8tvx8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:35Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:36 crc kubenswrapper[4731]: I1129 07:06:36.020019 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:36Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:36 crc kubenswrapper[4731]: I1129 07:06:36.096682 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:36 crc kubenswrapper[4731]: I1129 07:06:36.096715 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:36 crc kubenswrapper[4731]: I1129 07:06:36.096723 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:36 crc kubenswrapper[4731]: I1129 07:06:36.096739 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:36 crc kubenswrapper[4731]: I1129 07:06:36.096751 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:36Z","lastTransitionTime":"2025-11-29T07:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:06:36 crc kubenswrapper[4731]: I1129 07:06:36.097978 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" event={"ID":"7d4585c4-ac4a-4268-b25e-47509c17cfe2","Type":"ContainerStarted","Data":"6fd71ed13728cbf7ab7ade3184cf8b579a598046c6cccdd7796f757aa7fc1c0a"} Nov 29 07:06:36 crc kubenswrapper[4731]: I1129 07:06:36.098206 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:36Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:36 crc kubenswrapper[4731]: I1129 07:06:36.100080 4731 generic.go:334] "Generic (PLEG): container finished" podID="46a65d85-a3f6-4c1f-8a87-799ccfb861c7" containerID="e803b8c855802ae1b136acaaf383df0a2f5efccc46c8605813d43a3c7c39a700" exitCode=0 Nov 29 07:06:36 crc kubenswrapper[4731]: I1129 07:06:36.100229 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7sc4p" event={"ID":"46a65d85-a3f6-4c1f-8a87-799ccfb861c7","Type":"ContainerDied","Data":"e803b8c855802ae1b136acaaf383df0a2f5efccc46c8605813d43a3c7c39a700"} Nov 29 07:06:36 crc kubenswrapper[4731]: I1129 07:06:36.102759 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-8tvx8" event={"ID":"719caf85-c94c-4dc2-b28f-f5c4ec29e79e","Type":"ContainerStarted","Data":"fd21f52781d154fef74d2b3a67871e57136816550267aca407f227b6cf89d183"} Nov 29 07:06:36 crc 
kubenswrapper[4731]: I1129 07:06:36.127262 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e77534f53f3d0ba9457f1e42e081f8fa9ddeff238db875686b42dd35fca480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6830f693b439b32e14b4e825fc51d03dddc4834afc7f2fb9c403ebdc706acb7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:36Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:36 crc kubenswrapper[4731]: I1129 07:06:36.184888 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers 
with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:36Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:36 crc kubenswrapper[4731]: I1129 07:06:36.209593 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:36 crc kubenswrapper[4731]: I1129 07:06:36.209643 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 29 07:06:36 crc kubenswrapper[4731]: I1129 07:06:36.209655 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:36 crc kubenswrapper[4731]: I1129 07:06:36.209775 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:36 crc kubenswrapper[4731]: I1129 07:06:36.209794 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:36Z","lastTransitionTime":"2025-11-29T07:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:06:36 crc kubenswrapper[4731]: I1129 07:06:36.211705 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da8cd4c4826d5a431134fff8cd78b168bfededf913cc47058413e38aadd335ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-29T07:06:36Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:36 crc kubenswrapper[4731]: I1129 07:06:36.237110 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5198aca-b43e-4b59-8ec1-15b9d1cd4f4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0b6a87bb787bc87ec1a2b21918e754bda6b0f197bfdae20b010e8464cb9e70d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1938ff140a7af83ef53896ed4482dd3cf0a5429675ca28291de0915b8da27e81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b19f3d5d17a1f3552b659627294f3a47d667e2da2d3ee71e44fc876614f2f5a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea4dcd3b0a9fda14855ec9e54caa59fbbd8953c958ad8d0abc6ad50d8a9143e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f488e951e0b022155c26f1b4b73363e8e5b82ab6f14e03f9812fe279ff21d36\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1129 07:06:25.309042 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1031981952/tls.crt::/tmp/serving-cert-1031981952/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764399966\\\\\\\\\\\\\\\" (2025-11-29 07:06:06 +0000 UTC to 2025-12-29 
07:06:07 +0000 UTC (now=2025-11-29 07:06:25.308983691 +0000 UTC))\\\\\\\"\\\\nI1129 07:06:25.309145 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1129 07:06:25.309167 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1129 07:06:25.309190 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764399977\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764399977\\\\\\\\\\\\\\\" (2025-11-29 06:06:17 +0000 UTC to 2026-11-29 06:06:17 +0000 UTC (now=2025-11-29 07:06:25.309162626 +0000 UTC))\\\\\\\"\\\\nI1129 07:06:25.309217 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1129 07:06:25.309274 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1129 07:06:25.309309 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1031981952/tls.crt::/tmp/serving-cert-1031981952/tls.key\\\\\\\"\\\\nI1129 07:06:25.309020 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1129 07:06:25.309457 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1129 07:06:25.309507 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1129 07:06:25.311335 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1277bda1e922e493f1c36b64a4584458ab41cee4b1930d2d6409a585be9e3b38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b7d71882ecd45557e02dfe61f572da923732714cd4480848340fbe72dabcd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34b7d71882ecd45557e02dfe61f572da923732714cd4480848340fbe72dabcd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:36Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:36 crc kubenswrapper[4731]: I1129 07:06:36.254823 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2302dbb7-38db-4752-a5d0-2d055da3aec3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b12c2e84a5d5a5e4b2dfdc3a2052706d4b969ccd45719e05be8f1dbec5555cd6\\\",\\\"image\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shf4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2ffc93896b04d748f6dfda932202450e20bc7b298cc0d79c0c6499ead481d8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shf4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-rscr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:36Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:36 crc kubenswrapper[4731]: I1129 07:06:36.289132 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d4585c4-ac4a-4268-b25e-47509c17cfe2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b769a8a935c8eb447fe825cfb1b4c91be5abbe5992798442096dd3c00ce7c39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b769a8a935c8eb447fe825cfb1b4c91be5abbe5992798442096dd3c00ce7c39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4t5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:36Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:36 crc kubenswrapper[4731]: I1129 07:06:36.312614 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:36 crc kubenswrapper[4731]: I1129 07:06:36.312661 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:36 crc kubenswrapper[4731]: I1129 07:06:36.312672 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:36 crc kubenswrapper[4731]: I1129 07:06:36.312687 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:36 crc kubenswrapper[4731]: I1129 07:06:36.312697 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:36Z","lastTransitionTime":"2025-11-29T07:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:36 crc kubenswrapper[4731]: I1129 07:06:36.410886 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8tvx8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"719caf85-c94c-4dc2-b28f-f5c4ec29e79e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cfkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8tvx8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:36Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:36 crc kubenswrapper[4731]: I1129 07:06:36.443753 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:36 crc kubenswrapper[4731]: I1129 07:06:36.443810 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:36 crc kubenswrapper[4731]: I1129 07:06:36.443824 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 
07:06:36 crc kubenswrapper[4731]: I1129 07:06:36.443849 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:36 crc kubenswrapper[4731]: I1129 07:06:36.443861 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:36Z","lastTransitionTime":"2025-11-29T07:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:06:36 crc kubenswrapper[4731]: I1129 07:06:36.466534 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:36Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:36 crc kubenswrapper[4731]: I1129 07:06:36.554778 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:36Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:36 crc kubenswrapper[4731]: I1129 07:06:36.559924 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:36 crc kubenswrapper[4731]: I1129 07:06:36.559992 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:36 crc kubenswrapper[4731]: I1129 07:06:36.560005 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:36 crc kubenswrapper[4731]: I1129 07:06:36.560026 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:36 crc kubenswrapper[4731]: I1129 07:06:36.560038 4731 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:36Z","lastTransitionTime":"2025-11-29T07:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:06:36 crc kubenswrapper[4731]: I1129 07:06:36.686957 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:36 crc kubenswrapper[4731]: I1129 07:06:36.687001 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:36 crc kubenswrapper[4731]: I1129 07:06:36.687009 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:36 crc kubenswrapper[4731]: I1129 07:06:36.687026 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:36 crc kubenswrapper[4731]: I1129 07:06:36.687036 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:36Z","lastTransitionTime":"2025-11-29T07:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:36 crc kubenswrapper[4731]: I1129 07:06:36.704065 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e77534f53f3d0ba9457f1e42e081f8fa9ddeff238db875686b42dd35fca480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6830f693b439b32e14b4e825fc51d03dddc4834afc7f2fb9c403ebdc706acb7a\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:36Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:36 crc kubenswrapper[4731]: I1129 07:06:36.793587 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:36 crc kubenswrapper[4731]: I1129 07:06:36.793638 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:36 crc kubenswrapper[4731]: I1129 07:06:36.793651 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:36 crc kubenswrapper[4731]: I1129 07:06:36.793671 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Nov 29 07:06:36 crc kubenswrapper[4731]: I1129 07:06:36.793681 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:36Z","lastTransitionTime":"2025-11-29T07:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:06:36 crc kubenswrapper[4731]: I1129 07:06:36.850595 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"174eea76-5d67-4e10-9a17-b4efc32676b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a567cf0e6a828fe031138f9f7874d01fe6e034cb814a3c2116c4918ddab71c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b0
84652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6101f1cec3dc38ec587fb109d42b09c353cf5ed6d89c9a2b799eb456b821cf58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5af4e6864674fc088ce1e153b29b66072dc3aae9b7c8c2c5dd3862a505dbac6a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"na
me\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86bdecb59f656fef2662f8fc78d427a4a9816fc211eedca957f20b3103f63a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:36Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:36 crc kubenswrapper[4731]: I1129 07:06:36.896946 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:36 crc kubenswrapper[4731]: I1129 07:06:36.897002 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 29 07:06:36 crc kubenswrapper[4731]: I1129 07:06:36.897012 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:36 crc kubenswrapper[4731]: I1129 07:06:36.897052 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:36 crc kubenswrapper[4731]: I1129 07:06:36.897072 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:36Z","lastTransitionTime":"2025-11-29T07:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:06:37 crc kubenswrapper[4731]: I1129 07:06:37.000588 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:37 crc kubenswrapper[4731]: I1129 07:06:37.000678 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:37 crc kubenswrapper[4731]: I1129 07:06:37.001714 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:37 crc kubenswrapper[4731]: I1129 07:06:37.001837 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:37 crc kubenswrapper[4731]: I1129 07:06:37.001858 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:37Z","lastTransitionTime":"2025-11-29T07:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:06:37 crc kubenswrapper[4731]: I1129 07:06:37.137011 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:37 crc kubenswrapper[4731]: I1129 07:06:37.137057 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:37 crc kubenswrapper[4731]: I1129 07:06:37.137070 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:37 crc kubenswrapper[4731]: I1129 07:06:37.137094 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:37 crc kubenswrapper[4731]: I1129 07:06:37.137107 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:37Z","lastTransitionTime":"2025-11-29T07:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:37 crc kubenswrapper[4731]: I1129 07:06:37.149108 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7sc4p" event={"ID":"46a65d85-a3f6-4c1f-8a87-799ccfb861c7","Type":"ContainerStarted","Data":"53c810e72af2e060f6378cc0809584254c09ab79d506a8d1b4ff14cc9f005d08"} Nov 29 07:06:37 crc kubenswrapper[4731]: I1129 07:06:37.151317 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-8tvx8" event={"ID":"719caf85-c94c-4dc2-b28f-f5c4ec29e79e","Type":"ContainerStarted","Data":"d765b2fbbea1c5491ee3ac49d3d217921d28d094b6204c0c9ca942f5975f7b8e"} Nov 29 07:06:37 crc kubenswrapper[4731]: I1129 07:06:37.153919 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" event={"ID":"7d4585c4-ac4a-4268-b25e-47509c17cfe2","Type":"ContainerStarted","Data":"c7ec6b9ad02e848a0669ac486da3f2ccc8ca363136fcd40618c45f16e64388bc"} Nov 29 07:06:37 crc kubenswrapper[4731]: I1129 07:06:37.333307 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:37 crc kubenswrapper[4731]: I1129 07:06:37.333359 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:37 crc kubenswrapper[4731]: I1129 07:06:37.333370 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:37 crc kubenswrapper[4731]: I1129 07:06:37.333387 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:37 crc kubenswrapper[4731]: I1129 07:06:37.333397 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:37Z","lastTransitionTime":"2025-11-29T07:06:37Z","reason":"KubeletNotReady","message":"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:06:37 crc kubenswrapper[4731]: I1129 07:06:37.429230 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea9d7960c59f71d1f4141843ab22ff9910f442c6a92fd496e9923a11fd19578b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:37Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:37 crc kubenswrapper[4731]: I1129 07:06:37.435411 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:37 crc kubenswrapper[4731]: I1129 07:06:37.435700 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:37 crc kubenswrapper[4731]: I1129 07:06:37.435781 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:37 crc kubenswrapper[4731]: I1129 07:06:37.435880 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:37 crc kubenswrapper[4731]: I1129 07:06:37.435954 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:37Z","lastTransitionTime":"2025-11-29T07:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:37 crc kubenswrapper[4731]: I1129 07:06:37.583251 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:37 crc kubenswrapper[4731]: I1129 07:06:37.583285 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:37 crc kubenswrapper[4731]: I1129 07:06:37.583296 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:37 crc kubenswrapper[4731]: I1129 07:06:37.583311 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:37 crc kubenswrapper[4731]: I1129 07:06:37.583319 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:37Z","lastTransitionTime":"2025-11-29T07:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:37 crc kubenswrapper[4731]: I1129 07:06:37.593531 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n6mtz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dc0aca-1039-4a30-a83e-48bd320d0eae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0b16629d98d0b9d5ede0c57e9befc86af08f3b501ef46c1e17b38a2ba4e366a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2bbjx\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n6mtz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:37Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:37 crc kubenswrapper[4731]: I1129 07:06:37.686065 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:37 crc kubenswrapper[4731]: I1129 07:06:37.686135 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:37 crc kubenswrapper[4731]: I1129 07:06:37.686165 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:37 crc kubenswrapper[4731]: I1129 07:06:37.686187 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:37 crc kubenswrapper[4731]: I1129 07:06:37.686199 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:37Z","lastTransitionTime":"2025-11-29T07:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:37 crc kubenswrapper[4731]: I1129 07:06:37.751892 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5rsbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4026306c62b322aa02e08b8ebd6f9d5d1eaa75e5026c9b1fc4c8d3c1ff77e2a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9z2qj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5rsbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:37Z 
is after 2025-08-24T17:21:41Z" Nov 29 07:06:37 crc kubenswrapper[4731]: I1129 07:06:37.788977 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:37 crc kubenswrapper[4731]: I1129 07:06:37.789018 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:37 crc kubenswrapper[4731]: I1129 07:06:37.789028 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:37 crc kubenswrapper[4731]: I1129 07:06:37.789045 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:37 crc kubenswrapper[4731]: I1129 07:06:37.789056 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:37Z","lastTransitionTime":"2025-11-29T07:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:37 crc kubenswrapper[4731]: I1129 07:06:37.799538 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sc4p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46a65d85-a3f6-4c1f-8a87-799ccfb861c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e803b8c855802ae1b136acaaf383df0a2f5efccc46c8605813d43a3c7c39a700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e803b8c855802ae1b136acaaf383df0a2f5efccc46c8605813d43a3c7c39a700\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7sc4p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:37Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:37 crc kubenswrapper[4731]: I1129 07:06:37.808387 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:06:37 crc kubenswrapper[4731]: E1129 07:06:37.808540 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:06:37 crc kubenswrapper[4731]: I1129 07:06:37.808677 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:06:37 crc kubenswrapper[4731]: I1129 07:06:37.808749 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:06:37 crc kubenswrapper[4731]: E1129 07:06:37.808858 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:06:37 crc kubenswrapper[4731]: E1129 07:06:37.808947 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:06:37 crc kubenswrapper[4731]: I1129 07:06:37.824228 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:37Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:37 crc kubenswrapper[4731]: I1129 07:06:37.895043 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:37 crc kubenswrapper[4731]: I1129 07:06:37.895082 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:37 crc kubenswrapper[4731]: I1129 07:06:37.895092 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:37 crc kubenswrapper[4731]: I1129 07:06:37.895108 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:37 crc kubenswrapper[4731]: I1129 07:06:37.895118 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:37Z","lastTransitionTime":"2025-11-29T07:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:06:37 crc kubenswrapper[4731]: I1129 07:06:37.982630 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da8cd4c4826d5a431134fff8cd78b168bfededf913cc47058413e38aadd335ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:37Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:38 crc kubenswrapper[4731]: I1129 07:06:38.074265 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5198aca-b43e-4b59-8ec1-15b9d1cd4f4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0b6a87bb787bc87ec1a2b21918e754bda6b0f197bfdae20b010e8464cb9e70d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1938ff140a7af83ef53896ed4482dd3cf0a5429675ca28291de0915b8da27e81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b19f3d5d17a1f3552b659627294f3a47d667e2da2d3ee71e44fc876614f2f5a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea4dcd3b0a9fda14855ec9e54caa59fbbd8953c958ad8d0abc6ad50d8a9143e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f488e951e0b022155c26f1b4b73363e8e5b82ab6f14e03f9812fe279ff21d36\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1129 07:06:25.309042 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1031981952/tls.crt::/tmp/serving-cert-1031981952/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764399966\\\\\\\\\\\\\\\" (2025-11-29 07:06:06 +0000 UTC to 2025-12-29 
07:06:07 +0000 UTC (now=2025-11-29 07:06:25.308983691 +0000 UTC))\\\\\\\"\\\\nI1129 07:06:25.309145 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1129 07:06:25.309167 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1129 07:06:25.309190 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764399977\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764399977\\\\\\\\\\\\\\\" (2025-11-29 06:06:17 +0000 UTC to 2026-11-29 06:06:17 +0000 UTC (now=2025-11-29 07:06:25.309162626 +0000 UTC))\\\\\\\"\\\\nI1129 07:06:25.309217 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1129 07:06:25.309274 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1129 07:06:25.309309 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1031981952/tls.crt::/tmp/serving-cert-1031981952/tls.key\\\\\\\"\\\\nI1129 07:06:25.309020 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1129 07:06:25.309457 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1129 07:06:25.309507 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1129 07:06:25.311335 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1277bda1e922e493f1c36b64a4584458ab41cee4b1930d2d6409a585be9e3b38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b7d71882ecd45557e02dfe61f572da923732714cd4480848340fbe72dabcd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34b7d71882ecd45557e02dfe61f572da923732714cd4480848340fbe72dabcd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:38Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:38 crc kubenswrapper[4731]: I1129 07:06:38.090958 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2302dbb7-38db-4752-a5d0-2d055da3aec3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b12c2e84a5d5a5e4b2dfdc3a2052706d4b969ccd45719e05be8f1dbec5555cd6\\\",\\\"image\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shf4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2ffc93896b04d748f6dfda932202450e20bc7b298cc0d79c0c6499ead481d8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shf4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-rscr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:38Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:38 crc kubenswrapper[4731]: I1129 07:06:38.113467 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d4585c4-ac4a-4268-b25e-47509c17cfe2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b769a8a935c8eb447fe825cfb1b4c91be5abbe5992798442096dd3c00ce7c39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b769a8a935c8eb447fe825cfb1b4c91be5abbe5992798442096dd3c00ce7c39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4t5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:38Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:38 crc kubenswrapper[4731]: I1129 07:06:38.124210 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:38 crc kubenswrapper[4731]: I1129 07:06:38.124234 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:38 crc kubenswrapper[4731]: I1129 07:06:38.124245 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:38 crc kubenswrapper[4731]: I1129 07:06:38.124261 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:38 crc kubenswrapper[4731]: I1129 07:06:38.124271 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:38Z","lastTransitionTime":"2025-11-29T07:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:38 crc kubenswrapper[4731]: I1129 07:06:38.161785 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" event={"ID":"7d4585c4-ac4a-4268-b25e-47509c17cfe2","Type":"ContainerStarted","Data":"2c34e10091af35202da4f97804418c6232aaccd75303ec402ed3f52de19eba30"} Nov 29 07:06:38 crc kubenswrapper[4731]: I1129 07:06:38.161850 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" event={"ID":"7d4585c4-ac4a-4268-b25e-47509c17cfe2","Type":"ContainerStarted","Data":"77f2885a107dcb4bca4dd71970f28cd5c82abd26240a9b47a25bab79d7a6cfca"} Nov 29 07:06:38 crc kubenswrapper[4731]: I1129 07:06:38.228253 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8tvx8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"719caf85-c94c-4dc2-b28f-f5c4ec29e79e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d765b2fbbea1c5491ee3ac49d3d217921d28d094b6204c0c9ca942f5975f7b8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cfkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8tvx8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:38Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:38 crc kubenswrapper[4731]: I1129 07:06:38.278466 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:38 crc kubenswrapper[4731]: I1129 07:06:38.278504 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:38 crc kubenswrapper[4731]: I1129 07:06:38.278514 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:38 crc kubenswrapper[4731]: I1129 07:06:38.278531 4731 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:38 crc kubenswrapper[4731]: I1129 07:06:38.278544 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:38Z","lastTransitionTime":"2025-11-29T07:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:06:38 crc kubenswrapper[4731]: I1129 07:06:38.313318 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:38Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:38 crc kubenswrapper[4731]: I1129 07:06:38.330547 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:38Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:38 crc kubenswrapper[4731]: I1129 07:06:38.345552 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e77534f53f3d0ba9457f1e42e081f8fa9ddeff238db875686b42dd35fca480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6830f693b439b32e14b4e825fc51d03dddc4834afc7f2fb9c403ebdc706acb7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:38Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:38 crc kubenswrapper[4731]: I1129 07:06:38.360689 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"174eea76-5d67-4e10-9a17-b4efc32676b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a567cf0e6a828fe031138f9f7874d01fe6e034cb814a3c2116c4918ddab71c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6101f1cec3dc38ec587fb109d42b09c353cf5ed6d89c9a2b799eb456b821cf58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5af4e6864674fc088ce1e153b29b66072dc3aae9b7c8c2c5dd3862a505dbac6a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86bdecb59f656fef2662f8fc78d427a4a9816fc211eedca957f20b3103f63a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:38Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:38 crc kubenswrapper[4731]: I1129 07:06:38.381863 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:38 crc kubenswrapper[4731]: I1129 07:06:38.381929 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:38 crc kubenswrapper[4731]: I1129 07:06:38.381944 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:38 crc kubenswrapper[4731]: I1129 07:06:38.381967 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:38 crc kubenswrapper[4731]: I1129 07:06:38.381980 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:38Z","lastTransitionTime":"2025-11-29T07:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:06:38 crc kubenswrapper[4731]: I1129 07:06:38.383383 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea9d7960c59f71d1f4141843ab22ff9910f442c6a92fd496e9923a11fd19578b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:38Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:38 crc kubenswrapper[4731]: I1129 07:06:38.395369 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n6mtz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dc0aca-1039-4a30-a83e-48bd320d0eae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0b16629d98d0b9d5ede0c57e9befc86af08f3b501ef46c1e17b38a2ba4e366a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2bbjx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n6mtz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:38Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:38 crc kubenswrapper[4731]: I1129 07:06:38.407770 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5rsbt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4026306c62b322aa02e08b8ebd6f9d5d1eaa75e5026c9b1fc4c8d3c1ff77e2a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9z2qj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5rsbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:38Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:38 crc kubenswrapper[4731]: I1129 07:06:38.422953 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 
07:06:38 crc kubenswrapper[4731]: I1129 07:06:38.426407 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sc4p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46a65d85-a3f6-4c1f-8a87-799ccfb861c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e803b8c855802ae1b136acaaf383df0a2f5efccc46c8605813d43a3c7c39a700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e803b8c855802ae1b136acaaf383df0a2f5efccc46c8605813d43a3c7c39a700\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53c810e72af2e060f6378cc0809584254c09ab79d506a8d1b4ff14cc9f005d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath
\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7sc4p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:38Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:38 crc kubenswrapper[4731]: I1129 07:06:38.485639 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:38 crc kubenswrapper[4731]: I1129 07:06:38.486444 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:38 crc kubenswrapper[4731]: I1129 07:06:38.486545 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:38 crc kubenswrapper[4731]: I1129 07:06:38.486652 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 
07:06:38 crc kubenswrapper[4731]: I1129 07:06:38.486751 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:38Z","lastTransitionTime":"2025-11-29T07:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:06:38 crc kubenswrapper[4731]: I1129 07:06:38.510105 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea9d7960c59f71d1f4141843ab22ff9910f442c6a92fd496e9923a11fd19578b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMo
unts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:38Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:38 crc kubenswrapper[4731]: I1129 07:06:38.524900 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n6mtz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dc0aca-1039-4a30-a83e-48bd320d0eae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0b16629d98d0b9d5ede0c57e9befc86af08f3b501ef46c1e17b38a2ba4e366a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2bbjx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n6mtz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:38Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:38 crc kubenswrapper[4731]: I1129 07:06:38.555888 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5rsbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4026306c62b322aa02e08b8ebd6f9d5d1eaa75e5026c9b1fc4c8d3c1ff77e2a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9z2qj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5rsbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:38Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:38 crc kubenswrapper[4731]: I1129 07:06:38.573371 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sc4p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46a65d85-a3f6-4c1f-8a87-799ccfb861c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e803b8c855802ae1b136acaaf383df0a2f5efccc46c8605813d43a3c7c39a700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e803b8c855802ae1b136acaaf383df0a2f5efccc46c8605813d43a3c7c39a700\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53c810e72af2e060f6378cc0809584254c09ab79d506a8d1b4ff14cc9f005d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath
\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7sc4p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:38Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:38 crc kubenswrapper[4731]: I1129 07:06:38.590059 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:38 crc kubenswrapper[4731]: I1129 07:06:38.590380 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:38 crc kubenswrapper[4731]: I1129 07:06:38.590485 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:38 crc kubenswrapper[4731]: I1129 07:06:38.590591 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 
07:06:38 crc kubenswrapper[4731]: I1129 07:06:38.590725 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:38Z","lastTransitionTime":"2025-11-29T07:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:06:38 crc kubenswrapper[4731]: I1129 07:06:38.650388 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"174eea76-5d67-4e10-9a17-b4efc32676b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a567cf0e6a828fe031138f9f7874d01fe6e034cb814a3c2116c4918ddab71c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6101f1cec3dc38ec587fb109d42b09c353cf5ed6d89c9a2b799eb456b821cf58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5af4e6864674fc088ce1e153b29b66072dc3aae9b7c8c2c5dd3862a505dbac6a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}
,{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86bdecb59f656fef2662f8fc78d427a4a9816fc211eedca957f20b3103f63a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:38Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:38 crc kubenswrapper[4731]: I1129 07:06:38.668768 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:38Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:38 crc kubenswrapper[4731]: I1129 07:06:38.687643 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da8cd4c4826d5a431134fff8cd78b168bfededf913cc47058413e38aadd335ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-29T07:06:38Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:38 crc kubenswrapper[4731]: I1129 07:06:38.692881 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:38 crc kubenswrapper[4731]: I1129 07:06:38.692929 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:38 crc kubenswrapper[4731]: I1129 07:06:38.692938 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:38 crc kubenswrapper[4731]: I1129 07:06:38.692954 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:38 crc kubenswrapper[4731]: I1129 07:06:38.692965 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:38Z","lastTransitionTime":"2025-11-29T07:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:38 crc kubenswrapper[4731]: I1129 07:06:38.704433 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5198aca-b43e-4b59-8ec1-15b9d1cd4f4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0b6a87bb787bc87ec1a2b21918e754bda6b0f197bfdae20b010e8464cb9e70d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1938ff140a7af83ef53896ed4482dd3cf0a5429675ca28291de0915b8da27e81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b19f3d5d17a1f3552b659627294f3a47d667e2da2d3ee71e44fc876614f2f5a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea4dcd3b0a9fda14855ec9e54caa59fbbd8953c958ad8d0abc6ad50d8a9143e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f488e951e0b022155c26f1b4b73363e8e5b82ab6f14e03f9812fe279ff21d36\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1129 07:06:25.309042 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1031981952/tls.crt::/tmp/serving-cert-1031981952/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764399966\\\\\\\\\\\\\\\" (2025-11-29 07:06:06 +0000 UTC to 2025-12-29 07:06:07 +0000 UTC (now=2025-11-29 07:06:25.308983691 +0000 UTC))\\\\\\\"\\\\nI1129 07:06:25.309145 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1129 07:06:25.309167 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1129 07:06:25.309190 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764399977\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764399977\\\\\\\\\\\\\\\" (2025-11-29 06:06:17 +0000 UTC to 2026-11-29 06:06:17 +0000 UTC (now=2025-11-29 07:06:25.309162626 +0000 UTC))\\\\\\\"\\\\nI1129 07:06:25.309217 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1129 07:06:25.309274 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1129 07:06:25.309309 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1031981952/tls.crt::/tmp/serving-cert-1031981952/tls.key\\\\\\\"\\\\nI1129 07:06:25.309020 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1129 07:06:25.309457 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1129 07:06:25.309507 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1129 07:06:25.311335 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1277bda1e922e493f1c36b64a4584458ab41cee4b1930d2d6409a585be9e3b38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b7d71882ecd45557e02dfe61f5
72da923732714cd4480848340fbe72dabcd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34b7d71882ecd45557e02dfe61f572da923732714cd4480848340fbe72dabcd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:38Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:38 crc kubenswrapper[4731]: I1129 07:06:38.739994 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2302dbb7-38db-4752-a5d0-2d055da3aec3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b12c2e84a5d5a5e4b2dfdc3a2052706d4b969ccd45719e05be8f1dbec5555cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shf4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2ffc93896b04d748f6dfda932202450e20bc7b2
98cc0d79c0c6499ead481d8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shf4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rscr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:38Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:38 crc kubenswrapper[4731]: I1129 07:06:38.768775 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d4585c4-ac4a-4268-b25e-47509c17cfe2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b769a8a935c8eb447fe825cfb1b4c91be5abbe5992798442096dd3c00ce7c39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b769a8a935c8eb447fe825cfb1b4c91be5abbe5992798442096dd3c00ce7c39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4t5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:38Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:38 crc kubenswrapper[4731]: I1129 07:06:38.784966 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8tvx8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"719caf85-c94c-4dc2-b28f-f5c4ec29e79e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d765b2fbbea1c5491ee3ac49d3d217921d28d094b6204c0c9ca942f5975f7b8e\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cfkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8tvx8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:38Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:38 crc kubenswrapper[4731]: I1129 07:06:38.796447 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:38 crc kubenswrapper[4731]: I1129 07:06:38.796517 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:38 crc kubenswrapper[4731]: I1129 07:06:38.796530 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:38 crc kubenswrapper[4731]: I1129 
07:06:38.796551 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:38 crc kubenswrapper[4731]: I1129 07:06:38.796586 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:38Z","lastTransitionTime":"2025-11-29T07:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:06:38 crc kubenswrapper[4731]: I1129 07:06:38.802741 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:38Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:38 crc kubenswrapper[4731]: I1129 07:06:38.818963 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e77534f53f3d0ba9457f1e42e081f8fa9ddeff238db875686b42dd35fca480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6830f693b439b32e14b4e825fc51d03dddc4834afc7f2fb9c403ebdc706acb7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:38Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:38 crc kubenswrapper[4731]: I1129 07:06:38.834199 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:38Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:38 crc kubenswrapper[4731]: I1129 07:06:38.899866 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:38 crc kubenswrapper[4731]: I1129 07:06:38.899905 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:38 crc 
kubenswrapper[4731]: I1129 07:06:38.899915 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:38 crc kubenswrapper[4731]: I1129 07:06:38.899930 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:38 crc kubenswrapper[4731]: I1129 07:06:38.899942 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:38Z","lastTransitionTime":"2025-11-29T07:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:06:39 crc kubenswrapper[4731]: I1129 07:06:39.003863 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:39 crc kubenswrapper[4731]: I1129 07:06:39.003918 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:39 crc kubenswrapper[4731]: I1129 07:06:39.003930 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:39 crc kubenswrapper[4731]: I1129 07:06:39.003949 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:39 crc kubenswrapper[4731]: I1129 07:06:39.003963 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:39Z","lastTransitionTime":"2025-11-29T07:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:39 crc kubenswrapper[4731]: I1129 07:06:39.146904 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:39 crc kubenswrapper[4731]: I1129 07:06:39.147249 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:39 crc kubenswrapper[4731]: I1129 07:06:39.147432 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:39 crc kubenswrapper[4731]: I1129 07:06:39.147524 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:39 crc kubenswrapper[4731]: I1129 07:06:39.147663 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:39Z","lastTransitionTime":"2025-11-29T07:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:39 crc kubenswrapper[4731]: I1129 07:06:39.167238 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" event={"ID":"7d4585c4-ac4a-4268-b25e-47509c17cfe2","Type":"ContainerStarted","Data":"37e2099f7c7618e48c7ca7d3a76e9e63e4af75068a72efb7b27b3565c270787f"} Nov 29 07:06:39 crc kubenswrapper[4731]: I1129 07:06:39.250020 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:39 crc kubenswrapper[4731]: I1129 07:06:39.250065 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:39 crc kubenswrapper[4731]: I1129 07:06:39.250075 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:39 crc kubenswrapper[4731]: I1129 07:06:39.250091 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:39 crc kubenswrapper[4731]: I1129 07:06:39.250103 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:39Z","lastTransitionTime":"2025-11-29T07:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:39 crc kubenswrapper[4731]: I1129 07:06:39.353089 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:39 crc kubenswrapper[4731]: I1129 07:06:39.353451 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:39 crc kubenswrapper[4731]: I1129 07:06:39.353546 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:39 crc kubenswrapper[4731]: I1129 07:06:39.353632 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:39 crc kubenswrapper[4731]: I1129 07:06:39.353752 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:39Z","lastTransitionTime":"2025-11-29T07:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:39 crc kubenswrapper[4731]: I1129 07:06:39.456729 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:39 crc kubenswrapper[4731]: I1129 07:06:39.456895 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:39 crc kubenswrapper[4731]: I1129 07:06:39.456979 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:39 crc kubenswrapper[4731]: I1129 07:06:39.457063 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:39 crc kubenswrapper[4731]: I1129 07:06:39.457139 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:39Z","lastTransitionTime":"2025-11-29T07:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:39 crc kubenswrapper[4731]: I1129 07:06:39.560010 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:39 crc kubenswrapper[4731]: I1129 07:06:39.560079 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:39 crc kubenswrapper[4731]: I1129 07:06:39.560092 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:39 crc kubenswrapper[4731]: I1129 07:06:39.560115 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:39 crc kubenswrapper[4731]: I1129 07:06:39.560127 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:39Z","lastTransitionTime":"2025-11-29T07:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:39 crc kubenswrapper[4731]: I1129 07:06:39.663465 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:39 crc kubenswrapper[4731]: I1129 07:06:39.663538 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:39 crc kubenswrapper[4731]: I1129 07:06:39.663553 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:39 crc kubenswrapper[4731]: I1129 07:06:39.663610 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:39 crc kubenswrapper[4731]: I1129 07:06:39.663627 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:39Z","lastTransitionTime":"2025-11-29T07:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:39 crc kubenswrapper[4731]: I1129 07:06:39.766686 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:39 crc kubenswrapper[4731]: I1129 07:06:39.766781 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:39 crc kubenswrapper[4731]: I1129 07:06:39.766793 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:39 crc kubenswrapper[4731]: I1129 07:06:39.766814 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:39 crc kubenswrapper[4731]: I1129 07:06:39.766829 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:39Z","lastTransitionTime":"2025-11-29T07:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:06:39 crc kubenswrapper[4731]: I1129 07:06:39.806875 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:06:39 crc kubenswrapper[4731]: I1129 07:06:39.806984 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:06:39 crc kubenswrapper[4731]: E1129 07:06:39.807056 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:06:39 crc kubenswrapper[4731]: E1129 07:06:39.807173 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:06:39 crc kubenswrapper[4731]: I1129 07:06:39.807239 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:06:39 crc kubenswrapper[4731]: E1129 07:06:39.807417 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:06:39 crc kubenswrapper[4731]: I1129 07:06:39.869833 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:39 crc kubenswrapper[4731]: I1129 07:06:39.869885 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:39 crc kubenswrapper[4731]: I1129 07:06:39.869900 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:39 crc kubenswrapper[4731]: I1129 07:06:39.869919 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:39 crc kubenswrapper[4731]: I1129 07:06:39.869934 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:39Z","lastTransitionTime":"2025-11-29T07:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:39 crc kubenswrapper[4731]: I1129 07:06:39.972909 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:39 crc kubenswrapper[4731]: I1129 07:06:39.972958 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:39 crc kubenswrapper[4731]: I1129 07:06:39.972971 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:39 crc kubenswrapper[4731]: I1129 07:06:39.972990 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:39 crc kubenswrapper[4731]: I1129 07:06:39.973008 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:39Z","lastTransitionTime":"2025-11-29T07:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:40 crc kubenswrapper[4731]: I1129 07:06:40.076263 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:40 crc kubenswrapper[4731]: I1129 07:06:40.076308 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:40 crc kubenswrapper[4731]: I1129 07:06:40.076323 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:40 crc kubenswrapper[4731]: I1129 07:06:40.076340 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:40 crc kubenswrapper[4731]: I1129 07:06:40.076349 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:40Z","lastTransitionTime":"2025-11-29T07:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:40 crc kubenswrapper[4731]: I1129 07:06:40.174806 4731 generic.go:334] "Generic (PLEG): container finished" podID="46a65d85-a3f6-4c1f-8a87-799ccfb861c7" containerID="53c810e72af2e060f6378cc0809584254c09ab79d506a8d1b4ff14cc9f005d08" exitCode=0 Nov 29 07:06:40 crc kubenswrapper[4731]: I1129 07:06:40.174907 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7sc4p" event={"ID":"46a65d85-a3f6-4c1f-8a87-799ccfb861c7","Type":"ContainerDied","Data":"53c810e72af2e060f6378cc0809584254c09ab79d506a8d1b4ff14cc9f005d08"} Nov 29 07:06:40 crc kubenswrapper[4731]: I1129 07:06:40.178083 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:40 crc kubenswrapper[4731]: I1129 07:06:40.178114 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:40 crc kubenswrapper[4731]: I1129 07:06:40.178124 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:40 crc kubenswrapper[4731]: I1129 07:06:40.178142 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:40 crc kubenswrapper[4731]: I1129 07:06:40.178154 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:40Z","lastTransitionTime":"2025-11-29T07:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:40 crc kubenswrapper[4731]: I1129 07:06:40.179793 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" event={"ID":"7d4585c4-ac4a-4268-b25e-47509c17cfe2","Type":"ContainerStarted","Data":"64a3bcebacca0fb7288976faa2ee468f1496339cc6deba0ce36b29d0537d493f"} Nov 29 07:06:40 crc kubenswrapper[4731]: I1129 07:06:40.196775 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5198aca-b43e-4b59-8ec1-15b9d1cd4f4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0b6a87bb787bc87ec1a2b21918e754bda6b0f197bfdae20b010e8464cb9e70d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\
":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1938ff140a7af83ef53896ed4482dd3cf0a5429675ca28291de0915b8da27e81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b19f3d5d17a1f3552b659627294f3a47d667e2da2d3ee71e44fc876614f2f5a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea4dcd3b0a9fda14855ec9e54caa5
9fbbd8953c958ad8d0abc6ad50d8a9143e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f488e951e0b022155c26f1b4b73363e8e5b82ab6f14e03f9812fe279ff21d36\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1129 07:06:25.309042 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1031981952/tls.crt::/tmp/serving-cert-1031981952/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764399966\\\\\\\\\\\\\\\" (2025-11-29 07:06:06 +0000 UTC to 2025-12-29 07:06:07 +0000 UTC (now=2025-11-29 07:06:25.308983691 +0000 UTC))\\\\\\\"\\\\nI1129 07:06:25.309145 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1129 07:06:25.309167 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1129 07:06:25.309190 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764399977\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764399977\\\\\\\\\\\\\\\" (2025-11-29 06:06:17 +0000 UTC to 2026-11-29 06:06:17 +0000 UTC (now=2025-11-29 07:06:25.309162626 +0000 UTC))\\\\\\\"\\\\nI1129 07:06:25.309217 1 
secure_serving.go:213] Serving securely on [::]:17697\\\\nI1129 07:06:25.309274 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1129 07:06:25.309309 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1031981952/tls.crt::/tmp/serving-cert-1031981952/tls.key\\\\\\\"\\\\nI1129 07:06:25.309020 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1129 07:06:25.309457 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1129 07:06:25.309507 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1129 07:06:25.311335 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1277bda1e922e493f1c36b64a4584458ab41cee4b1930d2d6409a585be9e3b38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"
state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b7d71882ecd45557e02dfe61f572da923732714cd4480848340fbe72dabcd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34b7d71882ecd45557e02dfe61f572da923732714cd4480848340fbe72dabcd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:40Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:40 crc kubenswrapper[4731]: I1129 07:06:40.209190 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2302dbb7-38db-4752-a5d0-2d055da3aec3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b12c2e84a5d5a5e4b2dfdc3a2052706d4b969ccd45719e05be8f1dbec5555cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shf4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2ffc93896b04d748f6dfda932202450e20bc7b2
98cc0d79c0c6499ead481d8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shf4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rscr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:40Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:40 crc kubenswrapper[4731]: I1129 07:06:40.230496 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d4585c4-ac4a-4268-b25e-47509c17cfe2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b769a8a935c8eb447fe825cfb1b4c91be5abbe5992798442096dd3c00ce7c39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b769a8a935c8eb447fe825cfb1b4c91be5abbe5992798442096dd3c00ce7c39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4t5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:40Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:40 crc kubenswrapper[4731]: I1129 07:06:40.243521 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8tvx8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"719caf85-c94c-4dc2-b28f-f5c4ec29e79e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d765b2fbbea1c5491ee3ac49d3d217921d28d094b6204c0c9ca942f5975f7b8e\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cfkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8tvx8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:40Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:40 crc kubenswrapper[4731]: I1129 07:06:40.260040 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:40Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:40 crc kubenswrapper[4731]: I1129 07:06:40.281671 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:40 crc kubenswrapper[4731]: I1129 07:06:40.281723 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:40 crc kubenswrapper[4731]: I1129 07:06:40.281735 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:40 crc kubenswrapper[4731]: I1129 07:06:40.281756 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:40 crc kubenswrapper[4731]: I1129 07:06:40.281770 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:40Z","lastTransitionTime":"2025-11-29T07:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:06:40 crc kubenswrapper[4731]: I1129 07:06:40.281845 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:40Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:40 crc kubenswrapper[4731]: I1129 07:06:40.298474 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e77534f53f3d0ba9457f1e42e081f8fa9ddeff238db875686b42dd35fca480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6830f693b439b32e14b4e825fc51d03dddc4834afc7f2fb9c403ebdc706acb7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:40Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:40 crc kubenswrapper[4731]: I1129 07:06:40.314345 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"174eea76-5d67-4e10-9a17-b4efc32676b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a567cf0e6a828fe031138f9f7874d01fe6e034cb814a3c2116c4918ddab71c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6101f1cec3dc38ec587fb109d42b09c353cf5ed6d89c9a2b799eb456b821cf58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5af4e6864674fc088ce1e153b29b66072dc3aae9b7c8c2c5dd3862a505dbac6a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86bdecb59f656fef2662f8fc78d427a4a9816fc211eedca957f20b3103f63a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:40Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:40 crc kubenswrapper[4731]: I1129 07:06:40.330863 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea9d7960c59f71d1f4141843ab22ff9910f442c6a92fd496e9923a11fd19578b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:40Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:40 crc kubenswrapper[4731]: I1129 07:06:40.343834 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n6mtz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dc0aca-1039-4a30-a83e-48bd320d0eae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0b16629d98d0b9d5ede0c57e9befc86af08f3b501ef46c1e17b38a2ba4e366a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2bbjx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n6mtz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:40Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:40 crc kubenswrapper[4731]: I1129 07:06:40.358008 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5rsbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4026306c62b322aa02e08b8ebd6f9d5d1eaa75e50
26c9b1fc4c8d3c1ff77e2a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubern
etes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9z2qj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5rsbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:40Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:40 crc kubenswrapper[4731]: I1129 07:06:40.374872 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sc4p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46a65d85-a3f6-4c1f-8a87-799ccfb861c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e803b8c855802ae1b136acaaf383df0a2f5efccc46c8605813d43a3c7c39a700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e803b8c855802ae1b136acaaf383df0a2f5efccc46c8605813d43a3c7c39a700\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53c810e72af2e060f6378cc0809584254c09ab79d506a8d1b4ff14cc9f005d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53c810e72af2e060f6378cc0809584254c09ab79d506a8d1b4ff14cc9f005d08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"re
ason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7sc4p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:40Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:40 crc kubenswrapper[4731]: I1129 07:06:40.385290 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:40 crc 
kubenswrapper[4731]: I1129 07:06:40.385349 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:40 crc kubenswrapper[4731]: I1129 07:06:40.385364 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:40 crc kubenswrapper[4731]: I1129 07:06:40.385386 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:40 crc kubenswrapper[4731]: I1129 07:06:40.385402 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:40Z","lastTransitionTime":"2025-11-29T07:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:06:40 crc kubenswrapper[4731]: I1129 07:06:40.392833 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:40Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:40 crc kubenswrapper[4731]: I1129 07:06:40.408603 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da8cd4c4826d5a431134fff8cd78b168bfededf913cc47058413e38aadd335ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-29T07:06:40Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:40 crc kubenswrapper[4731]: I1129 07:06:40.487725 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:40 crc kubenswrapper[4731]: I1129 07:06:40.487780 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:40 crc kubenswrapper[4731]: I1129 07:06:40.487797 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:40 crc kubenswrapper[4731]: I1129 07:06:40.487820 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:40 crc kubenswrapper[4731]: I1129 07:06:40.487836 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:40Z","lastTransitionTime":"2025-11-29T07:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:40 crc kubenswrapper[4731]: I1129 07:06:40.590387 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:40 crc kubenswrapper[4731]: I1129 07:06:40.590460 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:40 crc kubenswrapper[4731]: I1129 07:06:40.590468 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:40 crc kubenswrapper[4731]: I1129 07:06:40.590484 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:40 crc kubenswrapper[4731]: I1129 07:06:40.590495 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:40Z","lastTransitionTime":"2025-11-29T07:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:40 crc kubenswrapper[4731]: I1129 07:06:40.693226 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:40 crc kubenswrapper[4731]: I1129 07:06:40.693626 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:40 crc kubenswrapper[4731]: I1129 07:06:40.693638 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:40 crc kubenswrapper[4731]: I1129 07:06:40.693655 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:40 crc kubenswrapper[4731]: I1129 07:06:40.693667 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:40Z","lastTransitionTime":"2025-11-29T07:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:40 crc kubenswrapper[4731]: I1129 07:06:40.797372 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:40 crc kubenswrapper[4731]: I1129 07:06:40.797462 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:40 crc kubenswrapper[4731]: I1129 07:06:40.797475 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:40 crc kubenswrapper[4731]: I1129 07:06:40.797495 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:40 crc kubenswrapper[4731]: I1129 07:06:40.797507 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:40Z","lastTransitionTime":"2025-11-29T07:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:40 crc kubenswrapper[4731]: I1129 07:06:40.901050 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:40 crc kubenswrapper[4731]: I1129 07:06:40.901100 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:40 crc kubenswrapper[4731]: I1129 07:06:40.901109 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:40 crc kubenswrapper[4731]: I1129 07:06:40.901154 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:40 crc kubenswrapper[4731]: I1129 07:06:40.901169 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:40Z","lastTransitionTime":"2025-11-29T07:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:41 crc kubenswrapper[4731]: I1129 07:06:41.003900 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:41 crc kubenswrapper[4731]: I1129 07:06:41.003951 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:41 crc kubenswrapper[4731]: I1129 07:06:41.003965 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:41 crc kubenswrapper[4731]: I1129 07:06:41.003987 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:41 crc kubenswrapper[4731]: I1129 07:06:41.003999 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:41Z","lastTransitionTime":"2025-11-29T07:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:41 crc kubenswrapper[4731]: I1129 07:06:41.107485 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:41 crc kubenswrapper[4731]: I1129 07:06:41.107944 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:41 crc kubenswrapper[4731]: I1129 07:06:41.107962 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:41 crc kubenswrapper[4731]: I1129 07:06:41.107984 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:41 crc kubenswrapper[4731]: I1129 07:06:41.107997 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:41Z","lastTransitionTime":"2025-11-29T07:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:41 crc kubenswrapper[4731]: I1129 07:06:41.186109 4731 generic.go:334] "Generic (PLEG): container finished" podID="46a65d85-a3f6-4c1f-8a87-799ccfb861c7" containerID="db06df6b97d6ad28e3d1df0b237670573e2dc84ad1e4975431d0d6fa74497617" exitCode=0 Nov 29 07:06:41 crc kubenswrapper[4731]: I1129 07:06:41.186168 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7sc4p" event={"ID":"46a65d85-a3f6-4c1f-8a87-799ccfb861c7","Type":"ContainerDied","Data":"db06df6b97d6ad28e3d1df0b237670573e2dc84ad1e4975431d0d6fa74497617"} Nov 29 07:06:41 crc kubenswrapper[4731]: I1129 07:06:41.201831 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:41Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:41 crc kubenswrapper[4731]: I1129 07:06:41.214011 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:41 crc kubenswrapper[4731]: I1129 07:06:41.214059 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:41 crc kubenswrapper[4731]: I1129 07:06:41.214076 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:41 crc 
kubenswrapper[4731]: I1129 07:06:41.214095 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:41 crc kubenswrapper[4731]: I1129 07:06:41.214112 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:41Z","lastTransitionTime":"2025-11-29T07:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:06:41 crc kubenswrapper[4731]: I1129 07:06:41.225911 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da8cd4c4826d5a431134fff8cd78b168bfededf913cc47058413e38aadd335ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:41Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:41 crc kubenswrapper[4731]: I1129 07:06:41.248376 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d4585c4-ac4a-4268-b25e-47509c17cfe2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\
"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b769a8a935c8eb447fe825cfb1b4c91be5abbe5992798442096dd3c00ce7c39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b769a8a935c8eb447fe825cfb1b4c91be5abbe5992798442096dd3c00ce7c39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4t5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:41Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:41 crc kubenswrapper[4731]: I1129 07:06:41.261731 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8tvx8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"719caf85-c94c-4dc2-b28f-f5c4ec29e79e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d765b2fbbea1c5491ee3ac49d3d217921d28d094b6204c0c9ca942f5975f7b8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cfkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8tvx8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:41Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:41 crc kubenswrapper[4731]: I1129 07:06:41.277769 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5198aca-b43e-4b59-8ec1-15b9d1cd4f4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0b6a87bb787bc87ec1a2b21918e754bda6b0f197bfdae20b010e8464cb9e70d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1938ff140a7af83ef53896ed4482dd3cf0a5429675ca28291de0915b8da27e81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b19f3d5d17a1f3552b659627294f3a47d667e2da2d3ee71e44fc876614f2f5a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea4dcd3b0a9fda14855ec9e54caa59fbbd8953c958ad8d0abc6ad50d8a9143e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f488e951e0b022155c26f1b4b73363e8e5b82ab6f14e03f9812fe279ff21d36\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1129 07:06:25.309042 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1031981952/tls.crt::/tmp/serving-cert-1031981952/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764399966\\\\\\\\\\\\\\\" (2025-11-29 07:06:06 +0000 UTC to 2025-12-29 07:06:07 +0000 UTC (now=2025-11-29 07:06:25.308983691 +0000 UTC))\\\\\\\"\\\\nI1129 07:06:25.309145 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1129 07:06:25.309167 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1129 07:06:25.309190 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764399977\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] 
issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764399977\\\\\\\\\\\\\\\" (2025-11-29 06:06:17 +0000 UTC to 2026-11-29 06:06:17 +0000 UTC (now=2025-11-29 07:06:25.309162626 +0000 UTC))\\\\\\\"\\\\nI1129 07:06:25.309217 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1129 07:06:25.309274 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1129 07:06:25.309309 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1031981952/tls.crt::/tmp/serving-cert-1031981952/tls.key\\\\\\\"\\\\nI1129 07:06:25.309020 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1129 07:06:25.309457 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1129 07:06:25.309507 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1129 07:06:25.311335 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1277bda1e922e493f1c36b64a4584458ab41cee4b1930d2d6409a585be9e3b38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b7d71882ecd45557e02dfe61f572da923732714cd4480848340fbe72dabcd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34b7d71882ecd45557e02dfe61f572da923732714cd4480848340fbe72dabcd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:41Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:41 crc kubenswrapper[4731]: I1129 07:06:41.290137 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2302dbb7-38db-4752-a5d0-2d055da3aec3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b12c2e84a5d5a5e4b2dfdc3a2052706d4b969ccd45719e05be8f1dbec5555cd6\\\",\\\"image\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shf4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2ffc93896b04d748f6dfda932202450e20bc7b298cc0d79c0c6499ead481d8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shf4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-rscr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:41Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:41 crc kubenswrapper[4731]: I1129 07:06:41.302710 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:41Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:41 crc kubenswrapper[4731]: I1129 07:06:41.317393 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:41 crc kubenswrapper[4731]: I1129 07:06:41.317508 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:41 crc kubenswrapper[4731]: I1129 07:06:41.317527 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:41 crc kubenswrapper[4731]: I1129 07:06:41.317552 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:41 crc kubenswrapper[4731]: I1129 07:06:41.317603 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:41Z","lastTransitionTime":"2025-11-29T07:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:06:41 crc kubenswrapper[4731]: I1129 07:06:41.320903 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:41Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:41 crc kubenswrapper[4731]: I1129 07:06:41.338098 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e77534f53f3d0ba9457f1e42e081f8fa9ddeff238db875686b42dd35fca480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6830f693b439b32e14b4e825fc51d03dddc4834afc7f2fb9c403ebdc706acb7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:41Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:41 crc kubenswrapper[4731]: I1129 07:06:41.360543 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sc4p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46a65d85-a3f6-4c1f-8a87-799ccfb861c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e803b8c855802ae1b136acaaf383df0a2f5efccc46c8605813d43a3c7c39a700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://e803b8c855802ae1b136acaaf383df0a2f5efccc46c8605813d43a3c7c39a700\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53c810e72af2e060f6378cc0809584254c09ab79d506a8d1b4ff14cc9f005d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53c810e72af2e060f6378cc0809584254c09ab79d506a8d1b4ff14cc9f005d08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db06df6b97d6ad28e3d1df0b237670573e2dc84ad1e4975431d0d6fa74497617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db06df6b97d6ad28e3d1df0b237670573e2dc84ad1e4975431d0d6fa74497617\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\
\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\"
,\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7sc4p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:41Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:41 crc kubenswrapper[4731]: I1129 07:06:41.375201 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"174eea76-5d67-4e10-9a17-b4efc32676b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a567cf0e6a828fe031138f9f7874d01fe6e034cb814a3c2116c4918ddab71c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6101f1cec3dc38ec587fb109d42b09c353cf5ed6d89c9a2b799eb456b821cf58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5af4e6864674fc088ce1e153b29b66072dc3aae9b7c8c2c5dd3862a505dbac6a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86bdecb59f656fef2662f8fc78d427a4a9816fc211eedca957f20b3103f63a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:41Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:41 crc kubenswrapper[4731]: I1129 07:06:41.391853 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea9d7960c59f71d1f4141843ab22ff9910f442c6a92fd496e9923a11fd19578b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:41Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:41 crc kubenswrapper[4731]: I1129 07:06:41.407628 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n6mtz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dc0aca-1039-4a30-a83e-48bd320d0eae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0b16629d98d0b9d5ede0c57e9befc86af08f3b501ef46c1e17b38a2ba4e366a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2bbjx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n6mtz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:41Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:41 crc kubenswrapper[4731]: I1129 07:06:41.420855 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:41 crc kubenswrapper[4731]: I1129 07:06:41.420895 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:41 crc kubenswrapper[4731]: I1129 07:06:41.420930 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:41 crc kubenswrapper[4731]: I1129 07:06:41.420948 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:41 crc kubenswrapper[4731]: I1129 07:06:41.420959 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:41Z","lastTransitionTime":"2025-11-29T07:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:41 crc kubenswrapper[4731]: I1129 07:06:41.422865 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5rsbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4026306c62b322aa02e08b8ebd6f9d5d1eaa75e5026c9b1fc4c8d3c1ff77e2a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9z2qj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5rsbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:41Z 
is after 2025-08-24T17:21:41Z" Nov 29 07:06:41 crc kubenswrapper[4731]: I1129 07:06:41.524029 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:41 crc kubenswrapper[4731]: I1129 07:06:41.524088 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:41 crc kubenswrapper[4731]: I1129 07:06:41.524102 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:41 crc kubenswrapper[4731]: I1129 07:06:41.524123 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:41 crc kubenswrapper[4731]: I1129 07:06:41.524137 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:41Z","lastTransitionTime":"2025-11-29T07:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:06:41 crc kubenswrapper[4731]: I1129 07:06:41.583869 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:06:41 crc kubenswrapper[4731]: E1129 07:06:41.584072 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-29 07:06:57.584052209 +0000 UTC m=+56.474413312 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:06:41 crc kubenswrapper[4731]: I1129 07:06:41.626783 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:41 crc kubenswrapper[4731]: I1129 07:06:41.626830 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:41 crc kubenswrapper[4731]: I1129 07:06:41.626841 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:41 crc kubenswrapper[4731]: I1129 07:06:41.626865 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:41 crc kubenswrapper[4731]: I1129 07:06:41.626878 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:41Z","lastTransitionTime":"2025-11-29T07:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:41 crc kubenswrapper[4731]: I1129 07:06:41.730202 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:41 crc kubenswrapper[4731]: I1129 07:06:41.730243 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:41 crc kubenswrapper[4731]: I1129 07:06:41.730253 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:41 crc kubenswrapper[4731]: I1129 07:06:41.730274 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:41 crc kubenswrapper[4731]: I1129 07:06:41.730287 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:41Z","lastTransitionTime":"2025-11-29T07:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:06:41 crc kubenswrapper[4731]: I1129 07:06:41.806767 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:06:41 crc kubenswrapper[4731]: I1129 07:06:41.806768 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:06:41 crc kubenswrapper[4731]: E1129 07:06:41.806933 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:06:41 crc kubenswrapper[4731]: I1129 07:06:41.807029 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:06:41 crc kubenswrapper[4731]: E1129 07:06:41.807135 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:06:41 crc kubenswrapper[4731]: E1129 07:06:41.807229 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:06:41 crc kubenswrapper[4731]: I1129 07:06:41.819131 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n6mtz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dc0aca-1039-4a30-a83e-48bd320d0eae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0b16629d98d0b9d5ede0c57e9befc86af08f3b501ef46c1e17b38a2ba4e366a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2bbjx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n6mtz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:41Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:41 crc kubenswrapper[4731]: I1129 07:06:41.832869 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5rsbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4026306c62b322aa02e08b8ebd6f9d5d1eaa75e5026c9b1f
c4c8d3c1ff77e2a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9z2qj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5rsbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:41Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:41 crc kubenswrapper[4731]: I1129 07:06:41.833303 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:41 crc kubenswrapper[4731]: I1129 07:06:41.833349 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:41 crc kubenswrapper[4731]: I1129 07:06:41.833364 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:41 crc kubenswrapper[4731]: I1129 07:06:41.833411 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:41 crc kubenswrapper[4731]: I1129 07:06:41.833424 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:41Z","lastTransitionTime":"2025-11-29T07:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:41 crc kubenswrapper[4731]: I1129 07:06:41.849194 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sc4p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46a65d85-a3f6-4c1f-8a87-799ccfb861c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e803b8c855802ae1b136acaaf383df0a2f5efccc46c8605813d43a3c7c39a700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e803b8c855802ae1b136acaaf383df0a2f5efccc46c8605813d43a3c7c39a700\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53c810e72af2e060f6378cc0809584254c09ab79d506a8d1b4ff14cc9f005d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53c810e72af2e060f6378cc0809584254c09ab79d506a8d1b4ff14cc9f005d08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db06df6b97d6ad28e3d1df0b237670573e2dc84ad1e4975431d0d6fa74497617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db06df6b97d6ad28e3d1df0b237670573e2dc84ad1e4975431d0d6fa74497617\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7sc4p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:41Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:41 crc kubenswrapper[4731]: I1129 
07:06:41.863312 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"174eea76-5d67-4e10-9a17-b4efc32676b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a567cf0e6a828fe031138f9f7874d01fe6e034cb814a3c2116c4918ddab71c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6101f1cec3dc38ec587fb109d42b09c353cf5ed6d89c9a2b799eb456b821cf58\\\",\\\"image\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5af4e6864674fc088ce1e153b29b66072dc3aae9b7c8c2c5dd3862a505dbac6a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86bdecb59f656fef2662f8fc78d427a4a9816fc211eedca957f20b3103f63a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:41Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:41 crc kubenswrapper[4731]: I1129 07:06:41.877877 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea9d7960c59f71d1f4141843ab22ff9910f442c6a92fd496e9923a11fd19578b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:41Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:41 crc kubenswrapper[4731]: I1129 07:06:41.890379 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da8cd4c4826d5a431134fff8cd78b168bfededf913cc47058413e38aadd335ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:41Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:41 crc kubenswrapper[4731]: I1129 07:06:41.904496 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:41Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:41 crc kubenswrapper[4731]: I1129 07:06:41.917640 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2302dbb7-38db-4752-a5d0-2d055da3aec3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b12c2e84a5d5a5e4b2dfdc3a2052706d4b969ccd45719e05be8f1dbec5555cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shf4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2ffc93896b04d748f6dfda932202450e20bc7b2
98cc0d79c0c6499ead481d8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shf4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rscr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:41Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:41 crc kubenswrapper[4731]: I1129 07:06:41.935329 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:41 crc kubenswrapper[4731]: I1129 07:06:41.935361 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:41 crc kubenswrapper[4731]: I1129 07:06:41.935370 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:41 crc 
kubenswrapper[4731]: I1129 07:06:41.935392 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:41 crc kubenswrapper[4731]: I1129 07:06:41.935404 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:41Z","lastTransitionTime":"2025-11-29T07:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:06:41 crc kubenswrapper[4731]: I1129 07:06:41.939972 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d4585c4-ac4a-4268-b25e-47509c17cfe2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919
d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCou
nt\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-con
troller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/va
r/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b769a8a935c8eb447fe825cfb1b4c91be5abbe5992798442096dd3c00ce7c39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b769a8a935c8eb447fe825cfb1b4c91be5abbe5992798442096dd3c00ce7c39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4t5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:41Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:41 crc kubenswrapper[4731]: I1129 07:06:41.956452 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8tvx8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"719caf85-c94c-4dc2-b28f-f5c4ec29e79e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d765b2fbbea
1c5491ee3ac49d3d217921d28d094b6204c0c9ca942f5975f7b8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cfkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8tvx8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:41Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:41 crc kubenswrapper[4731]: I1129 07:06:41.974993 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5198aca-b43e-4b59-8ec1-15b9d1cd4f4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0b6a87bb787bc87ec1a2b21918e754bda6b0f197bfdae20b010e8464cb9e70d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1938ff140a7af83ef53896ed4482dd3cf0a5429675ca28291de0915b8da27e81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b19f3d5d17a1f3552b659627294f3a47d667e2da2d3ee71e44fc876614f2f5a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea4dcd3b0a9fda14855ec9e54caa59fbbd8953c958ad8d0abc6ad50d8a9143e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f488e951e0b022155c26f1b4b73363e8e5b82ab6f14e03f9812fe279ff21d36\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:06:25Z\\\"
,\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1129 07:06:25.309042 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1031981952/tls.crt::/tmp/serving-cert-1031981952/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764399966\\\\\\\\\\\\\\\" (2025-11-29 07:06:06 +0000 UTC to 2025-12-29 07:06:07 +0000 UTC (now=2025-11-29 07:06:25.308983691 +0000 UTC))\\\\\\\"\\\\nI1129 07:06:25.309145 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1129 07:06:25.309167 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1129 07:06:25.309190 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764399977\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764399977\\\\\\\\\\\\\\\" (2025-11-29 06:06:17 +0000 UTC to 2026-11-29 06:06:17 +0000 UTC (now=2025-11-29 07:06:25.309162626 +0000 UTC))\\\\\\\"\\\\nI1129 07:06:25.309217 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1129 07:06:25.309274 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1129 07:06:25.309309 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1031981952/tls.crt::/tmp/serving-cert-1031981952/tls.key\\\\\\\"\\\\nI1129 07:06:25.309020 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1129 07:06:25.309457 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1129 07:06:25.309507 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1129 07:06:25.311335 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1277bda1e922e493f1c36b64a4584458ab41cee4b1930d2d6409a585be9e3b38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b7d71882ecd45557e02dfe61f572da923732714cd4480848340fbe72dabcd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34b7d71882ecd45557e02dfe61f572da923732714cd4480848340fbe72dabcd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:41Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:41 crc kubenswrapper[4731]: I1129 07:06:41.999504 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e77534f53f3d0ba9457f1e42e081f8fa9ddeff238db875686b42dd35fca480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6830f693b439b32e14b4e825fc51d03dddc4834afc7f2fb9c403ebdc706acb7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:41Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:42 crc kubenswrapper[4731]: I1129 07:06:42.016991 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:42Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:42 crc kubenswrapper[4731]: I1129 07:06:42.034282 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:42Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:42 crc kubenswrapper[4731]: I1129 07:06:42.040398 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:42 crc kubenswrapper[4731]: I1129 07:06:42.040745 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:42 crc kubenswrapper[4731]: I1129 07:06:42.040768 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:42 crc kubenswrapper[4731]: I1129 07:06:42.040792 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:42 crc kubenswrapper[4731]: I1129 07:06:42.040809 4731 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:42Z","lastTransitionTime":"2025-11-29T07:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:06:42 crc kubenswrapper[4731]: I1129 07:06:42.090659 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:06:42 crc kubenswrapper[4731]: I1129 07:06:42.090747 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:06:42 crc kubenswrapper[4731]: E1129 07:06:42.090824 4731 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 29 07:06:42 crc kubenswrapper[4731]: E1129 07:06:42.090897 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-29 07:06:58.09087688 +0000 UTC m=+56.981237983 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 29 07:06:42 crc kubenswrapper[4731]: E1129 07:06:42.090824 4731 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 29 07:06:42 crc kubenswrapper[4731]: E1129 07:06:42.091024 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-29 07:06:58.091004244 +0000 UTC m=+56.981365347 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 29 07:06:42 crc kubenswrapper[4731]: I1129 07:06:42.129475 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:42 crc kubenswrapper[4731]: I1129 07:06:42.129523 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:42 crc kubenswrapper[4731]: I1129 07:06:42.129535 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:42 crc kubenswrapper[4731]: I1129 07:06:42.129553 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 
29 07:06:42 crc kubenswrapper[4731]: I1129 07:06:42.129584 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:42Z","lastTransitionTime":"2025-11-29T07:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:06:42 crc kubenswrapper[4731]: E1129 07:06:42.144067 4731 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:06:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:06:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:06:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:06:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5aaf0a18-6c01-4835-aaaa-2edfd1f90942\\\",\\\"systemUUID\\\":\\\"f3d115a6-d015-4b84-85ef-26fa0172b441\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:42Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:42 crc kubenswrapper[4731]: I1129 07:06:42.149271 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:42 crc kubenswrapper[4731]: I1129 07:06:42.149334 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:42 crc kubenswrapper[4731]: I1129 07:06:42.149349 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:42 crc kubenswrapper[4731]: I1129 07:06:42.149369 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:42 crc kubenswrapper[4731]: I1129 07:06:42.149386 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:42Z","lastTransitionTime":"2025-11-29T07:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:42 crc kubenswrapper[4731]: E1129 07:06:42.162504 4731 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:06:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:06:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:06:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:06:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5aaf0a18-6c01-4835-aaaa-2edfd1f90942\\\",\\\"systemUUID\\\":\\\"f3d115a6-d015-4b84-85ef-26fa0172b441\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:42Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:42 crc kubenswrapper[4731]: I1129 07:06:42.166737 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:42 crc kubenswrapper[4731]: I1129 07:06:42.166772 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:42 crc kubenswrapper[4731]: I1129 07:06:42.166785 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:42 crc kubenswrapper[4731]: I1129 07:06:42.166810 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:42 crc kubenswrapper[4731]: I1129 07:06:42.166823 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:42Z","lastTransitionTime":"2025-11-29T07:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:42 crc kubenswrapper[4731]: E1129 07:06:42.183168 4731 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:06:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:06:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:06:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:06:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5aaf0a18-6c01-4835-aaaa-2edfd1f90942\\\",\\\"systemUUID\\\":\\\"f3d115a6-d015-4b84-85ef-26fa0172b441\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:42Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:42 crc kubenswrapper[4731]: I1129 07:06:42.187612 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:42 crc kubenswrapper[4731]: I1129 07:06:42.187664 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:42 crc kubenswrapper[4731]: I1129 07:06:42.187674 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:42 crc kubenswrapper[4731]: I1129 07:06:42.187691 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:42 crc kubenswrapper[4731]: I1129 07:06:42.187702 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:42Z","lastTransitionTime":"2025-11-29T07:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:42 crc kubenswrapper[4731]: I1129 07:06:42.191466 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:06:42 crc kubenswrapper[4731]: I1129 07:06:42.191511 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:06:42 crc kubenswrapper[4731]: E1129 07:06:42.191695 4731 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 29 07:06:42 crc kubenswrapper[4731]: E1129 07:06:42.191721 4731 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 29 07:06:42 crc kubenswrapper[4731]: E1129 07:06:42.191749 4731 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:06:42 crc kubenswrapper[4731]: E1129 07:06:42.191801 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl 
podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-29 07:06:58.191781099 +0000 UTC m=+57.082142202 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:06:42 crc kubenswrapper[4731]: E1129 07:06:42.191695 4731 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 29 07:06:42 crc kubenswrapper[4731]: E1129 07:06:42.191922 4731 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 29 07:06:42 crc kubenswrapper[4731]: E1129 07:06:42.191939 4731 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:06:42 crc kubenswrapper[4731]: E1129 07:06:42.191996 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-29 07:06:58.191976235 +0000 UTC m=+57.082337338 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:06:42 crc kubenswrapper[4731]: I1129 07:06:42.195388 4731 generic.go:334] "Generic (PLEG): container finished" podID="46a65d85-a3f6-4c1f-8a87-799ccfb861c7" containerID="22e4e1564bb5cd4109016d9716311a4f93f63f7f4f694d1f021390c91495faab" exitCode=0 Nov 29 07:06:42 crc kubenswrapper[4731]: I1129 07:06:42.195461 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7sc4p" event={"ID":"46a65d85-a3f6-4c1f-8a87-799ccfb861c7","Type":"ContainerDied","Data":"22e4e1564bb5cd4109016d9716311a4f93f63f7f4f694d1f021390c91495faab"} Nov 29 07:06:42 crc kubenswrapper[4731]: E1129 07:06:42.201871 4731 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:06:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:06:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:06:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:06:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5aaf0a18-6c01-4835-aaaa-2edfd1f90942\\\",\\\"systemUUID\\\":\\\"f3d115a6-d015-4b84-85ef-26fa0172b441\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:42Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:42 crc kubenswrapper[4731]: I1129 07:06:42.202252 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" event={"ID":"7d4585c4-ac4a-4268-b25e-47509c17cfe2","Type":"ContainerStarted","Data":"9a6b60ccbba0152ec034f29ebd79fc5f35ee869bcbec1983425f40df246c463c"} Nov 29 07:06:42 crc kubenswrapper[4731]: I1129 07:06:42.205441 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:42 crc kubenswrapper[4731]: I1129 07:06:42.205487 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:42 crc kubenswrapper[4731]: I1129 07:06:42.205496 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:42 crc kubenswrapper[4731]: I1129 07:06:42.205524 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:42 crc kubenswrapper[4731]: I1129 07:06:42.205536 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:42Z","lastTransitionTime":"2025-11-29T07:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:42 crc kubenswrapper[4731]: I1129 07:06:42.209107 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"174eea76-5d67-4e10-9a17-b4efc32676b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a567cf0e6a828fe031138f9f7874d01fe6e034cb814a3c2116c4918ddab71c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6101f1cec3d
c38ec587fb109d42b09c353cf5ed6d89c9a2b799eb456b821cf58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5af4e6864674fc088ce1e153b29b66072dc3aae9b7c8c2c5dd3862a505dbac6a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86bdecb59f656fef2662f8fc78d427a4a9816fc211eedca957f20b3103f63a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:42Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:42 crc kubenswrapper[4731]: E1129 07:06:42.219788 4731 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:06:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:06:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:06:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:06:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5aaf0a18-6c01-4835-aaaa-2edfd1f90942\\\",\\\"systemUUID\\\":\\\"f3d115a6-d015-4b84-85ef-26fa0172b441\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:42Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:42 crc kubenswrapper[4731]: E1129 07:06:42.219978 4731 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 29 07:06:42 crc kubenswrapper[4731]: I1129 07:06:42.222235 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:42 crc kubenswrapper[4731]: I1129 07:06:42.222280 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:42 crc kubenswrapper[4731]: I1129 07:06:42.222299 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:42 crc kubenswrapper[4731]: I1129 07:06:42.222324 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:42 crc kubenswrapper[4731]: I1129 07:06:42.222340 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:42Z","lastTransitionTime":"2025-11-29T07:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:42 crc kubenswrapper[4731]: I1129 07:06:42.224297 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea9d7960c59f71d1f4141843ab22ff9910f442c6a92fd496e9923a11fd19578b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:42Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:42 crc kubenswrapper[4731]: I1129 07:06:42.239038 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n6mtz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dc0aca-1039-4a30-a83e-48bd320d0eae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0b16629d98d0b9d5ede0c57e9befc86af08f3b501ef46c1e17b38a2ba4e366a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2bbjx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n6mtz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:42Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:42 crc kubenswrapper[4731]: I1129 07:06:42.257553 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5rsbt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4026306c62b322aa02e08b8ebd6f9d5d1eaa75e5026c9b1fc4c8d3c1ff77e2a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9z2qj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5rsbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:42Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:42 crc kubenswrapper[4731]: I1129 07:06:42.275456 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sc4p" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46a65d85-a3f6-4c1f-8a87-799ccfb861c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e803b8c855802ae1b136acaaf383df0a2f5efccc46c8605813d43a3c7c39a700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e803b8c855802ae1b136acaaf383df0a2f5efccc46c8605813d43a3c7c39a700\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53c810e72af2e060f6378cc0809584254c09ab79d506a8d1b4ff14cc9f005d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53c810e72af2e060f6378cc0809584254c09ab79d506a8d1b4ff14cc9f005d08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db06df6b97d6ad28e3d1df0b237670573e2dc84ad1e4975431d0d6fa74497617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db06df6b97d6ad28e3d1df0b237670573e2dc84ad1e4975431d0d6fa74497617\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e4e1564bb5cd4109016d9716311a4f93f63f7f4f694d1f021390c91495faab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22e4e1564bb5cd4109016d9716311a4f93f63f7f4f694d1f021390c91495faab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-7sc4p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:42Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:42 crc kubenswrapper[4731]: I1129 07:06:42.292822 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:42Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:42 crc kubenswrapper[4731]: I1129 07:06:42.306507 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da8cd4c4826d5a431134fff8cd78b168bfededf913cc47058413e38aadd335ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-29T07:06:42Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:42 crc kubenswrapper[4731]: I1129 07:06:42.321176 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5198aca-b43e-4b59-8ec1-15b9d1cd4f4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0b6a87bb787bc87ec1a2b21918e754bda6b0f197bfdae20b010e8464cb9e70d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\"
:\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1938ff140a7af83ef53896ed4482dd3cf0a5429675ca28291de0915b8da27e81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b19f3d5d17a1f3552b659627294f3a47d667e2da2d3ee71e44fc876614f2f5a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea4dcd3b0a9fda14855ec9e54caa59fbbd8953c958ad8d0abc6ad50d8a9143e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiser
ver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f488e951e0b022155c26f1b4b73363e8e5b82ab6f14e03f9812fe279ff21d36\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1129 07:06:25.309042 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1031981952/tls.crt::/tmp/serving-cert-1031981952/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764399966\\\\\\\\\\\\\\\" (2025-11-29 07:06:06 +0000 UTC to 2025-12-29 07:06:07 +0000 UTC (now=2025-11-29 07:06:25.308983691 +0000 UTC))\\\\\\\"\\\\nI1129 07:06:25.309145 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1129 07:06:25.309167 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1129 07:06:25.309190 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764399977\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764399977\\\\\\\\\\\\\\\" (2025-11-29 06:06:17 +0000 UTC to 2026-11-29 06:06:17 +0000 UTC (now=2025-11-29 07:06:25.309162626 +0000 UTC))\\\\\\\"\\\\nI1129 07:06:25.309217 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1129 07:06:25.309274 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1129 07:06:25.309309 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1031981952/tls.crt::/tmp/serving-cert-1031981952/tls.key\\\\\\\"\\\\nI1129 07:06:25.309020 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1129 07:06:25.309457 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1129 07:06:25.309507 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1129 07:06:25.311335 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1277bda1e922e493f1c36b64a4584458ab41cee4b1930d2d6409a585be9e3b38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b7d71882ecd45557e02dfe61f5
72da923732714cd4480848340fbe72dabcd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34b7d71882ecd45557e02dfe61f572da923732714cd4480848340fbe72dabcd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:42Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:42 crc kubenswrapper[4731]: I1129 07:06:42.325490 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:42 crc kubenswrapper[4731]: I1129 07:06:42.325539 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:42 crc kubenswrapper[4731]: I1129 07:06:42.325559 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:42 crc kubenswrapper[4731]: I1129 07:06:42.325593 4731 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:42 crc kubenswrapper[4731]: I1129 07:06:42.325605 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:42Z","lastTransitionTime":"2025-11-29T07:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:06:42 crc kubenswrapper[4731]: I1129 07:06:42.339681 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2302dbb7-38db-4752-a5d0-2d055da3aec3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b12c2e84a5d5a5e4b2dfdc3a2052706d4b969ccd45719e05be8f1dbec5555cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",
\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shf4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2ffc93896b04d748f6dfda932202450e20bc7b298cc0d79c0c6499ead481d8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shf4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rscr8\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:42Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:42 crc kubenswrapper[4731]: I1129 07:06:42.370183 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d4585c4-ac4a-4268-b25e-47509c17cfe2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b769a8a935c8eb447fe825cfb1b4c91be5abbe5992798442096dd3c00ce7c39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b769a8a935c8eb447fe825cfb1b4c91be5abbe5992798442096dd3c00ce7c39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4t5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:42Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:42 crc kubenswrapper[4731]: I1129 07:06:42.385696 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8tvx8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"719caf85-c94c-4dc2-b28f-f5c4ec29e79e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d765b2fbbea1c5491ee3ac49d3d217921d28d094b6204c0c9ca942f5975f7b8e\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cfkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8tvx8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:42Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:42 crc kubenswrapper[4731]: I1129 07:06:42.401280 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:42Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:42 crc kubenswrapper[4731]: I1129 07:06:42.415972 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:42Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:42 crc kubenswrapper[4731]: I1129 07:06:42.428853 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:42 crc kubenswrapper[4731]: I1129 07:06:42.428919 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:42 crc kubenswrapper[4731]: I1129 07:06:42.428935 4731 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:42 crc kubenswrapper[4731]: I1129 07:06:42.429261 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:42 crc kubenswrapper[4731]: I1129 07:06:42.429311 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:42Z","lastTransitionTime":"2025-11-29T07:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:06:42 crc kubenswrapper[4731]: I1129 07:06:42.431991 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e77534f53f3d0ba9457f1e42e081f8fa9ddeff238db875686b42dd35fca480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6830f693b439b32e14b4e825fc51d03dddc4834afc7f2fb9c403ebdc706acb7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2025-11-29T07:06:42Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:42 crc kubenswrapper[4731]: I1129 07:06:42.532955 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:42 crc kubenswrapper[4731]: I1129 07:06:42.532993 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:42 crc kubenswrapper[4731]: I1129 07:06:42.533002 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:42 crc kubenswrapper[4731]: I1129 07:06:42.533015 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:42 crc kubenswrapper[4731]: I1129 07:06:42.533026 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:42Z","lastTransitionTime":"2025-11-29T07:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:42 crc kubenswrapper[4731]: I1129 07:06:42.636160 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:42 crc kubenswrapper[4731]: I1129 07:06:42.636232 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:42 crc kubenswrapper[4731]: I1129 07:06:42.636244 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:42 crc kubenswrapper[4731]: I1129 07:06:42.636261 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:42 crc kubenswrapper[4731]: I1129 07:06:42.636274 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:42Z","lastTransitionTime":"2025-11-29T07:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:42 crc kubenswrapper[4731]: I1129 07:06:42.740032 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:42 crc kubenswrapper[4731]: I1129 07:06:42.740387 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:42 crc kubenswrapper[4731]: I1129 07:06:42.740485 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:42 crc kubenswrapper[4731]: I1129 07:06:42.740573 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:42 crc kubenswrapper[4731]: I1129 07:06:42.740661 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:42Z","lastTransitionTime":"2025-11-29T07:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:42 crc kubenswrapper[4731]: I1129 07:06:42.843251 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:42 crc kubenswrapper[4731]: I1129 07:06:42.843310 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:42 crc kubenswrapper[4731]: I1129 07:06:42.843325 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:42 crc kubenswrapper[4731]: I1129 07:06:42.843346 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:42 crc kubenswrapper[4731]: I1129 07:06:42.843362 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:42Z","lastTransitionTime":"2025-11-29T07:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:42 crc kubenswrapper[4731]: I1129 07:06:42.947226 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:42 crc kubenswrapper[4731]: I1129 07:06:42.947287 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:42 crc kubenswrapper[4731]: I1129 07:06:42.947299 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:42 crc kubenswrapper[4731]: I1129 07:06:42.947316 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:42 crc kubenswrapper[4731]: I1129 07:06:42.947327 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:42Z","lastTransitionTime":"2025-11-29T07:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:43 crc kubenswrapper[4731]: I1129 07:06:43.049757 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:43 crc kubenswrapper[4731]: I1129 07:06:43.049806 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:43 crc kubenswrapper[4731]: I1129 07:06:43.049817 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:43 crc kubenswrapper[4731]: I1129 07:06:43.049835 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:43 crc kubenswrapper[4731]: I1129 07:06:43.049848 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:43Z","lastTransitionTime":"2025-11-29T07:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:43 crc kubenswrapper[4731]: I1129 07:06:43.152972 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:43 crc kubenswrapper[4731]: I1129 07:06:43.153510 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:43 crc kubenswrapper[4731]: I1129 07:06:43.153526 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:43 crc kubenswrapper[4731]: I1129 07:06:43.153556 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:43 crc kubenswrapper[4731]: I1129 07:06:43.153593 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:43Z","lastTransitionTime":"2025-11-29T07:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:43 crc kubenswrapper[4731]: I1129 07:06:43.209899 4731 generic.go:334] "Generic (PLEG): container finished" podID="46a65d85-a3f6-4c1f-8a87-799ccfb861c7" containerID="76097df468fcbf7cbc2a4d0a4a847310bcc371812fd7601716bb946d2ba43b1f" exitCode=0 Nov 29 07:06:43 crc kubenswrapper[4731]: I1129 07:06:43.209967 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7sc4p" event={"ID":"46a65d85-a3f6-4c1f-8a87-799ccfb861c7","Type":"ContainerDied","Data":"76097df468fcbf7cbc2a4d0a4a847310bcc371812fd7601716bb946d2ba43b1f"} Nov 29 07:06:43 crc kubenswrapper[4731]: I1129 07:06:43.226879 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:43Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:43 crc kubenswrapper[4731]: I1129 07:06:43.247071 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da8cd4c4826d5a431134fff8cd78b168bfededf913cc47058413e38aadd335ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-29T07:06:43Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:43 crc kubenswrapper[4731]: I1129 07:06:43.256303 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:43 crc kubenswrapper[4731]: I1129 07:06:43.256358 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:43 crc kubenswrapper[4731]: I1129 07:06:43.256374 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:43 crc kubenswrapper[4731]: I1129 07:06:43.256397 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:43 crc kubenswrapper[4731]: I1129 07:06:43.256412 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:43Z","lastTransitionTime":"2025-11-29T07:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:43 crc kubenswrapper[4731]: I1129 07:06:43.265087 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5198aca-b43e-4b59-8ec1-15b9d1cd4f4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0b6a87bb787bc87ec1a2b21918e754bda6b0f197bfdae20b010e8464cb9e70d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1938ff140a7af83ef53896ed4482dd3cf0a5429675ca28291de0915b8da27e81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b19f3d5d17a1f3552b659627294f3a47d667e2da2d3ee71e44fc876614f2f5a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea4dcd3b0a9fda14855ec9e54caa59fbbd8953c958ad8d0abc6ad50d8a9143e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f488e951e0b022155c26f1b4b73363e8e5b82ab6f14e03f9812fe279ff21d36\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1129 07:06:25.309042 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1031981952/tls.crt::/tmp/serving-cert-1031981952/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764399966\\\\\\\\\\\\\\\" (2025-11-29 07:06:06 +0000 UTC to 2025-12-29 07:06:07 +0000 UTC (now=2025-11-29 07:06:25.308983691 +0000 UTC))\\\\\\\"\\\\nI1129 07:06:25.309145 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1129 07:06:25.309167 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1129 07:06:25.309190 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764399977\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764399977\\\\\\\\\\\\\\\" (2025-11-29 06:06:17 +0000 UTC to 2026-11-29 06:06:17 +0000 UTC (now=2025-11-29 07:06:25.309162626 +0000 UTC))\\\\\\\"\\\\nI1129 07:06:25.309217 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1129 07:06:25.309274 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1129 07:06:25.309309 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1031981952/tls.crt::/tmp/serving-cert-1031981952/tls.key\\\\\\\"\\\\nI1129 07:06:25.309020 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1129 07:06:25.309457 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1129 07:06:25.309507 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1129 07:06:25.311335 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1277bda1e922e493f1c36b64a4584458ab41cee4b1930d2d6409a585be9e3b38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b7d71882ecd45557e02dfe61f5
72da923732714cd4480848340fbe72dabcd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34b7d71882ecd45557e02dfe61f572da923732714cd4480848340fbe72dabcd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:43Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:43 crc kubenswrapper[4731]: I1129 07:06:43.281718 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2302dbb7-38db-4752-a5d0-2d055da3aec3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b12c2e84a5d5a5e4b2dfdc3a2052706d4b969ccd45719e05be8f1dbec5555cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shf4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2ffc93896b04d748f6dfda932202450e20bc7b2
98cc0d79c0c6499ead481d8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shf4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rscr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:43Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:43 crc kubenswrapper[4731]: I1129 07:06:43.302717 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d4585c4-ac4a-4268-b25e-47509c17cfe2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b769a8a935c8eb447fe825cfb1b4c91be5abbe5992798442096dd3c00ce7c39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b769a8a935c8eb447fe825cfb1b4c91be5abbe5992798442096dd3c00ce7c39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4t5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:43Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:43 crc kubenswrapper[4731]: I1129 07:06:43.315942 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8tvx8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"719caf85-c94c-4dc2-b28f-f5c4ec29e79e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d765b2fbbea1c5491ee3ac49d3d217921d28d094b6204c0c9ca942f5975f7b8e\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cfkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8tvx8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:43Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:43 crc kubenswrapper[4731]: I1129 07:06:43.330015 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:43Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:43 crc kubenswrapper[4731]: I1129 07:06:43.346672 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:43Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:43 crc kubenswrapper[4731]: I1129 07:06:43.360830 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:43 crc kubenswrapper[4731]: I1129 07:06:43.360870 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:43 crc kubenswrapper[4731]: I1129 07:06:43.360882 4731 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:43 crc kubenswrapper[4731]: I1129 07:06:43.360897 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:43 crc kubenswrapper[4731]: I1129 07:06:43.360906 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:43Z","lastTransitionTime":"2025-11-29T07:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:06:43 crc kubenswrapper[4731]: I1129 07:06:43.361441 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e77534f53f3d0ba9457f1e42e081f8fa9ddeff238db875686b42dd35fca480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6830f693b439b32e14b4e825fc51d03dddc4834afc7f2fb9c403ebdc706acb7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2025-11-29T07:06:43Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:43 crc kubenswrapper[4731]: I1129 07:06:43.376876 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"174eea76-5d67-4e10-9a17-b4efc32676b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a567cf0e6a828fe031138f9f7874d01fe6e034cb814a3c2116c4918ddab71c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID
\\\":\\\"cri-o://6101f1cec3dc38ec587fb109d42b09c353cf5ed6d89c9a2b799eb456b821cf58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5af4e6864674fc088ce1e153b29b66072dc3aae9b7c8c2c5dd3862a505dbac6a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86bdecb59f656fef2662f8fc78d427a4a9816fc211eedca957f20b3103f63a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller
-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:43Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:43 crc kubenswrapper[4731]: I1129 07:06:43.394702 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea9d7960c59f71d1f4141843ab22ff9910f442c6a92fd496e9923a11fd19578b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:43Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:43 crc kubenswrapper[4731]: I1129 07:06:43.411986 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n6mtz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dc0aca-1039-4a30-a83e-48bd320d0eae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0b16629d98d0b9d5ede0c57e9befc86af08f3b501ef46c1e17b38a2ba4e366a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2bbjx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n6mtz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:43Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:43 crc kubenswrapper[4731]: I1129 07:06:43.428861 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5rsbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4026306c62b322aa02e08b8ebd6f9d5d1eaa75e50
26c9b1fc4c8d3c1ff77e2a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubern
etes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9z2qj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5rsbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:43Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:43 crc kubenswrapper[4731]: I1129 07:06:43.446827 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sc4p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46a65d85-a3f6-4c1f-8a87-799ccfb861c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e803b8c855802ae1b136acaaf383df0a2f5efccc46c8605813d43a3c7c39a700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e803b8c855802ae1b136acaaf383df0a2f5efccc46c8605813d43a3c7c39a700\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53c810e72af2e060f6378cc0809584254c09ab79d506a8d1b4ff14cc9f005d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53c810e72af2e060f6378cc0809584254c09ab79d506a8d1b4ff14cc9f005d08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db06df6b97d6ad28e3d1df0b237670573e2dc
84ad1e4975431d0d6fa74497617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db06df6b97d6ad28e3d1df0b237670573e2dc84ad1e4975431d0d6fa74497617\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e4e1564bb5cd4109016d9716311a4f93f63f7f4f694d1f021390c91495faab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22e4e1564bb5cd4109016d9716311a4f93f63f7f4f694d1f021390c91495faab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-1
1-29T07:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76097df468fcbf7cbc2a4d0a4a847310bcc371812fd7601716bb946d2ba43b1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76097df468fcbf7cbc2a4d0a4a847310bcc371812fd7601716bb946d2ba43b1f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7sc4p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:43Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:43 crc kubenswrapper[4731]: I1129 07:06:43.464224 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:43 crc kubenswrapper[4731]: I1129 07:06:43.464677 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:43 crc kubenswrapper[4731]: I1129 07:06:43.464912 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:43 crc kubenswrapper[4731]: I1129 07:06:43.465024 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:43 crc kubenswrapper[4731]: I1129 07:06:43.465116 4731 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:43Z","lastTransitionTime":"2025-11-29T07:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:06:43 crc kubenswrapper[4731]: I1129 07:06:43.568329 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:43 crc kubenswrapper[4731]: I1129 07:06:43.568374 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:43 crc kubenswrapper[4731]: I1129 07:06:43.568386 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:43 crc kubenswrapper[4731]: I1129 07:06:43.568404 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:43 crc kubenswrapper[4731]: I1129 07:06:43.568417 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:43Z","lastTransitionTime":"2025-11-29T07:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:43 crc kubenswrapper[4731]: I1129 07:06:43.672267 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:43 crc kubenswrapper[4731]: I1129 07:06:43.672335 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:43 crc kubenswrapper[4731]: I1129 07:06:43.672353 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:43 crc kubenswrapper[4731]: I1129 07:06:43.672375 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:43 crc kubenswrapper[4731]: I1129 07:06:43.672394 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:43Z","lastTransitionTime":"2025-11-29T07:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:43 crc kubenswrapper[4731]: I1129 07:06:43.776372 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:43 crc kubenswrapper[4731]: I1129 07:06:43.776952 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:43 crc kubenswrapper[4731]: I1129 07:06:43.776972 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:43 crc kubenswrapper[4731]: I1129 07:06:43.776994 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:43 crc kubenswrapper[4731]: I1129 07:06:43.777006 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:43Z","lastTransitionTime":"2025-11-29T07:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:06:43 crc kubenswrapper[4731]: I1129 07:06:43.806329 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:06:43 crc kubenswrapper[4731]: I1129 07:06:43.806382 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:06:43 crc kubenswrapper[4731]: E1129 07:06:43.806518 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:06:43 crc kubenswrapper[4731]: I1129 07:06:43.806546 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:06:43 crc kubenswrapper[4731]: E1129 07:06:43.806707 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:06:43 crc kubenswrapper[4731]: E1129 07:06:43.806928 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:06:43 crc kubenswrapper[4731]: I1129 07:06:43.880263 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:43 crc kubenswrapper[4731]: I1129 07:06:43.880293 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:43 crc kubenswrapper[4731]: I1129 07:06:43.880308 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:43 crc kubenswrapper[4731]: I1129 07:06:43.880330 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:43 crc kubenswrapper[4731]: I1129 07:06:43.880342 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:43Z","lastTransitionTime":"2025-11-29T07:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:43 crc kubenswrapper[4731]: I1129 07:06:43.982781 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:43 crc kubenswrapper[4731]: I1129 07:06:43.982841 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:43 crc kubenswrapper[4731]: I1129 07:06:43.982851 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:43 crc kubenswrapper[4731]: I1129 07:06:43.982867 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:43 crc kubenswrapper[4731]: I1129 07:06:43.982900 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:43Z","lastTransitionTime":"2025-11-29T07:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:44 crc kubenswrapper[4731]: I1129 07:06:44.086258 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:44 crc kubenswrapper[4731]: I1129 07:06:44.086305 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:44 crc kubenswrapper[4731]: I1129 07:06:44.086316 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:44 crc kubenswrapper[4731]: I1129 07:06:44.086333 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:44 crc kubenswrapper[4731]: I1129 07:06:44.086345 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:44Z","lastTransitionTime":"2025-11-29T07:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:44 crc kubenswrapper[4731]: I1129 07:06:44.190394 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:44 crc kubenswrapper[4731]: I1129 07:06:44.190471 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:44 crc kubenswrapper[4731]: I1129 07:06:44.190486 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:44 crc kubenswrapper[4731]: I1129 07:06:44.190505 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:44 crc kubenswrapper[4731]: I1129 07:06:44.190531 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:44Z","lastTransitionTime":"2025-11-29T07:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:44 crc kubenswrapper[4731]: I1129 07:06:44.218450 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" event={"ID":"7d4585c4-ac4a-4268-b25e-47509c17cfe2","Type":"ContainerStarted","Data":"55decf99f11003a6a7c796114439c20b042075e1b233f967fa4e758611e04f66"} Nov 29 07:06:44 crc kubenswrapper[4731]: I1129 07:06:44.218838 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" Nov 29 07:06:44 crc kubenswrapper[4731]: I1129 07:06:44.218892 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" Nov 29 07:06:44 crc kubenswrapper[4731]: I1129 07:06:44.218903 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" Nov 29 07:06:44 crc kubenswrapper[4731]: I1129 07:06:44.225292 4731 generic.go:334] "Generic (PLEG): container finished" podID="46a65d85-a3f6-4c1f-8a87-799ccfb861c7" containerID="a2de6a57f63f8b266688eb7d7efef0c3f0fe74a9cf9911709b8c2f6d6cd2f587" exitCode=0 Nov 29 07:06:44 crc kubenswrapper[4731]: I1129 07:06:44.225320 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7sc4p" event={"ID":"46a65d85-a3f6-4c1f-8a87-799ccfb861c7","Type":"ContainerDied","Data":"a2de6a57f63f8b266688eb7d7efef0c3f0fe74a9cf9911709b8c2f6d6cd2f587"} Nov 29 07:06:44 crc kubenswrapper[4731]: I1129 07:06:44.235997 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sc4p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"46a65d85-a3f6-4c1f-8a87-799ccfb861c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e803b8c855802ae1b136acaaf383df0a2f5efccc46c8605813d43a3c7c39a700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e803b8c855802ae1b136acaaf383df0a2f5efccc46c8605813d43a3c7c39a700\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53c810e72af2e060f6378cc0809584254c09ab79d506a8d1b4ff14cc9f005d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53c810e72af2e060f6378cc0809584254c09ab79d506a8d1b4ff14cc9f005d08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db06df6b97d6ad28e3d1df0b237670573e2dc84ad1e4975431d0d6fa74497617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db06df6b97d6ad28e3d1df0b237670573e2dc84ad1e4975431d0d6fa74497617\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e4e1564bb5cd4109016d9716311a4f93f63f7f4f694d1f021390c91495faab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22e4e1564bb5cd4109016d9716311a4f93f63f7f4f694d1f021390c91495faab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76097df468fcbf7cbc2a4d0a4a847310bcc371812fd7601716bb946d2ba43b1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76097df468fcbf7cbc2a4d0a4a847310bcc371812fd7601716bb946d2ba43b1f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7sc4p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:44Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:44 crc kubenswrapper[4731]: I1129 07:06:44.254254 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"174eea76-5d67-4e10-9a17-b4efc32676b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a567cf0e6a828fe031138f9f7874d01fe6e034cb814a3c2116c4918ddab71c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6101f1cec3dc38ec587fb109d42b09c353cf5ed6d89c9a2b799eb456b821cf58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5af4e6864674fc088ce1e153b29b66072dc3aae9b7c8c2c5dd3862a505dbac6a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86bdecb59f656fef2662f8fc78d427a4a9816fc211eedca957f20b3103f63a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:44Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:44 crc kubenswrapper[4731]: I1129 07:06:44.272352 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea9d7960c59f71d1f4141843ab22ff9910f442c6a92fd496e9923a11fd19578b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:44Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:44 crc kubenswrapper[4731]: I1129 07:06:44.288143 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n6mtz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dc0aca-1039-4a30-a83e-48bd320d0eae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0b16629d98d0b9d5ede0c57e9befc86af08f3b501ef46c1e17b38a2ba4e366a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2bbjx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n6mtz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:44Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:44 crc kubenswrapper[4731]: I1129 07:06:44.294864 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:44 crc kubenswrapper[4731]: I1129 07:06:44.294936 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:44 crc kubenswrapper[4731]: I1129 07:06:44.294951 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:44 crc kubenswrapper[4731]: I1129 07:06:44.294974 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:44 crc kubenswrapper[4731]: I1129 07:06:44.294991 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:44Z","lastTransitionTime":"2025-11-29T07:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:44 crc kubenswrapper[4731]: I1129 07:06:44.299873 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" Nov 29 07:06:44 crc kubenswrapper[4731]: I1129 07:06:44.300017 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" Nov 29 07:06:44 crc kubenswrapper[4731]: I1129 07:06:44.309036 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5rsbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4026306c62b322aa02e08b8ebd6f9d5d1eaa75e5026c9b1fc4c8d3c1ff77e2a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus
\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9z2qj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5rsbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:44Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:44 crc kubenswrapper[4731]: I1129 07:06:44.327107 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:44Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:44 crc kubenswrapper[4731]: I1129 07:06:44.342830 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da8cd4c4826d5a431134fff8cd78b168bfededf913cc47058413e38aadd335ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-29T07:06:44Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:44 crc kubenswrapper[4731]: I1129 07:06:44.368016 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d4585c4-ac4a-4268-b25e-47509c17cfe2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c34e10091af35202da4f97804418c6232aaccd75303ec402ed3f52de19eba30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77f2885a107dcb4bca4dd71970f28cd5c82abd26240a9b47a25bab79d7a6cfca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a3bcebacca0fb7288976faa2ee468f1496339cc6deba0ce36b29d0537d493f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e2099f7c7618e48c7ca7d3a76e9e63e4af75068a72efb7b27b3565c270787f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ec6b9ad02e848a0669ac486da3f2ccc8ca363136fcd40618c45f16e64388bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd71ed13728cbf7ab7ade3184cf8b579a598046c6cccdd7796f757aa7fc1c0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55decf99f11003a6a7c796114439c20b042075e1b233f967fa4e758611e04f66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a6b60ccbba0152ec034f29ebd79fc5f35ee869bcbec1983425f40df246c463c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b769a8a935c8eb447fe825cfb1b4c91be5abbe5992798442096dd3c00ce7c39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b769a8a935c8eb447fe825cfb1b4c91be5abbe5992798442096dd3c00ce7c39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Ru
nning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4t5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:44Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:44 crc kubenswrapper[4731]: I1129 07:06:44.389605 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8tvx8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"719caf85-c94c-4dc2-b28f-f5c4ec29e79e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d765b2fbbea1c5491ee3ac49d3d217921d28d094b6204c0c9ca942f5975f7b8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cfkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8tvx8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:44Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:44 crc kubenswrapper[4731]: I1129 07:06:44.399027 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:44 crc kubenswrapper[4731]: I1129 07:06:44.399737 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:44 crc kubenswrapper[4731]: I1129 07:06:44.399809 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:44 crc kubenswrapper[4731]: I1129 07:06:44.399834 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:44 crc kubenswrapper[4731]: I1129 
07:06:44.399849 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:44Z","lastTransitionTime":"2025-11-29T07:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:06:44 crc kubenswrapper[4731]: I1129 07:06:44.411197 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5198aca-b43e-4b59-8ec1-15b9d1cd4f4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0b6a87bb787bc87ec1a2b21918e754bda6b0f197bfdae20b010e8464cb9e70d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1938ff140a7af83ef53896ed4482dd3cf0a5429675ca28291de0915b8da27e81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b19f3d5d17a1f3552b659627294f3a47d667e2da2d3ee71e44fc876614f2f5a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs
\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea4dcd3b0a9fda14855ec9e54caa59fbbd8953c958ad8d0abc6ad50d8a9143e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f488e951e0b022155c26f1b4b73363e8e5b82ab6f14e03f9812fe279ff21d36\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1129 07:06:25.309042 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1031981952/tls.crt::/tmp/serving-cert-1031981952/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764399966\\\\\\\\\\\\\\\" (2025-11-29 07:06:06 +0000 UTC to 2025-12-29 07:06:07 +0000 UTC (now=2025-11-29 07:06:25.308983691 +0000 UTC))\\\\\\\"\\\\nI1129 07:06:25.309145 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1129 07:06:25.309167 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1129 07:06:25.309190 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764399977\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764399977\\\\\\\\\\\\\\\" (2025-11-29 06:06:17 +0000 UTC to 2026-11-29 
06:06:17 +0000 UTC (now=2025-11-29 07:06:25.309162626 +0000 UTC))\\\\\\\"\\\\nI1129 07:06:25.309217 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1129 07:06:25.309274 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1129 07:06:25.309309 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1031981952/tls.crt::/tmp/serving-cert-1031981952/tls.key\\\\\\\"\\\\nI1129 07:06:25.309020 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1129 07:06:25.309457 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1129 07:06:25.309507 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1129 07:06:25.311335 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1277bda1e922e493f1c36b64a4584458ab41cee4b1930d2d6409a585be9e3b38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b7d71882ecd45557e02dfe61f572da923732714cd4480848340fbe72dabcd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34b7d71882ecd45557e02dfe61f572da923732714cd4480848340fbe72dabcd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:44Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:44 crc kubenswrapper[4731]: I1129 07:06:44.428989 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2302dbb7-38db-4752-a5d0-2d055da3aec3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b12c2e84a5d5a5e4b2dfdc3a2052706d4b969ccd45719e05be8f1dbec5555cd6\\\",\\\"image\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shf4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2ffc93896b04d748f6dfda932202450e20bc7b298cc0d79c0c6499ead481d8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shf4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-rscr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:44Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:44 crc kubenswrapper[4731]: I1129 07:06:44.445442 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:44Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:44 crc kubenswrapper[4731]: I1129 07:06:44.464817 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:44Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:44 crc kubenswrapper[4731]: I1129 07:06:44.478625 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e77534f53f3d0ba9457f1e42e081f8fa9ddeff238db875686b42dd35fca480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6830f693b439b32e14b4e825fc51d03dddc4834afc7f2fb9c403ebdc706acb7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:44Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:44 crc kubenswrapper[4731]: I1129 07:06:44.494059 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5rsbt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4026306c62b322aa02e08b8ebd6f9d5d1eaa75e5026c9b1fc4c8d3c1ff77e2a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9z2qj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5rsbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:44Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:44 crc kubenswrapper[4731]: I1129 07:06:44.502838 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:44 crc 
kubenswrapper[4731]: I1129 07:06:44.502916 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:44 crc kubenswrapper[4731]: I1129 07:06:44.502929 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:44 crc kubenswrapper[4731]: I1129 07:06:44.502949 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:44 crc kubenswrapper[4731]: I1129 07:06:44.502962 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:44Z","lastTransitionTime":"2025-11-29T07:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:06:44 crc kubenswrapper[4731]: I1129 07:06:44.515784 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sc4p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46a65d85-a3f6-4c1f-8a87-799ccfb861c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e803b8c855802ae1b136acaaf383df0a2f5efccc46c8605813d43a3c7c39a700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e803b8c855802ae1b136acaaf383df0a2f5efccc46c8605813d43a3c7c39a700\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53c810e72af2e060f6378cc0809584254c09ab79d506a8d1b4ff14cc9f005d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53c810e72af2e060f6378cc0809584254c09ab79d506a8d1b4ff14cc9f005d08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db06df6b97d6ad28e3d1df0b237670573e2dc
84ad1e4975431d0d6fa74497617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db06df6b97d6ad28e3d1df0b237670573e2dc84ad1e4975431d0d6fa74497617\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e4e1564bb5cd4109016d9716311a4f93f63f7f4f694d1f021390c91495faab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22e4e1564bb5cd4109016d9716311a4f93f63f7f4f694d1f021390c91495faab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-1
1-29T07:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76097df468fcbf7cbc2a4d0a4a847310bcc371812fd7601716bb946d2ba43b1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76097df468fcbf7cbc2a4d0a4a847310bcc371812fd7601716bb946d2ba43b1f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2de6a57f63f8b266688eb7d7efef0c3f0fe74a9cf9911709b8c2f6d6cd2f587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2de6a57f63f8b266688eb7d7efef0c3f0fe74a9cf9911709b8c2f6d6cd2f587\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7sc4p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:44Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:44 crc kubenswrapper[4731]: I1129 07:06:44.534674 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"174eea76-5d67-4e10-9a17-b4efc32676b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a567cf0e6a828fe031138f9f7874d01fe6e034cb814a3c2116c4918ddab71c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6101f1cec3dc38ec587fb109d42b09c353cf5ed6d89c9a2b799eb456b821cf58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5af4e6864674fc088ce1e153b29b66072dc3aae9b7c8c2c5dd3862a505dbac6a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86bdecb59f656fef2662f8fc78d427a4a9816fc211eedca957f20b3103f63a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:44Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:44 crc kubenswrapper[4731]: I1129 07:06:44.551642 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea9d7960c59f71d1f4141843ab22ff9910f442c6a92fd496e9923a11fd19578b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:44Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:44 crc kubenswrapper[4731]: I1129 07:06:44.563427 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n6mtz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dc0aca-1039-4a30-a83e-48bd320d0eae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0b16629d98d0b9d5ede0c57e9befc86af08f3b501ef46c1e17b38a2ba4e366a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2bbjx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n6mtz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:44Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:44 crc kubenswrapper[4731]: I1129 07:06:44.578374 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:44Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:44 crc kubenswrapper[4731]: I1129 07:06:44.595148 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da8cd4c4826d5a431134fff8cd78b168bfededf913cc47058413e38aadd335ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-29T07:06:44Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:44 crc kubenswrapper[4731]: I1129 07:06:44.606250 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:44 crc kubenswrapper[4731]: I1129 07:06:44.606300 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:44 crc kubenswrapper[4731]: I1129 07:06:44.606312 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:44 crc kubenswrapper[4731]: I1129 07:06:44.606329 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:44 crc kubenswrapper[4731]: I1129 07:06:44.606340 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:44Z","lastTransitionTime":"2025-11-29T07:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:44 crc kubenswrapper[4731]: I1129 07:06:44.610654 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2302dbb7-38db-4752-a5d0-2d055da3aec3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b12c2e84a5d5a5e4b2dfdc3a2052706d4b969ccd45719e05be8f1dbec5555cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shf4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2ffc93896b04d748f6dfda932202450e20bc7b298cc0d79c0c6499ead481d8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shf4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rscr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:44Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:44 crc kubenswrapper[4731]: I1129 07:06:44.629767 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d4585c4-ac4a-4268-b25e-47509c17cfe2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c34e10091af35202da4f97804418c6232aaccd75303ec402ed3f52de19eba30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77f2885a107dcb4bca4dd71970f28cd5c82abd26240a9b47a25bab79d7a6cfca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a3bcebacca0fb7288976faa2ee468f1496339cc6deba0ce36b29d0537d493f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e2099f7c7618e48c7ca7d3a76e9e63e4af75068a72efb7b27b3565c270787f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ec6b9ad02e848a0669ac486da3f2ccc8ca363136fcd40618c45f16e64388bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd71ed13728cbf7ab7ade3184cf8b579a598046c6cccdd7796f757aa7fc1c0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55decf99f11003a6a7c796114439c20b042075e1b233f967fa4e758611e04f66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a6b60ccbba0152ec034f29ebd79fc5f35ee869bcbec1983425f40df246c463c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b769a8a935c8eb447fe825cfb1b4c91be5abbe5992798442096dd3c00ce7c39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b769a8a935c8eb447fe825cfb1b4c91be5abbe5992798442096dd3c00ce7c39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4t5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:44Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:44 crc kubenswrapper[4731]: I1129 07:06:44.642437 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8tvx8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"719caf85-c94c-4dc2-b28f-f5c4ec29e79e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d765b2fbbea1c5491ee3ac49d3d217921d28d094b6204c0c9ca942f5975f7b8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cfkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8tvx8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:44Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:44 crc kubenswrapper[4731]: I1129 07:06:44.658479 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5198aca-b43e-4b59-8ec1-15b9d1cd4f4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0b6a87bb787bc87ec1a2b21918e754bda6b0f197bfdae20b010e8464cb9e70d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1938ff140a7af83ef53896ed4482dd3cf0a5429675ca28291de0915b8da27e81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b19f3d5d17a1f3552b659627294f3a47d667e2da2d3ee71e44fc876614f2f5a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea4dcd3b0a9fda14855ec9e54caa59fbbd8953c958ad8d0abc6ad50d8a9143e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f488e951e0b022155c26f1b4b73363e8e5b82ab6f14e03f9812fe279ff21d36\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:06:25Z\\\"
,\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1129 07:06:25.309042 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1031981952/tls.crt::/tmp/serving-cert-1031981952/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764399966\\\\\\\\\\\\\\\" (2025-11-29 07:06:06 +0000 UTC to 2025-12-29 07:06:07 +0000 UTC (now=2025-11-29 07:06:25.308983691 +0000 UTC))\\\\\\\"\\\\nI1129 07:06:25.309145 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1129 07:06:25.309167 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1129 07:06:25.309190 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764399977\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764399977\\\\\\\\\\\\\\\" (2025-11-29 06:06:17 +0000 UTC to 2026-11-29 06:06:17 +0000 UTC (now=2025-11-29 07:06:25.309162626 +0000 UTC))\\\\\\\"\\\\nI1129 07:06:25.309217 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1129 07:06:25.309274 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1129 07:06:25.309309 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1031981952/tls.crt::/tmp/serving-cert-1031981952/tls.key\\\\\\\"\\\\nI1129 07:06:25.309020 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1129 07:06:25.309457 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1129 07:06:25.309507 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1129 07:06:25.311335 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1277bda1e922e493f1c36b64a4584458ab41cee4b1930d2d6409a585be9e3b38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b7d71882ecd45557e02dfe61f572da923732714cd4480848340fbe72dabcd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34b7d71882ecd45557e02dfe61f572da923732714cd4480848340fbe72dabcd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:44Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:44 crc kubenswrapper[4731]: I1129 07:06:44.674283 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:44Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:44 crc kubenswrapper[4731]: I1129 07:06:44.693525 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:44Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:44 crc kubenswrapper[4731]: I1129 07:06:44.708788 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:44 crc kubenswrapper[4731]: I1129 07:06:44.708839 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:44 crc kubenswrapper[4731]: I1129 07:06:44.708851 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:44 crc kubenswrapper[4731]: I1129 07:06:44.708872 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:44 crc kubenswrapper[4731]: I1129 07:06:44.708885 4731 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:44Z","lastTransitionTime":"2025-11-29T07:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:06:44 crc kubenswrapper[4731]: I1129 07:06:44.711519 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e77534f53f3d0ba9457f1e42e081f8fa9ddeff238db875686b42dd35fca480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-ide
ntity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6830f693b439b32e14b4e825fc51d03dddc4834afc7f2fb9c403ebdc706acb7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:44Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:44 crc kubenswrapper[4731]: I1129 07:06:44.811842 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:44 crc kubenswrapper[4731]: I1129 07:06:44.811888 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 29 07:06:44 crc kubenswrapper[4731]: I1129 07:06:44.811898 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:44 crc kubenswrapper[4731]: I1129 07:06:44.811913 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:44 crc kubenswrapper[4731]: I1129 07:06:44.811924 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:44Z","lastTransitionTime":"2025-11-29T07:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:06:44 crc kubenswrapper[4731]: I1129 07:06:44.915138 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:44 crc kubenswrapper[4731]: I1129 07:06:44.915195 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:44 crc kubenswrapper[4731]: I1129 07:06:44.915206 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:44 crc kubenswrapper[4731]: I1129 07:06:44.915224 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:44 crc kubenswrapper[4731]: I1129 07:06:44.915236 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:44Z","lastTransitionTime":"2025-11-29T07:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:06:45 crc kubenswrapper[4731]: I1129 07:06:45.017892 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:45 crc kubenswrapper[4731]: I1129 07:06:45.018030 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:45 crc kubenswrapper[4731]: I1129 07:06:45.018054 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:45 crc kubenswrapper[4731]: I1129 07:06:45.018076 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:45 crc kubenswrapper[4731]: I1129 07:06:45.018094 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:45Z","lastTransitionTime":"2025-11-29T07:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:45 crc kubenswrapper[4731]: I1129 07:06:45.121099 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:45 crc kubenswrapper[4731]: I1129 07:06:45.121148 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:45 crc kubenswrapper[4731]: I1129 07:06:45.121158 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:45 crc kubenswrapper[4731]: I1129 07:06:45.121176 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:45 crc kubenswrapper[4731]: I1129 07:06:45.121189 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:45Z","lastTransitionTime":"2025-11-29T07:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:45 crc kubenswrapper[4731]: I1129 07:06:45.224092 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:45 crc kubenswrapper[4731]: I1129 07:06:45.224129 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:45 crc kubenswrapper[4731]: I1129 07:06:45.224137 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:45 crc kubenswrapper[4731]: I1129 07:06:45.224152 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:45 crc kubenswrapper[4731]: I1129 07:06:45.224161 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:45Z","lastTransitionTime":"2025-11-29T07:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:45 crc kubenswrapper[4731]: I1129 07:06:45.232208 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7sc4p" event={"ID":"46a65d85-a3f6-4c1f-8a87-799ccfb861c7","Type":"ContainerStarted","Data":"9c8d9cccf59a8d03f8346fd5c4403c377376ed7e429a1cd7b9ea0507a491342e"} Nov 29 07:06:45 crc kubenswrapper[4731]: I1129 07:06:45.248236 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2302dbb7-38db-4752-a5d0-2d055da3aec3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b12c2e84a5d5a5e4b2dfdc3a2052706d4b969ccd45719e05be8f1dbec5555cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shf4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2ffc93896b04d748f6dfda932202450e20bc7b298cc0d79c0c6499ead481d8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shf4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rscr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-29T07:06:45Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:45 crc kubenswrapper[4731]: I1129 07:06:45.275232 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d4585c4-ac4a-4268-b25e-47509c17cfe2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c34e10091af35202da4f97804418c6232aaccd75303ec402ed3f52de19eba30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77f2885a107dcb4bca4dd71970f28cd5c82abd26240a9b47a25bab79d7a6cfca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a3bcebacca0fb7288976faa2ee468f1496339cc6deba0ce36b29d0537d493f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e2099f7c7618e48c7ca7d3a76e9e63e4af75068a72efb7b27b3565c270787f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ec6b9ad02e848a0669ac486da3f2ccc8ca363136fcd40618c45f16e64388bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd71ed13728cbf7ab7ade3184cf8b579a598046c6cccdd7796f757aa7fc1c0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55decf99f11003a6a7c796114439c20b042075e1b233f967fa4e758611e04f66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a6b60ccbba0152ec034f29ebd79fc5f35ee869bcbec1983425f40df246c463c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b769a8a935c8eb447fe825cfb1b4c91be5abbe5992798442096dd3c00ce7c39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b769a8a935c8eb447fe825cfb1b4c91be5abbe5992798442096dd3c00ce7c39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4t5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:45Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:45 crc kubenswrapper[4731]: I1129 07:06:45.287944 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8tvx8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"719caf85-c94c-4dc2-b28f-f5c4ec29e79e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d765b2fbbea1c5491ee3ac49d3d217921d28d094b6204c0c9ca942f5975f7b8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cfkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8tvx8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:45Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:45 crc kubenswrapper[4731]: I1129 07:06:45.305430 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5198aca-b43e-4b59-8ec1-15b9d1cd4f4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0b6a87bb787bc87ec1a2b21918e754bda6b0f197bfdae20b010e8464cb9e70d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1938ff140a7af83ef53896ed4482dd3cf0a5429675ca28291de0915b8da27e81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b19f3d5d17a1f3552b659627294f3a47d667e2da2d3ee71e44fc876614f2f5a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea4dcd3b0a9fda14855ec9e54caa59fbbd8953c958ad8d0abc6ad50d8a9143e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f488e951e0b022155c26f1b4b73363e8e5b82ab6f14e03f9812fe279ff21d36\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:06:25Z\\\"
,\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1129 07:06:25.309042 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1031981952/tls.crt::/tmp/serving-cert-1031981952/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764399966\\\\\\\\\\\\\\\" (2025-11-29 07:06:06 +0000 UTC to 2025-12-29 07:06:07 +0000 UTC (now=2025-11-29 07:06:25.308983691 +0000 UTC))\\\\\\\"\\\\nI1129 07:06:25.309145 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1129 07:06:25.309167 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1129 07:06:25.309190 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764399977\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764399977\\\\\\\\\\\\\\\" (2025-11-29 06:06:17 +0000 UTC to 2026-11-29 06:06:17 +0000 UTC (now=2025-11-29 07:06:25.309162626 +0000 UTC))\\\\\\\"\\\\nI1129 07:06:25.309217 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1129 07:06:25.309274 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1129 07:06:25.309309 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1031981952/tls.crt::/tmp/serving-cert-1031981952/tls.key\\\\\\\"\\\\nI1129 07:06:25.309020 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1129 07:06:25.309457 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1129 07:06:25.309507 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1129 07:06:25.311335 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1277bda1e922e493f1c36b64a4584458ab41cee4b1930d2d6409a585be9e3b38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b7d71882ecd45557e02dfe61f572da923732714cd4480848340fbe72dabcd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34b7d71882ecd45557e02dfe61f572da923732714cd4480848340fbe72dabcd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:45Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:45 crc kubenswrapper[4731]: I1129 07:06:45.323292 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:45Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:45 crc kubenswrapper[4731]: I1129 07:06:45.326644 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:45 crc kubenswrapper[4731]: I1129 07:06:45.326700 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:45 crc 
kubenswrapper[4731]: I1129 07:06:45.326711 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:45 crc kubenswrapper[4731]: I1129 07:06:45.326728 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:45 crc kubenswrapper[4731]: I1129 07:06:45.326739 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:45Z","lastTransitionTime":"2025-11-29T07:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:06:45 crc kubenswrapper[4731]: I1129 07:06:45.339731 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:45Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:45 crc kubenswrapper[4731]: I1129 07:06:45.357419 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e77534f53f3d0ba9457f1e42e081f8fa9ddeff238db875686b42dd35fca480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6830f693b439b32e14b4e825fc51d03dddc4834afc7f2fb9c403ebdc706acb7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:45Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:45 crc kubenswrapper[4731]: I1129 07:06:45.376424 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5rsbt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4026306c62b322aa02e08b8ebd6f9d5d1eaa75e5026c9b1fc4c8d3c1ff77e2a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9z2qj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5rsbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:45Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:45 crc kubenswrapper[4731]: I1129 07:06:45.396597 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sc4p" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46a65d85-a3f6-4c1f-8a87-799ccfb861c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c8d9cccf59a8d03f8346fd5c4403c377376ed7e429a1cd7b9ea0507a491342e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e803b8c855802ae1b136acaaf383df0a2f5efccc46c860581
3d43a3c7c39a700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e803b8c855802ae1b136acaaf383df0a2f5efccc46c8605813d43a3c7c39a700\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53c810e72af2e060f6378cc0809584254c09ab79d506a8d1b4ff14cc9f005d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53c810e72af2e060f6378cc0809584254c09ab79d506a8d1b4ff14cc9f005d08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:
06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db06df6b97d6ad28e3d1df0b237670573e2dc84ad1e4975431d0d6fa74497617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db06df6b97d6ad28e3d1df0b237670573e2dc84ad1e4975431d0d6fa74497617\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://22e4e1564bb5cd4109016d9716311a4f93f63f7f4f694d1f021390c91495faab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22e4e1564bb5cd4109016d9716311a4f93f63f7f4f694d1f021390c91495faab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76097df468fcbf7cbc2a4d0a4a847310bcc371812fd7601716bb946d2ba43b1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76097df468fcbf7cbc2a4d0a4a847310bcc371812fd7601716bb946d2ba43b1f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:43Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2de6a57f63f8b266688eb7d7efef0c3f0fe74a9cf9911709b8c2f6d6cd2f587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2de6a57f63f8b266688eb7d7efef0c3f0fe74a9cf9911709b8c2f6d6cd2f587\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7sc4p\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:45Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:45 crc kubenswrapper[4731]: I1129 07:06:45.414704 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"174eea76-5d67-4e10-9a17-b4efc32676b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a567cf0e6a828fe031138f9f7874d01fe6e034cb814a3c2116c4918ddab71c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6101f1cec3dc38ec587fb109d42b09c353cf5ed6d89c9a2b799eb456b821cf58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5af4e6864674fc088ce1e153b29b66072dc3aae9b7c8c2c5dd3862a505dbac6a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86bdecb59f656fef2662f8fc78d427a4a9816fc211eedca957f20b3103f63a4\\\",
\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:45Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:45 crc kubenswrapper[4731]: I1129 07:06:45.428834 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:45 crc kubenswrapper[4731]: I1129 07:06:45.428893 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:45 crc kubenswrapper[4731]: I1129 07:06:45.428902 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:45 crc kubenswrapper[4731]: I1129 
07:06:45.428921 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:45 crc kubenswrapper[4731]: I1129 07:06:45.428931 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:45Z","lastTransitionTime":"2025-11-29T07:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:06:45 crc kubenswrapper[4731]: I1129 07:06:45.432905 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea9d7960c59f71d1f4141843ab22ff9910f442c6a92fd496e9923a11fd19578b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCou
nt\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:45Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:45 crc kubenswrapper[4731]: I1129 07:06:45.447251 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n6mtz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dc0aca-1039-4a30-a83e-48bd320d0eae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0b16629d98d0b9d5ede0c57e9befc86af08f3b501ef46c1e17b38a2ba4e366a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2bbjx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n6mtz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:45Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:45 crc kubenswrapper[4731]: I1129 07:06:45.461180 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:45Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:45 crc kubenswrapper[4731]: I1129 07:06:45.478418 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da8cd4c4826d5a431134fff8cd78b168bfededf913cc47058413e38aadd335ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-29T07:06:45Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:45 crc kubenswrapper[4731]: I1129 07:06:45.532618 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:45 crc kubenswrapper[4731]: I1129 07:06:45.532664 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:45 crc kubenswrapper[4731]: I1129 07:06:45.532674 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:45 crc kubenswrapper[4731]: I1129 07:06:45.532691 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:45 crc kubenswrapper[4731]: I1129 07:06:45.532702 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:45Z","lastTransitionTime":"2025-11-29T07:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:45 crc kubenswrapper[4731]: I1129 07:06:45.635729 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:45 crc kubenswrapper[4731]: I1129 07:06:45.635791 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:45 crc kubenswrapper[4731]: I1129 07:06:45.635801 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:45 crc kubenswrapper[4731]: I1129 07:06:45.635816 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:45 crc kubenswrapper[4731]: I1129 07:06:45.635827 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:45Z","lastTransitionTime":"2025-11-29T07:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:45 crc kubenswrapper[4731]: I1129 07:06:45.738772 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:45 crc kubenswrapper[4731]: I1129 07:06:45.738827 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:45 crc kubenswrapper[4731]: I1129 07:06:45.738837 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:45 crc kubenswrapper[4731]: I1129 07:06:45.738857 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:45 crc kubenswrapper[4731]: I1129 07:06:45.738867 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:45Z","lastTransitionTime":"2025-11-29T07:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:06:45 crc kubenswrapper[4731]: I1129 07:06:45.806754 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:06:45 crc kubenswrapper[4731]: I1129 07:06:45.806875 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:06:45 crc kubenswrapper[4731]: E1129 07:06:45.806962 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:06:45 crc kubenswrapper[4731]: I1129 07:06:45.806767 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:06:45 crc kubenswrapper[4731]: E1129 07:06:45.807116 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:06:45 crc kubenswrapper[4731]: E1129 07:06:45.807228 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:06:45 crc kubenswrapper[4731]: I1129 07:06:45.841740 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:45 crc kubenswrapper[4731]: I1129 07:06:45.841803 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:45 crc kubenswrapper[4731]: I1129 07:06:45.841816 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:45 crc kubenswrapper[4731]: I1129 07:06:45.841837 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:45 crc kubenswrapper[4731]: I1129 07:06:45.841851 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:45Z","lastTransitionTime":"2025-11-29T07:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:45 crc kubenswrapper[4731]: I1129 07:06:45.944901 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:45 crc kubenswrapper[4731]: I1129 07:06:45.945226 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:45 crc kubenswrapper[4731]: I1129 07:06:45.945332 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:45 crc kubenswrapper[4731]: I1129 07:06:45.945402 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:45 crc kubenswrapper[4731]: I1129 07:06:45.945467 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:45Z","lastTransitionTime":"2025-11-29T07:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:46 crc kubenswrapper[4731]: I1129 07:06:46.049207 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:46 crc kubenswrapper[4731]: I1129 07:06:46.049259 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:46 crc kubenswrapper[4731]: I1129 07:06:46.049270 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:46 crc kubenswrapper[4731]: I1129 07:06:46.049288 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:46 crc kubenswrapper[4731]: I1129 07:06:46.049300 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:46Z","lastTransitionTime":"2025-11-29T07:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:46 crc kubenswrapper[4731]: I1129 07:06:46.153266 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:46 crc kubenswrapper[4731]: I1129 07:06:46.153324 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:46 crc kubenswrapper[4731]: I1129 07:06:46.153337 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:46 crc kubenswrapper[4731]: I1129 07:06:46.153357 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:46 crc kubenswrapper[4731]: I1129 07:06:46.153372 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:46Z","lastTransitionTime":"2025-11-29T07:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:46 crc kubenswrapper[4731]: I1129 07:06:46.256477 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:46 crc kubenswrapper[4731]: I1129 07:06:46.256836 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:46 crc kubenswrapper[4731]: I1129 07:06:46.256923 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:46 crc kubenswrapper[4731]: I1129 07:06:46.257010 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:46 crc kubenswrapper[4731]: I1129 07:06:46.257078 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:46Z","lastTransitionTime":"2025-11-29T07:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:46 crc kubenswrapper[4731]: I1129 07:06:46.360296 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:46 crc kubenswrapper[4731]: I1129 07:06:46.360715 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:46 crc kubenswrapper[4731]: I1129 07:06:46.360731 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:46 crc kubenswrapper[4731]: I1129 07:06:46.360750 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:46 crc kubenswrapper[4731]: I1129 07:06:46.360763 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:46Z","lastTransitionTime":"2025-11-29T07:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:46 crc kubenswrapper[4731]: I1129 07:06:46.463389 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:46 crc kubenswrapper[4731]: I1129 07:06:46.463437 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:46 crc kubenswrapper[4731]: I1129 07:06:46.463452 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:46 crc kubenswrapper[4731]: I1129 07:06:46.463470 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:46 crc kubenswrapper[4731]: I1129 07:06:46.463483 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:46Z","lastTransitionTime":"2025-11-29T07:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:46 crc kubenswrapper[4731]: I1129 07:06:46.566995 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:46 crc kubenswrapper[4731]: I1129 07:06:46.567050 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:46 crc kubenswrapper[4731]: I1129 07:06:46.567063 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:46 crc kubenswrapper[4731]: I1129 07:06:46.567084 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:46 crc kubenswrapper[4731]: I1129 07:06:46.567098 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:46Z","lastTransitionTime":"2025-11-29T07:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:46 crc kubenswrapper[4731]: I1129 07:06:46.670951 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:46 crc kubenswrapper[4731]: I1129 07:06:46.671013 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:46 crc kubenswrapper[4731]: I1129 07:06:46.671024 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:46 crc kubenswrapper[4731]: I1129 07:06:46.671045 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:46 crc kubenswrapper[4731]: I1129 07:06:46.671056 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:46Z","lastTransitionTime":"2025-11-29T07:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:46 crc kubenswrapper[4731]: I1129 07:06:46.774429 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:46 crc kubenswrapper[4731]: I1129 07:06:46.774492 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:46 crc kubenswrapper[4731]: I1129 07:06:46.774511 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:46 crc kubenswrapper[4731]: I1129 07:06:46.774542 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:46 crc kubenswrapper[4731]: I1129 07:06:46.774594 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:46Z","lastTransitionTime":"2025-11-29T07:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:46 crc kubenswrapper[4731]: I1129 07:06:46.877517 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:46 crc kubenswrapper[4731]: I1129 07:06:46.877908 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:46 crc kubenswrapper[4731]: I1129 07:06:46.878005 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:46 crc kubenswrapper[4731]: I1129 07:06:46.878106 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:46 crc kubenswrapper[4731]: I1129 07:06:46.878195 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:46Z","lastTransitionTime":"2025-11-29T07:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:46 crc kubenswrapper[4731]: I1129 07:06:46.980973 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:46 crc kubenswrapper[4731]: I1129 07:06:46.981015 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:46 crc kubenswrapper[4731]: I1129 07:06:46.981031 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:46 crc kubenswrapper[4731]: I1129 07:06:46.981054 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:46 crc kubenswrapper[4731]: I1129 07:06:46.981067 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:46Z","lastTransitionTime":"2025-11-29T07:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:47 crc kubenswrapper[4731]: I1129 07:06:47.083989 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:47 crc kubenswrapper[4731]: I1129 07:06:47.084066 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:47 crc kubenswrapper[4731]: I1129 07:06:47.084078 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:47 crc kubenswrapper[4731]: I1129 07:06:47.084106 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:47 crc kubenswrapper[4731]: I1129 07:06:47.084126 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:47Z","lastTransitionTime":"2025-11-29T07:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:47 crc kubenswrapper[4731]: I1129 07:06:47.187198 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:47 crc kubenswrapper[4731]: I1129 07:06:47.187240 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:47 crc kubenswrapper[4731]: I1129 07:06:47.187250 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:47 crc kubenswrapper[4731]: I1129 07:06:47.187265 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:47 crc kubenswrapper[4731]: I1129 07:06:47.187275 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:47Z","lastTransitionTime":"2025-11-29T07:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:47 crc kubenswrapper[4731]: I1129 07:06:47.246188 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x4t5j_7d4585c4-ac4a-4268-b25e-47509c17cfe2/ovnkube-controller/0.log" Nov 29 07:06:47 crc kubenswrapper[4731]: I1129 07:06:47.252371 4731 generic.go:334] "Generic (PLEG): container finished" podID="7d4585c4-ac4a-4268-b25e-47509c17cfe2" containerID="55decf99f11003a6a7c796114439c20b042075e1b233f967fa4e758611e04f66" exitCode=1 Nov 29 07:06:47 crc kubenswrapper[4731]: I1129 07:06:47.252481 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" event={"ID":"7d4585c4-ac4a-4268-b25e-47509c17cfe2","Type":"ContainerDied","Data":"55decf99f11003a6a7c796114439c20b042075e1b233f967fa4e758611e04f66"} Nov 29 07:06:47 crc kubenswrapper[4731]: I1129 07:06:47.253630 4731 scope.go:117] "RemoveContainer" containerID="55decf99f11003a6a7c796114439c20b042075e1b233f967fa4e758611e04f66" Nov 29 07:06:47 crc kubenswrapper[4731]: I1129 07:06:47.275079 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:47Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:47 crc kubenswrapper[4731]: I1129 07:06:47.291735 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:47 crc kubenswrapper[4731]: I1129 07:06:47.291787 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:47 crc kubenswrapper[4731]: I1129 07:06:47.291797 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:47 crc 
kubenswrapper[4731]: I1129 07:06:47.291814 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:47 crc kubenswrapper[4731]: I1129 07:06:47.291827 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:47Z","lastTransitionTime":"2025-11-29T07:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:06:47 crc kubenswrapper[4731]: I1129 07:06:47.294619 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da8cd4c4826d5a431134fff8cd78b168bfededf913cc47058413e38aadd335ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:47Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:47 crc kubenswrapper[4731]: I1129 07:06:47.310598 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8tvx8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"719caf85-c94c-4dc2-b28f-f5c4ec29e79e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d765b2fbbea1c5491ee3ac49d3d217921d28d094b6204c0c9ca942f5975f7b8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cfkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8tvx8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:47Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:47 crc kubenswrapper[4731]: I1129 07:06:47.332015 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5198aca-b43e-4b59-8ec1-15b9d1cd4f4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0b6a87bb787bc87ec1a2b21918e754bda6b0f197bfdae20b010e8464cb9e70d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1938ff140a7af83ef53896ed4482dd3cf0a5429675ca28291de0915b8da27e81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b19f3d5d17a1f3552b659627294f3a47d667e2da2d3ee71e44fc876614f2f5a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea4dcd3b0a9fda14855ec9e54caa59fbbd8953c958ad8d0abc6ad50d8a9143e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f488e951e0b022155c26f1b4b73363e8e5b82ab6f14e03f9812fe279ff21d36\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1129 07:06:25.309042 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1031981952/tls.crt::/tmp/serving-cert-1031981952/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764399966\\\\\\\\\\\\\\\" (2025-11-29 07:06:06 +0000 UTC to 2025-12-29 07:06:07 +0000 UTC (now=2025-11-29 07:06:25.308983691 +0000 UTC))\\\\\\\"\\\\nI1129 07:06:25.309145 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1129 07:06:25.309167 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1129 07:06:25.309190 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764399977\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] 
issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764399977\\\\\\\\\\\\\\\" (2025-11-29 06:06:17 +0000 UTC to 2026-11-29 06:06:17 +0000 UTC (now=2025-11-29 07:06:25.309162626 +0000 UTC))\\\\\\\"\\\\nI1129 07:06:25.309217 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1129 07:06:25.309274 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1129 07:06:25.309309 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1031981952/tls.crt::/tmp/serving-cert-1031981952/tls.key\\\\\\\"\\\\nI1129 07:06:25.309020 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1129 07:06:25.309457 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1129 07:06:25.309507 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1129 07:06:25.311335 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1277bda1e922e493f1c36b64a4584458ab41cee4b1930d2d6409a585be9e3b38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b7d71882ecd45557e02dfe61f572da923732714cd4480848340fbe72dabcd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34b7d71882ecd45557e02dfe61f572da923732714cd4480848340fbe72dabcd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:47Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:47 crc kubenswrapper[4731]: I1129 07:06:47.348905 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2302dbb7-38db-4752-a5d0-2d055da3aec3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b12c2e84a5d5a5e4b2dfdc3a2052706d4b969ccd45719e05be8f1dbec5555cd6\\\",\\\"image\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shf4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2ffc93896b04d748f6dfda932202450e20bc7b298cc0d79c0c6499ead481d8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shf4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-rscr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:47Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:47 crc kubenswrapper[4731]: I1129 07:06:47.394239 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:47 crc kubenswrapper[4731]: I1129 07:06:47.394308 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:47 crc kubenswrapper[4731]: I1129 07:06:47.394321 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:47 crc kubenswrapper[4731]: I1129 07:06:47.394343 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:47 crc kubenswrapper[4731]: I1129 07:06:47.394358 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:47Z","lastTransitionTime":"2025-11-29T07:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:47 crc kubenswrapper[4731]: I1129 07:06:47.405385 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d4585c4-ac4a-4268-b25e-47509c17cfe2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c34e10091af35202da4f97804418c6232aaccd75303ec402ed3f52de19eba30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77f2885a107dcb4bca4dd71970f28cd5c82abd26240a9b47a25bab79d7a6cfca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a3bcebacca0fb7288976faa2ee468f1496339cc6deba0ce36b29d0537d493f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e2099f7c7618e48c7ca7d3a76e9e63e4af75068a72efb7b27b3565c270787f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ec6b9ad02e848a0669ac486da3f2ccc8ca363136fcd40618c45f16e64388bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd71ed13728cbf7ab7ade3184cf8b579a598046c6cccdd7796f757aa7fc1c0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55decf99f11003a6a7c796114439c20b042075e1b233f967fa4e758611e04f66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55decf99f11003a6a7c796114439c20b042075e1b233f967fa4e758611e04f66\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:06:46Z\\\",\\\"message\\\":\\\"flector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:06:46.450664 5998 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:06:46.451031 5998 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1129 07:06:46.451048 5998 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1129 07:06:46.451072 5998 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1129 07:06:46.451077 5998 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1129 07:06:46.451077 5998 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1129 07:06:46.451092 5998 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1129 07:06:46.451098 5998 handler.go:208] Removed *v1.Node event handler 2\\\\nI1129 07:06:46.451110 5998 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1129 07:06:46.451129 5998 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1129 07:06:46.451133 5998 handler.go:208] Removed *v1.Node event handler 7\\\\nI1129 07:06:46.451154 5998 factory.go:656] Stopping watch factory\\\\nI1129 07:06:46.451159 5998 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1129 07:06:46.451168 5998 ovnkube.go:599] Stopped ovnkube\\\\nI1129 
07:06:4\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":
\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a6b60ccbba0152ec034f29ebd79fc5f35ee869bcbec1983425f40df246c463c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b769a8a935c8eb447fe825cfb1b4c91be5abbe5992798442096dd3c00ce7c39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b769a8a935c8eb447fe825cfb1b4c91be5abbe5992798442096dd3c00ce
7c39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4t5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:47Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:47 crc kubenswrapper[4731]: I1129 07:06:47.430455 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:47Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:47 crc kubenswrapper[4731]: I1129 07:06:47.456988 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:47Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:47 crc kubenswrapper[4731]: I1129 07:06:47.474608 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e77534f53f3d0ba9457f1e42e081f8fa9ddeff238db875686b42dd35fca480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6830f693b439b32e14b4e825fc51d03dddc4834afc7f2fb9c403ebdc706acb7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:47Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:47 crc kubenswrapper[4731]: I1129 07:06:47.492496 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"174eea76-5d67-4e10-9a17-b4efc32676b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a567cf0e6a828fe031138f9f7874d01fe6e034cb814a3c2116c4918ddab71c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6101f1cec3dc38ec587fb109d42b09c353cf5ed6d89c9a2b799eb456b821cf58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5af4e6864674fc088ce1e153b29b66072dc3aae9b7c8c2c5dd3862a505dbac6a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86bdecb59f656fef2662f8fc78d427a4a9816fc211eedca957f20b3103f63a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:47Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:47 crc kubenswrapper[4731]: I1129 07:06:47.497418 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:47 crc kubenswrapper[4731]: I1129 07:06:47.497509 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:47 crc kubenswrapper[4731]: I1129 07:06:47.497528 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:47 crc kubenswrapper[4731]: I1129 07:06:47.497555 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:47 crc kubenswrapper[4731]: I1129 07:06:47.497588 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:47Z","lastTransitionTime":"2025-11-29T07:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:06:47 crc kubenswrapper[4731]: I1129 07:06:47.510879 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea9d7960c59f71d1f4141843ab22ff9910f442c6a92fd496e9923a11fd19578b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:47Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:47 crc kubenswrapper[4731]: I1129 07:06:47.525546 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n6mtz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dc0aca-1039-4a30-a83e-48bd320d0eae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0b16629d98d0b9d5ede0c57e9befc86af08f3b501ef46c1e17b38a2ba4e366a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2bbjx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n6mtz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:47Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:47 crc kubenswrapper[4731]: I1129 07:06:47.543094 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5rsbt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4026306c62b322aa02e08b8ebd6f9d5d1eaa75e5026c9b1fc4c8d3c1ff77e2a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9z2qj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5rsbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:47Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:47 crc kubenswrapper[4731]: I1129 07:06:47.563846 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sc4p" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46a65d85-a3f6-4c1f-8a87-799ccfb861c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c8d9cccf59a8d03f8346fd5c4403c377376ed7e429a1cd7b9ea0507a491342e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e803b8c855802ae1b136acaaf383df0a2f5efccc46c860581
3d43a3c7c39a700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e803b8c855802ae1b136acaaf383df0a2f5efccc46c8605813d43a3c7c39a700\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53c810e72af2e060f6378cc0809584254c09ab79d506a8d1b4ff14cc9f005d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53c810e72af2e060f6378cc0809584254c09ab79d506a8d1b4ff14cc9f005d08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:
06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db06df6b97d6ad28e3d1df0b237670573e2dc84ad1e4975431d0d6fa74497617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db06df6b97d6ad28e3d1df0b237670573e2dc84ad1e4975431d0d6fa74497617\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://22e4e1564bb5cd4109016d9716311a4f93f63f7f4f694d1f021390c91495faab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22e4e1564bb5cd4109016d9716311a4f93f63f7f4f694d1f021390c91495faab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76097df468fcbf7cbc2a4d0a4a847310bcc371812fd7601716bb946d2ba43b1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76097df468fcbf7cbc2a4d0a4a847310bcc371812fd7601716bb946d2ba43b1f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:43Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2de6a57f63f8b266688eb7d7efef0c3f0fe74a9cf9911709b8c2f6d6cd2f587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2de6a57f63f8b266688eb7d7efef0c3f0fe74a9cf9911709b8c2f6d6cd2f587\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7sc4p\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:47Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:47 crc kubenswrapper[4731]: I1129 07:06:47.600970 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:47 crc kubenswrapper[4731]: I1129 07:06:47.601037 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:47 crc kubenswrapper[4731]: I1129 07:06:47.601050 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:47 crc kubenswrapper[4731]: I1129 07:06:47.601080 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:47 crc kubenswrapper[4731]: I1129 07:06:47.601099 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:47Z","lastTransitionTime":"2025-11-29T07:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:06:47 crc kubenswrapper[4731]: I1129 07:06:47.697379 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7d5hj"] Nov 29 07:06:47 crc kubenswrapper[4731]: I1129 07:06:47.698051 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7d5hj" Nov 29 07:06:47 crc kubenswrapper[4731]: I1129 07:06:47.701235 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Nov 29 07:06:47 crc kubenswrapper[4731]: I1129 07:06:47.701255 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Nov 29 07:06:47 crc kubenswrapper[4731]: I1129 07:06:47.704139 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:47 crc kubenswrapper[4731]: I1129 07:06:47.704178 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:47 crc kubenswrapper[4731]: I1129 07:06:47.704188 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:47 crc kubenswrapper[4731]: I1129 07:06:47.704208 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:47 crc kubenswrapper[4731]: I1129 07:06:47.704221 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:47Z","lastTransitionTime":"2025-11-29T07:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:47 crc kubenswrapper[4731]: I1129 07:06:47.714888 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:47Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:47 crc kubenswrapper[4731]: I1129 07:06:47.728461 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da8cd4c4826d5a431134fff8cd78b168bfededf913cc47058413e38aadd335ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-29T07:06:47Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:47 crc kubenswrapper[4731]: I1129 07:06:47.744457 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5198aca-b43e-4b59-8ec1-15b9d1cd4f4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0b6a87bb787bc87ec1a2b21918e754bda6b0f197bfdae20b010e8464cb9e70d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\"
:\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1938ff140a7af83ef53896ed4482dd3cf0a5429675ca28291de0915b8da27e81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b19f3d5d17a1f3552b659627294f3a47d667e2da2d3ee71e44fc876614f2f5a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea4dcd3b0a9fda14855ec9e54caa59fbbd8953c958ad8d0abc6ad50d8a9143e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiser
ver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f488e951e0b022155c26f1b4b73363e8e5b82ab6f14e03f9812fe279ff21d36\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1129 07:06:25.309042 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1031981952/tls.crt::/tmp/serving-cert-1031981952/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764399966\\\\\\\\\\\\\\\" (2025-11-29 07:06:06 +0000 UTC to 2025-12-29 07:06:07 +0000 UTC (now=2025-11-29 07:06:25.308983691 +0000 UTC))\\\\\\\"\\\\nI1129 07:06:25.309145 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1129 07:06:25.309167 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1129 07:06:25.309190 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764399977\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764399977\\\\\\\\\\\\\\\" (2025-11-29 06:06:17 +0000 UTC to 2026-11-29 06:06:17 +0000 UTC (now=2025-11-29 07:06:25.309162626 +0000 UTC))\\\\\\\"\\\\nI1129 07:06:25.309217 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1129 07:06:25.309274 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1129 07:06:25.309309 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1031981952/tls.crt::/tmp/serving-cert-1031981952/tls.key\\\\\\\"\\\\nI1129 07:06:25.309020 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1129 07:06:25.309457 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1129 07:06:25.309507 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1129 07:06:25.311335 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1277bda1e922e493f1c36b64a4584458ab41cee4b1930d2d6409a585be9e3b38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b7d71882ecd45557e02dfe61f5
72da923732714cd4480848340fbe72dabcd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34b7d71882ecd45557e02dfe61f572da923732714cd4480848340fbe72dabcd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:47Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:47 crc kubenswrapper[4731]: I1129 07:06:47.753068 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6de2552c-90ca-42ab-94c0-365f2c2380d5-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-7d5hj\" (UID: \"6de2552c-90ca-42ab-94c0-365f2c2380d5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7d5hj" Nov 29 07:06:47 crc kubenswrapper[4731]: I1129 07:06:47.753326 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-l44ds\" (UniqueName: \"kubernetes.io/projected/6de2552c-90ca-42ab-94c0-365f2c2380d5-kube-api-access-l44ds\") pod \"ovnkube-control-plane-749d76644c-7d5hj\" (UID: \"6de2552c-90ca-42ab-94c0-365f2c2380d5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7d5hj" Nov 29 07:06:47 crc kubenswrapper[4731]: I1129 07:06:47.753452 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6de2552c-90ca-42ab-94c0-365f2c2380d5-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-7d5hj\" (UID: \"6de2552c-90ca-42ab-94c0-365f2c2380d5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7d5hj" Nov 29 07:06:47 crc kubenswrapper[4731]: I1129 07:06:47.753625 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6de2552c-90ca-42ab-94c0-365f2c2380d5-env-overrides\") pod \"ovnkube-control-plane-749d76644c-7d5hj\" (UID: \"6de2552c-90ca-42ab-94c0-365f2c2380d5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7d5hj" Nov 29 07:06:47 crc kubenswrapper[4731]: I1129 07:06:47.760283 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2302dbb7-38db-4752-a5d0-2d055da3aec3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b12c2e84a5d5a5e4b2dfdc3a2052706d4b969ccd45719e05be8f1dbec5555cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shf4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2ffc93896b04d748f6dfda932202450e20bc7b2
98cc0d79c0c6499ead481d8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shf4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rscr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:47Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:47 crc kubenswrapper[4731]: I1129 07:06:47.781671 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d4585c4-ac4a-4268-b25e-47509c17cfe2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c34e10091af35202da4f97804418c6232aaccd75303ec402ed3f52de19eba30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77f2885a107dcb4bca4dd71970f28cd5c82abd26240a9b47a25bab79d7a6cfca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a3bcebacca0fb7288976faa2ee468f1496339cc6deba0ce36b29d0537d493f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e2099f7c7618e48c7ca7d3a76e9e63e4af75068a72efb7b27b3565c270787f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ec6b9ad02e848a0669ac486da3f2ccc8ca363136fcd40618c45f16e64388bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd71ed13728cbf7ab7ade3184cf8b579a598046c6cccdd7796f757aa7fc1c0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55decf99f11003a6a7c796114439c20b042075e1b233f967fa4e758611e04f66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55decf99f11003a6a7c796114439c20b042075e1b233f967fa4e758611e04f66\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:06:46Z\\\",\\\"message\\\":\\\"flector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:06:46.450664 5998 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:06:46.451031 5998 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1129 07:06:46.451048 5998 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1129 07:06:46.451072 5998 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1129 07:06:46.451077 5998 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1129 07:06:46.451077 5998 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1129 07:06:46.451092 5998 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1129 07:06:46.451098 5998 handler.go:208] Removed *v1.Node event handler 2\\\\nI1129 07:06:46.451110 5998 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1129 07:06:46.451129 5998 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1129 07:06:46.451133 5998 handler.go:208] Removed *v1.Node event handler 7\\\\nI1129 07:06:46.451154 5998 factory.go:656] Stopping watch factory\\\\nI1129 07:06:46.451159 5998 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1129 07:06:46.451168 5998 ovnkube.go:599] Stopped ovnkube\\\\nI1129 
07:06:4\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":
\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a6b60ccbba0152ec034f29ebd79fc5f35ee869bcbec1983425f40df246c463c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b769a8a935c8eb447fe825cfb1b4c91be5abbe5992798442096dd3c00ce7c39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b769a8a935c8eb447fe825cfb1b4c91be5abbe5992798442096dd3c00ce
7c39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4t5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:47Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:47 crc kubenswrapper[4731]: I1129 07:06:47.804056 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8tvx8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"719caf85-c94c-4dc2-b28f-f5c4ec29e79e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d765b2fbbea1c5491ee3ac49d3d217921d28d094b6204c0c9ca942f5975f7b8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cfkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8tvx8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:47Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:47 crc kubenswrapper[4731]: I1129 07:06:47.805956 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:06:47 crc kubenswrapper[4731]: I1129 07:06:47.806092 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:06:47 crc kubenswrapper[4731]: I1129 07:06:47.806035 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:06:47 crc kubenswrapper[4731]: E1129 07:06:47.806362 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:06:47 crc kubenswrapper[4731]: E1129 07:06:47.806466 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:06:47 crc kubenswrapper[4731]: E1129 07:06:47.806616 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:06:47 crc kubenswrapper[4731]: I1129 07:06:47.807478 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:47 crc kubenswrapper[4731]: I1129 07:06:47.807536 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:47 crc kubenswrapper[4731]: I1129 07:06:47.807550 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:47 crc kubenswrapper[4731]: I1129 07:06:47.807609 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:47 crc kubenswrapper[4731]: I1129 07:06:47.807626 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:47Z","lastTransitionTime":"2025-11-29T07:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:47 crc kubenswrapper[4731]: I1129 07:06:47.822097 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7d5hj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de2552c-90ca-42ab-94c0-365f2c2380d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l44ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l44ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7d5hj\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:47Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:47 crc kubenswrapper[4731]: I1129 07:06:47.837588 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:47Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:47 crc kubenswrapper[4731]: I1129 07:06:47.854030 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:47Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:47 crc kubenswrapper[4731]: I1129 07:06:47.854419 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6de2552c-90ca-42ab-94c0-365f2c2380d5-env-overrides\") pod \"ovnkube-control-plane-749d76644c-7d5hj\" (UID: \"6de2552c-90ca-42ab-94c0-365f2c2380d5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7d5hj" Nov 29 
07:06:47 crc kubenswrapper[4731]: I1129 07:06:47.854463 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6de2552c-90ca-42ab-94c0-365f2c2380d5-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-7d5hj\" (UID: \"6de2552c-90ca-42ab-94c0-365f2c2380d5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7d5hj" Nov 29 07:06:47 crc kubenswrapper[4731]: I1129 07:06:47.854493 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l44ds\" (UniqueName: \"kubernetes.io/projected/6de2552c-90ca-42ab-94c0-365f2c2380d5-kube-api-access-l44ds\") pod \"ovnkube-control-plane-749d76644c-7d5hj\" (UID: \"6de2552c-90ca-42ab-94c0-365f2c2380d5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7d5hj" Nov 29 07:06:47 crc kubenswrapper[4731]: I1129 07:06:47.854520 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6de2552c-90ca-42ab-94c0-365f2c2380d5-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-7d5hj\" (UID: \"6de2552c-90ca-42ab-94c0-365f2c2380d5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7d5hj" Nov 29 07:06:47 crc kubenswrapper[4731]: I1129 07:06:47.855230 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6de2552c-90ca-42ab-94c0-365f2c2380d5-env-overrides\") pod \"ovnkube-control-plane-749d76644c-7d5hj\" (UID: \"6de2552c-90ca-42ab-94c0-365f2c2380d5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7d5hj" Nov 29 07:06:47 crc kubenswrapper[4731]: I1129 07:06:47.855361 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6de2552c-90ca-42ab-94c0-365f2c2380d5-ovnkube-config\") pod 
\"ovnkube-control-plane-749d76644c-7d5hj\" (UID: \"6de2552c-90ca-42ab-94c0-365f2c2380d5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7d5hj" Nov 29 07:06:47 crc kubenswrapper[4731]: I1129 07:06:47.860810 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6de2552c-90ca-42ab-94c0-365f2c2380d5-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-7d5hj\" (UID: \"6de2552c-90ca-42ab-94c0-365f2c2380d5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7d5hj" Nov 29 07:06:47 crc kubenswrapper[4731]: I1129 07:06:47.873981 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e77534f53f3d0ba9457f1e42e081f8fa9ddeff238db875686b42dd35fca480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6830f693b439b32e14b4e825fc51d03dddc4834afc7f2fb9c403ebdc706acb7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:47Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:47 crc kubenswrapper[4731]: I1129 07:06:47.877733 4731 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-l44ds\" (UniqueName: \"kubernetes.io/projected/6de2552c-90ca-42ab-94c0-365f2c2380d5-kube-api-access-l44ds\") pod \"ovnkube-control-plane-749d76644c-7d5hj\" (UID: \"6de2552c-90ca-42ab-94c0-365f2c2380d5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7d5hj" Nov 29 07:06:47 crc kubenswrapper[4731]: I1129 07:06:47.890590 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"174eea76-5d67-4e10-9a17-b4efc32676b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a567cf0e6a828fe031138f9f7874d01fe6e034cb814a3c2116c4918ddab71c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"starte
dAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6101f1cec3dc38ec587fb109d42b09c353cf5ed6d89c9a2b799eb456b821cf58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5af4e6864674fc088ce1e153b29b66072dc3aae9b7c8c2c5dd3862a505dbac6a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86bdecb59f656fef2662f8fc78d427a4a98
16fc211eedca957f20b3103f63a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:47Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:47 crc kubenswrapper[4731]: I1129 07:06:47.909696 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea9d7960c59f71d1f4141843ab22ff9910f442c6a92fd496e9923a11fd19578b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:47Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:47 crc kubenswrapper[4731]: I1129 07:06:47.910696 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:47 crc kubenswrapper[4731]: I1129 07:06:47.910727 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:47 crc kubenswrapper[4731]: I1129 07:06:47.910738 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:47 crc kubenswrapper[4731]: I1129 07:06:47.910759 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:47 crc kubenswrapper[4731]: I1129 07:06:47.910778 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:47Z","lastTransitionTime":"2025-11-29T07:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:47 crc kubenswrapper[4731]: I1129 07:06:47.924257 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n6mtz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dc0aca-1039-4a30-a83e-48bd320d0eae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0b16629d98d0b9d5ede0c57e9befc86af08f3b501ef46c1e17b38a2ba4e366a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2bbjx\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n6mtz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:47Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:47 crc kubenswrapper[4731]: I1129 07:06:47.939685 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5rsbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4026306c62b322aa02e08b8ebd6f9d5d1eaa75e5026c9b1fc4c8d3c1ff77e2a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9z2qj\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5rsbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:47Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:47 crc kubenswrapper[4731]: I1129 07:06:47.958585 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sc4p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46a65d85-a3f6-4c1f-8a87-799ccfb861c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c8d9cccf59a8d03f8346fd5c4403c377376ed7e429a1cd7b9ea0507a491342e\\\",\\\"image\\\":\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e803b8c855802ae1b136acaaf383df0a2f5efccc46c8605813d43a3c7c39a700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e803b8c855802ae1b136acaaf383df0a2f5efccc46c8605813d43a3c7c39a700\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53c810e72af2e060f6378cc0809584254c09ab79d506a8d1b4ff14cc9f005d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53c810e72af2e060f6378cc0809584254c09ab79d506a8d1b4ff14cc9f005d08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db06df6b97d6ad28e3d1df0b237670573e2dc84ad1e4975431d0d6fa74497617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plu
gin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db06df6b97d6ad28e3d1df0b237670573e2dc84ad1e4975431d0d6fa74497617\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e4e1564bb5cd4109016d9716311a4f93f63f7f4f694d1f021390c91495faab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22e4e1564bb5cd4109016d9716311a4f93f63f7f4f694d1f021390c91495faab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/va
r/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76097df468fcbf7cbc2a4d0a4a847310bcc371812fd7601716bb946d2ba43b1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76097df468fcbf7cbc2a4d0a4a847310bcc371812fd7601716bb946d2ba43b1f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2de6a57f63f8b266688eb7d7efef0c3f0fe74a9cf9911709b8c2f6d6cd2f587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\
\\":{\\\"containerID\\\":\\\"cri-o://a2de6a57f63f8b266688eb7d7efef0c3f0fe74a9cf9911709b8c2f6d6cd2f587\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7sc4p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:47Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:48 crc kubenswrapper[4731]: I1129 07:06:48.012335 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7d5hj" Nov 29 07:06:48 crc kubenswrapper[4731]: I1129 07:06:48.013907 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:48 crc kubenswrapper[4731]: I1129 07:06:48.013954 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:48 crc kubenswrapper[4731]: I1129 07:06:48.013964 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:48 crc kubenswrapper[4731]: I1129 07:06:48.013990 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:48 crc kubenswrapper[4731]: I1129 07:06:48.014002 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:48Z","lastTransitionTime":"2025-11-29T07:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:48 crc kubenswrapper[4731]: I1129 07:06:48.117116 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:48 crc kubenswrapper[4731]: I1129 07:06:48.117171 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:48 crc kubenswrapper[4731]: I1129 07:06:48.117185 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:48 crc kubenswrapper[4731]: I1129 07:06:48.117206 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:48 crc kubenswrapper[4731]: I1129 07:06:48.117218 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:48Z","lastTransitionTime":"2025-11-29T07:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:48 crc kubenswrapper[4731]: I1129 07:06:48.219495 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:48 crc kubenswrapper[4731]: I1129 07:06:48.219543 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:48 crc kubenswrapper[4731]: I1129 07:06:48.219555 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:48 crc kubenswrapper[4731]: I1129 07:06:48.219587 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:48 crc kubenswrapper[4731]: I1129 07:06:48.219599 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:48Z","lastTransitionTime":"2025-11-29T07:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:48 crc kubenswrapper[4731]: I1129 07:06:48.258479 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x4t5j_7d4585c4-ac4a-4268-b25e-47509c17cfe2/ovnkube-controller/0.log" Nov 29 07:06:48 crc kubenswrapper[4731]: I1129 07:06:48.262037 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" event={"ID":"7d4585c4-ac4a-4268-b25e-47509c17cfe2","Type":"ContainerStarted","Data":"ab64db6427b4ff60f30e4a0358c77770dd407df7ce1ba82f898a50bb5b1b16ff"} Nov 29 07:06:48 crc kubenswrapper[4731]: I1129 07:06:48.262613 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" Nov 29 07:06:48 crc kubenswrapper[4731]: I1129 07:06:48.263310 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7d5hj" event={"ID":"6de2552c-90ca-42ab-94c0-365f2c2380d5","Type":"ContainerStarted","Data":"f5337b421539428dead50c0b7f82783258d4cb4ca1801c46b15200ed674aeb8a"} Nov 29 07:06:48 crc kubenswrapper[4731]: I1129 07:06:48.281196 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:48Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:48 crc kubenswrapper[4731]: I1129 07:06:48.298654 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:48Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:48 crc kubenswrapper[4731]: I1129 07:06:48.312643 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e77534f53f3d0ba9457f1e42e081f8fa9ddeff238db875686b42dd35fca480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6830f693b439b32e14b4e825fc51d03dddc4834afc7f2fb9c403ebdc706acb7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:48Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:48 crc kubenswrapper[4731]: I1129 07:06:48.322762 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:48 crc kubenswrapper[4731]: I1129 07:06:48.322810 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:48 crc kubenswrapper[4731]: I1129 07:06:48.322821 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:48 crc kubenswrapper[4731]: I1129 07:06:48.322837 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:48 crc kubenswrapper[4731]: I1129 07:06:48.322857 4731 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:48Z","lastTransitionTime":"2025-11-29T07:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:06:48 crc kubenswrapper[4731]: I1129 07:06:48.325688 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"174eea76-5d67-4e10-9a17-b4efc32676b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a567cf0e6a828fe031138f9f7874d01fe6e034cb814a3c2116c4918ddab71c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6101f1cec3dc38ec587fb109d42b09c353cf5ed6d89c9a2b799eb456b821cf58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5af4e6864674fc088ce1e153b29b66072dc3aae9b7c8c2c5dd3862a505dbac6a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://b86bdecb59f656fef2662f8fc78d427a4a9816fc211eedca957f20b3103f63a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:48Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:48 crc kubenswrapper[4731]: I1129 07:06:48.343425 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea9d7960c59f71d1f4141843ab22ff9910f442c6a92fd496e9923a11fd19578b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:48Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:48 crc kubenswrapper[4731]: I1129 07:06:48.358308 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n6mtz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dc0aca-1039-4a30-a83e-48bd320d0eae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0b16629d98d0b9d5ede0c57e9befc86af08f3b501ef46c1e17b38a2ba4e366a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2bbjx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n6mtz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:48Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:48 crc kubenswrapper[4731]: I1129 07:06:48.374469 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5rsbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4026306c62b322aa02e08b8ebd6f9d5d1eaa75e50
26c9b1fc4c8d3c1ff77e2a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubern
etes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9z2qj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5rsbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:48Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:48 crc kubenswrapper[4731]: I1129 07:06:48.388829 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sc4p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46a65d85-a3f6-4c1f-8a87-799ccfb861c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c8d9cccf59a8d03f8346fd5c
4403c377376ed7e429a1cd7b9ea0507a491342e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e803b8c855802ae1b136acaaf383df0a2f5efccc46c8605813d43a3c7c39a700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e803b8c855802ae1b136acaaf383df0a2f5efccc46c8605813d43a3c7c39a700\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53c810e72af2e060f6378cc0809584254c09ab79d506a8d1b4ff14cc9f005d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53c810e72af2e060f6378cc0809584254c09ab79d506a8d1b4ff14cc9f005d08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db06df6b97d6ad28e3d1df0b237670573e2dc84ad1e4975431d0d6fa74497617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d
7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db06df6b97d6ad28e3d1df0b237670573e2dc84ad1e4975431d0d6fa74497617\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e4e1564bb5cd4109016d9716311a4f93f63f7f4f694d1f021390c91495faab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22e4e1564bb5cd4109016d9716311a4f93f63f7f4f694d1f021390c91495faab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\
\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76097df468fcbf7cbc2a4d0a4a847310bcc371812fd7601716bb946d2ba43b1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76097df468fcbf7cbc2a4d0a4a847310bcc371812fd7601716bb946d2ba43b1f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2de6a57f63f8b266688eb7d7efef0c3f0fe74a9cf9911709b8c2f6d6cd2f587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2de6a57f63f8b266688eb7d7efef0c3f0fe74a9cf9911709b8c2f6d6cd2f587\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7sc4p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:48Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:48 crc kubenswrapper[4731]: I1129 07:06:48.400798 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:48Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:48 crc kubenswrapper[4731]: I1129 07:06:48.414678 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da8cd4c4826d5a431134fff8cd78b168bfededf913cc47058413e38aadd335ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-29T07:06:48Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:48 crc kubenswrapper[4731]: I1129 07:06:48.426340 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:48 crc kubenswrapper[4731]: I1129 07:06:48.426394 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:48 crc kubenswrapper[4731]: I1129 07:06:48.426407 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:48 crc kubenswrapper[4731]: I1129 07:06:48.426430 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:48 crc kubenswrapper[4731]: I1129 07:06:48.426444 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:48Z","lastTransitionTime":"2025-11-29T07:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:48 crc kubenswrapper[4731]: I1129 07:06:48.437507 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5198aca-b43e-4b59-8ec1-15b9d1cd4f4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0b6a87bb787bc87ec1a2b21918e754bda6b0f197bfdae20b010e8464cb9e70d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1938ff140a7af83ef53896ed4482dd3cf0a5429675ca28291de0915b8da27e81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b19f3d5d17a1f3552b659627294f3a47d667e2da2d3ee71e44fc876614f2f5a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea4dcd3b0a9fda14855ec9e54caa59fbbd8953c958ad8d0abc6ad50d8a9143e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f488e951e0b022155c26f1b4b73363e8e5b82ab6f14e03f9812fe279ff21d36\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1129 07:06:25.309042 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1031981952/tls.crt::/tmp/serving-cert-1031981952/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764399966\\\\\\\\\\\\\\\" (2025-11-29 07:06:06 +0000 UTC to 2025-12-29 07:06:07 +0000 UTC (now=2025-11-29 07:06:25.308983691 +0000 UTC))\\\\\\\"\\\\nI1129 07:06:25.309145 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1129 07:06:25.309167 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1129 07:06:25.309190 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764399977\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764399977\\\\\\\\\\\\\\\" (2025-11-29 06:06:17 +0000 UTC to 2026-11-29 06:06:17 +0000 UTC (now=2025-11-29 07:06:25.309162626 +0000 UTC))\\\\\\\"\\\\nI1129 07:06:25.309217 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1129 07:06:25.309274 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1129 07:06:25.309309 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1031981952/tls.crt::/tmp/serving-cert-1031981952/tls.key\\\\\\\"\\\\nI1129 07:06:25.309020 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1129 07:06:25.309457 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1129 07:06:25.309507 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1129 07:06:25.311335 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1277bda1e922e493f1c36b64a4584458ab41cee4b1930d2d6409a585be9e3b38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b7d71882ecd45557e02dfe61f5
72da923732714cd4480848340fbe72dabcd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34b7d71882ecd45557e02dfe61f572da923732714cd4480848340fbe72dabcd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:48Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:48 crc kubenswrapper[4731]: I1129 07:06:48.452409 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2302dbb7-38db-4752-a5d0-2d055da3aec3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b12c2e84a5d5a5e4b2dfdc3a2052706d4b969ccd45719e05be8f1dbec5555cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shf4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2ffc93896b04d748f6dfda932202450e20bc7b2
98cc0d79c0c6499ead481d8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shf4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rscr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:48Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:48 crc kubenswrapper[4731]: I1129 07:06:48.472896 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d4585c4-ac4a-4268-b25e-47509c17cfe2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c34e10091af35202da4f97804418c6232aaccd75303ec402ed3f52de19eba30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77f2885a107dcb4bca4dd71970f28cd5c82abd26240a9b47a25bab79d7a6cfca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a3bcebacca0fb7288976faa2ee468f1496339cc6deba0ce36b29d0537d493f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e2099f7c7618e48c7ca7d3a76e9e63e4af75068a72efb7b27b3565c270787f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ec6b9ad02e848a0669ac486da3f2ccc8ca363136fcd40618c45f16e64388bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd71ed13728cbf7ab7ade3184cf8b579a598046c6cccdd7796f757aa7fc1c0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab64db6427b4ff60f30e4a0358c77770dd407df7ce1ba82f898a50bb5b1b16ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55decf99f11003a6a7c796114439c20b042075e1b233f967fa4e758611e04f66\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:06:46Z\\\",\\\"message\\\":\\\"flector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:06:46.450664 5998 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:06:46.451031 5998 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1129 
07:06:46.451048 5998 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1129 07:06:46.451072 5998 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1129 07:06:46.451077 5998 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1129 07:06:46.451077 5998 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1129 07:06:46.451092 5998 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1129 07:06:46.451098 5998 handler.go:208] Removed *v1.Node event handler 2\\\\nI1129 07:06:46.451110 5998 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1129 07:06:46.451129 5998 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1129 07:06:46.451133 5998 handler.go:208] Removed *v1.Node event handler 7\\\\nI1129 07:06:46.451154 5998 factory.go:656] Stopping watch factory\\\\nI1129 07:06:46.451159 5998 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1129 07:06:46.451168 5998 ovnkube.go:599] Stopped ovnkube\\\\nI1129 
07:06:4\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"na
me\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a6b60ccbba0152ec034f29ebd79fc5f35ee869bcbec1983425f40df246c463c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b769a8a935c8eb447fe825cfb1b4c91be5abbe5992798442096dd3c00ce7c39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b769a8a935c8eb447fe825cfb1b4c91be5abbe5992798442096dd3c00ce7c39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4t5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:48Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:48 crc kubenswrapper[4731]: I1129 07:06:48.486428 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8tvx8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"719caf85-c94c-4dc2-b28f-f5c4ec29e79e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d765b2fbbea1c5491ee3ac49d3d217921d28d094b6204c0c9ca942f5975f7b8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cfkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8tvx8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:48Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:48 crc kubenswrapper[4731]: I1129 07:06:48.501591 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7d5hj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de2552c-90ca-42ab-94c0-365f2c2380d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l44ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l44ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7d5hj\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:48Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:48 crc kubenswrapper[4731]: I1129 07:06:48.529594 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:48 crc kubenswrapper[4731]: I1129 07:06:48.529660 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:48 crc kubenswrapper[4731]: I1129 07:06:48.529674 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:48 crc kubenswrapper[4731]: I1129 07:06:48.529695 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:48 crc kubenswrapper[4731]: I1129 07:06:48.529708 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:48Z","lastTransitionTime":"2025-11-29T07:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:48 crc kubenswrapper[4731]: I1129 07:06:48.632756 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:48 crc kubenswrapper[4731]: I1129 07:06:48.632822 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:48 crc kubenswrapper[4731]: I1129 07:06:48.632835 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:48 crc kubenswrapper[4731]: I1129 07:06:48.632859 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:48 crc kubenswrapper[4731]: I1129 07:06:48.632874 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:48Z","lastTransitionTime":"2025-11-29T07:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:48 crc kubenswrapper[4731]: I1129 07:06:48.736226 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:48 crc kubenswrapper[4731]: I1129 07:06:48.736285 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:48 crc kubenswrapper[4731]: I1129 07:06:48.736325 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:48 crc kubenswrapper[4731]: I1129 07:06:48.736347 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:48 crc kubenswrapper[4731]: I1129 07:06:48.736362 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:48Z","lastTransitionTime":"2025-11-29T07:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:06:48 crc kubenswrapper[4731]: I1129 07:06:48.804661 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-2pp9l"] Nov 29 07:06:48 crc kubenswrapper[4731]: I1129 07:06:48.805663 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2pp9l" Nov 29 07:06:48 crc kubenswrapper[4731]: E1129 07:06:48.805745 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2pp9l" podUID="944440c1-51b2-4c49-b5fd-4c024fc33ace" Nov 29 07:06:48 crc kubenswrapper[4731]: I1129 07:06:48.821265 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"174eea76-5d67-4e10-9a17-b4efc32676b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a567cf0e6a828fe031138f9f7874d01fe6e034cb814a3c2116c4918ddab71c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\
"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6101f1cec3dc38ec587fb109d42b09c353cf5ed6d89c9a2b799eb456b821cf58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5af4e6864674fc088ce1e153b29b66072dc3aae9b7c8c2c5dd3862a505dbac6a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86bdecb59f656fef2662f8fc78d427a4a9816fc211eedca957f20b3103f63a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:48Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:48 crc kubenswrapper[4731]: I1129 07:06:48.837986 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea9d7960c59f71d1f4141843ab22ff9910f442c6a92fd496e9923a11fd19578b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:48Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:48 crc kubenswrapper[4731]: I1129 07:06:48.839280 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:48 crc kubenswrapper[4731]: I1129 07:06:48.839311 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:48 crc kubenswrapper[4731]: I1129 07:06:48.839323 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:48 crc kubenswrapper[4731]: I1129 07:06:48.839341 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:48 crc kubenswrapper[4731]: I1129 07:06:48.839353 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:48Z","lastTransitionTime":"2025-11-29T07:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:48 crc kubenswrapper[4731]: I1129 07:06:48.851960 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n6mtz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dc0aca-1039-4a30-a83e-48bd320d0eae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0b16629d98d0b9d5ede0c57e9befc86af08f3b501ef46c1e17b38a2ba4e366a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2bbjx\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n6mtz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:48Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:48 crc kubenswrapper[4731]: I1129 07:06:48.865970 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/944440c1-51b2-4c49-b5fd-4c024fc33ace-metrics-certs\") pod \"network-metrics-daemon-2pp9l\" (UID: \"944440c1-51b2-4c49-b5fd-4c024fc33ace\") " pod="openshift-multus/network-metrics-daemon-2pp9l" Nov 29 07:06:48 crc kubenswrapper[4731]: I1129 07:06:48.866048 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkwn7\" (UniqueName: \"kubernetes.io/projected/944440c1-51b2-4c49-b5fd-4c024fc33ace-kube-api-access-zkwn7\") pod \"network-metrics-daemon-2pp9l\" (UID: \"944440c1-51b2-4c49-b5fd-4c024fc33ace\") " pod="openshift-multus/network-metrics-daemon-2pp9l" Nov 29 07:06:48 crc kubenswrapper[4731]: I1129 07:06:48.869255 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5rsbt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4026306c62b322aa02e08b8ebd6f9d5d1eaa75e5026c9b1fc4c8d3c1ff77e2a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9z2qj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5rsbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:48Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:48 crc kubenswrapper[4731]: I1129 07:06:48.890589 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sc4p" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46a65d85-a3f6-4c1f-8a87-799ccfb861c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c8d9cccf59a8d03f8346fd5c4403c377376ed7e429a1cd7b9ea0507a491342e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e803b8c855802ae1b136acaaf383df0a2f5efccc46c860581
3d43a3c7c39a700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e803b8c855802ae1b136acaaf383df0a2f5efccc46c8605813d43a3c7c39a700\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53c810e72af2e060f6378cc0809584254c09ab79d506a8d1b4ff14cc9f005d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53c810e72af2e060f6378cc0809584254c09ab79d506a8d1b4ff14cc9f005d08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:
06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db06df6b97d6ad28e3d1df0b237670573e2dc84ad1e4975431d0d6fa74497617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db06df6b97d6ad28e3d1df0b237670573e2dc84ad1e4975431d0d6fa74497617\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://22e4e1564bb5cd4109016d9716311a4f93f63f7f4f694d1f021390c91495faab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22e4e1564bb5cd4109016d9716311a4f93f63f7f4f694d1f021390c91495faab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76097df468fcbf7cbc2a4d0a4a847310bcc371812fd7601716bb946d2ba43b1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76097df468fcbf7cbc2a4d0a4a847310bcc371812fd7601716bb946d2ba43b1f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:43Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2de6a57f63f8b266688eb7d7efef0c3f0fe74a9cf9911709b8c2f6d6cd2f587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2de6a57f63f8b266688eb7d7efef0c3f0fe74a9cf9911709b8c2f6d6cd2f587\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7sc4p\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:48Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:48 crc kubenswrapper[4731]: I1129 07:06:48.906394 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2pp9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"944440c1-51b2-4c49-b5fd-4c024fc33ace\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkwn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkwn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2pp9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:48Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:48 crc 
kubenswrapper[4731]: I1129 07:06:48.925217 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:48Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:48 crc kubenswrapper[4731]: I1129 07:06:48.942326 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:48 crc kubenswrapper[4731]: I1129 07:06:48.942373 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:48 crc kubenswrapper[4731]: I1129 07:06:48.942383 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:48 crc kubenswrapper[4731]: I1129 07:06:48.942402 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:48 crc kubenswrapper[4731]: I1129 07:06:48.942415 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:48Z","lastTransitionTime":"2025-11-29T07:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:06:48 crc kubenswrapper[4731]: I1129 07:06:48.944250 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da8cd4c4826d5a431134fff8cd78b168bfededf913cc47058413e38aadd335ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:48Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:48 crc kubenswrapper[4731]: I1129 07:06:48.959791 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7d5hj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de2552c-90ca-42ab-94c0-365f2c2380d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l44ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l44ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7d5hj\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:48Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:48 crc kubenswrapper[4731]: I1129 07:06:48.966885 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/944440c1-51b2-4c49-b5fd-4c024fc33ace-metrics-certs\") pod \"network-metrics-daemon-2pp9l\" (UID: \"944440c1-51b2-4c49-b5fd-4c024fc33ace\") " pod="openshift-multus/network-metrics-daemon-2pp9l" Nov 29 07:06:48 crc kubenswrapper[4731]: I1129 07:06:48.966986 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkwn7\" (UniqueName: \"kubernetes.io/projected/944440c1-51b2-4c49-b5fd-4c024fc33ace-kube-api-access-zkwn7\") pod \"network-metrics-daemon-2pp9l\" (UID: \"944440c1-51b2-4c49-b5fd-4c024fc33ace\") " pod="openshift-multus/network-metrics-daemon-2pp9l" Nov 29 07:06:48 crc kubenswrapper[4731]: E1129 07:06:48.967122 4731 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 29 07:06:48 crc kubenswrapper[4731]: E1129 07:06:48.967229 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/944440c1-51b2-4c49-b5fd-4c024fc33ace-metrics-certs podName:944440c1-51b2-4c49-b5fd-4c024fc33ace nodeName:}" failed. No retries permitted until 2025-11-29 07:06:49.467203396 +0000 UTC m=+48.357564499 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/944440c1-51b2-4c49-b5fd-4c024fc33ace-metrics-certs") pod "network-metrics-daemon-2pp9l" (UID: "944440c1-51b2-4c49-b5fd-4c024fc33ace") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 29 07:06:48 crc kubenswrapper[4731]: I1129 07:06:48.976843 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5198aca-b43e-4b59-8ec1-15b9d1cd4f4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0b6a87bb787bc87ec1a2b21918e754bda6b0f197bfdae20b010e8464cb9e70d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1938ff140a7af83ef53896ed4482dd3cf0a5429675ca28291de0915b8da27e81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b19f3d5d17a1f3552b659627294f3a47d667e2da2d3ee71e44fc876614f2f5a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea4dcd3b0a9fda14855ec9e54caa59fbbd8953c958ad8d0abc6ad50d8a9143e5\\\",\\\"image\\\":\\\"quay
.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f488e951e0b022155c26f1b4b73363e8e5b82ab6f14e03f9812fe279ff21d36\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1129 07:06:25.309042 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1031981952/tls.crt::/tmp/serving-cert-1031981952/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764399966\\\\\\\\\\\\\\\" (2025-11-29 07:06:06 +0000 UTC to 2025-12-29 07:06:07 +0000 UTC (now=2025-11-29 07:06:25.308983691 +0000 UTC))\\\\\\\"\\\\nI1129 07:06:25.309145 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1129 07:06:25.309167 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1129 07:06:25.309190 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764399977\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764399977\\\\\\\\\\\\\\\" (2025-11-29 06:06:17 +0000 UTC to 2026-11-29 06:06:17 +0000 UTC (now=2025-11-29 07:06:25.309162626 +0000 UTC))\\\\\\\"\\\\nI1129 07:06:25.309217 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1129 
07:06:25.309274 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1129 07:06:25.309309 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1031981952/tls.crt::/tmp/serving-cert-1031981952/tls.key\\\\\\\"\\\\nI1129 07:06:25.309020 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1129 07:06:25.309457 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1129 07:06:25.309507 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1129 07:06:25.311335 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1277bda1e922e493f1c36b64a4584458ab41cee4b1930d2d6409a585be9e3b38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:
06:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b7d71882ecd45557e02dfe61f572da923732714cd4480848340fbe72dabcd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34b7d71882ecd45557e02dfe61f572da923732714cd4480848340fbe72dabcd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:48Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:48 crc kubenswrapper[4731]: I1129 07:06:48.985447 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkwn7\" (UniqueName: \"kubernetes.io/projected/944440c1-51b2-4c49-b5fd-4c024fc33ace-kube-api-access-zkwn7\") pod \"network-metrics-daemon-2pp9l\" (UID: \"944440c1-51b2-4c49-b5fd-4c024fc33ace\") " pod="openshift-multus/network-metrics-daemon-2pp9l" Nov 29 07:06:48 crc 
kubenswrapper[4731]: I1129 07:06:48.993122 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2302dbb7-38db-4752-a5d0-2d055da3aec3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b12c2e84a5d5a5e4b2dfdc3a2052706d4b969ccd45719e05be8f1dbec5555cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shf4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2ffc93896b04d748f6dfda932202450e20bc7b298cc0d79c0c6499ead481d8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shf4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rscr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:48Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:49 crc kubenswrapper[4731]: I1129 07:06:49.014827 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d4585c4-ac4a-4268-b25e-47509c17cfe2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c34e10091af35202da4f97804418c6232aaccd75303ec402ed3f52de19eba30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77f2885a107dcb4bca4dd71970f28cd5c82abd26240a9b47a25bab79d7a6cfca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a3bcebacca0fb7288976faa2ee468f1496339cc6deba0ce36b29d0537d493f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e2099f7c7618e48c7ca7d3a76e9e63e4af75068a72efb7b27b3565c270787f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ec6b9ad02e848a0669ac486da3f2ccc8ca363136fcd40618c45f16e64388bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd71ed13728cbf7ab7ade3184cf8b579a598046c6cccdd7796f757aa7fc1c0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab64db6427b4ff60f30e4a0358c77770dd407df7ce1ba82f898a50bb5b1b16ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55decf99f11003a6a7c796114439c20b042075e1b233f967fa4e758611e04f66\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:06:46Z\\\",\\\"message\\\":\\\"flector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:06:46.450664 5998 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:06:46.451031 5998 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1129 
07:06:46.451048 5998 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1129 07:06:46.451072 5998 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1129 07:06:46.451077 5998 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1129 07:06:46.451077 5998 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1129 07:06:46.451092 5998 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1129 07:06:46.451098 5998 handler.go:208] Removed *v1.Node event handler 2\\\\nI1129 07:06:46.451110 5998 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1129 07:06:46.451129 5998 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1129 07:06:46.451133 5998 handler.go:208] Removed *v1.Node event handler 7\\\\nI1129 07:06:46.451154 5998 factory.go:656] Stopping watch factory\\\\nI1129 07:06:46.451159 5998 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1129 07:06:46.451168 5998 ovnkube.go:599] Stopped ovnkube\\\\nI1129 
07:06:4\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"na
me\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a6b60ccbba0152ec034f29ebd79fc5f35ee869bcbec1983425f40df246c463c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b769a8a935c8eb447fe825cfb1b4c91be5abbe5992798442096dd3c00ce7c39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b769a8a935c8eb447fe825cfb1b4c91be5abbe5992798442096dd3c00ce7c39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4t5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:49Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:49 crc kubenswrapper[4731]: I1129 07:06:49.028367 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8tvx8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"719caf85-c94c-4dc2-b28f-f5c4ec29e79e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d765b2fbbea1c5491ee3ac49d3d217921d28d094b6204c0c9ca942f5975f7b8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cfkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8tvx8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:49Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:49 crc kubenswrapper[4731]: I1129 07:06:49.045138 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:49Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:49 crc kubenswrapper[4731]: I1129 07:06:49.045417 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:49 crc kubenswrapper[4731]: I1129 07:06:49.045456 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:49 crc kubenswrapper[4731]: I1129 07:06:49.045468 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:49 crc kubenswrapper[4731]: I1129 07:06:49.045489 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:49 crc kubenswrapper[4731]: I1129 07:06:49.045499 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:49Z","lastTransitionTime":"2025-11-29T07:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:06:49 crc kubenswrapper[4731]: I1129 07:06:49.062944 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:49Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:49 crc kubenswrapper[4731]: I1129 07:06:49.080666 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e77534f53f3d0ba9457f1e42e081f8fa9ddeff238db875686b42dd35fca480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6830f693b439b32e14b4e825fc51d03dddc4834afc7f2fb9c403ebdc706acb7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:49Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:49 crc kubenswrapper[4731]: I1129 07:06:49.148849 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:49 crc kubenswrapper[4731]: I1129 07:06:49.148893 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:49 crc kubenswrapper[4731]: I1129 07:06:49.148904 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:49 crc kubenswrapper[4731]: I1129 07:06:49.148924 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:49 crc kubenswrapper[4731]: I1129 07:06:49.148937 4731 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:49Z","lastTransitionTime":"2025-11-29T07:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:06:49 crc kubenswrapper[4731]: I1129 07:06:49.252034 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:49 crc kubenswrapper[4731]: I1129 07:06:49.252082 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:49 crc kubenswrapper[4731]: I1129 07:06:49.252098 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:49 crc kubenswrapper[4731]: I1129 07:06:49.252119 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:49 crc kubenswrapper[4731]: I1129 07:06:49.252132 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:49Z","lastTransitionTime":"2025-11-29T07:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:49 crc kubenswrapper[4731]: I1129 07:06:49.268763 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7d5hj" event={"ID":"6de2552c-90ca-42ab-94c0-365f2c2380d5","Type":"ContainerStarted","Data":"9cf95f33df0c02101f10f47b6794395211997d2a9741a50b62be363fb5b96dd1"} Nov 29 07:06:49 crc kubenswrapper[4731]: I1129 07:06:49.268817 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7d5hj" event={"ID":"6de2552c-90ca-42ab-94c0-365f2c2380d5","Type":"ContainerStarted","Data":"ca701ec73409c337cc55b1606c0f5def9e370c9c47b6d8f34f05e799ebc3ff36"} Nov 29 07:06:49 crc kubenswrapper[4731]: I1129 07:06:49.270768 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x4t5j_7d4585c4-ac4a-4268-b25e-47509c17cfe2/ovnkube-controller/1.log" Nov 29 07:06:49 crc kubenswrapper[4731]: I1129 07:06:49.271393 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x4t5j_7d4585c4-ac4a-4268-b25e-47509c17cfe2/ovnkube-controller/0.log" Nov 29 07:06:49 crc kubenswrapper[4731]: I1129 07:06:49.274234 4731 generic.go:334] "Generic (PLEG): container finished" podID="7d4585c4-ac4a-4268-b25e-47509c17cfe2" containerID="ab64db6427b4ff60f30e4a0358c77770dd407df7ce1ba82f898a50bb5b1b16ff" exitCode=1 Nov 29 07:06:49 crc kubenswrapper[4731]: I1129 07:06:49.274278 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" event={"ID":"7d4585c4-ac4a-4268-b25e-47509c17cfe2","Type":"ContainerDied","Data":"ab64db6427b4ff60f30e4a0358c77770dd407df7ce1ba82f898a50bb5b1b16ff"} Nov 29 07:06:49 crc kubenswrapper[4731]: I1129 07:06:49.274330 4731 scope.go:117] "RemoveContainer" containerID="55decf99f11003a6a7c796114439c20b042075e1b233f967fa4e758611e04f66" Nov 29 07:06:49 crc kubenswrapper[4731]: I1129 07:06:49.274940 
4731 scope.go:117] "RemoveContainer" containerID="ab64db6427b4ff60f30e4a0358c77770dd407df7ce1ba82f898a50bb5b1b16ff" Nov 29 07:06:49 crc kubenswrapper[4731]: E1129 07:06:49.275110 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-x4t5j_openshift-ovn-kubernetes(7d4585c4-ac4a-4268-b25e-47509c17cfe2)\"" pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" podUID="7d4585c4-ac4a-4268-b25e-47509c17cfe2" Nov 29 07:06:49 crc kubenswrapper[4731]: I1129 07:06:49.288723 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod 
was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:49Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:49 crc kubenswrapper[4731]: I1129 07:06:49.302038 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:49Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:49 crc kubenswrapper[4731]: I1129 07:06:49.314296 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e77534f53f3d0ba9457f1e42e081f8fa9ddeff238db875686b42dd35fca480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6830f693b439b32e14b4e825fc51d03dddc4834afc7f2fb9c403ebdc706acb7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:49Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:49 crc kubenswrapper[4731]: I1129 07:06:49.328730 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sc4p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"46a65d85-a3f6-4c1f-8a87-799ccfb861c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c8d9cccf59a8d03f8346fd5c4403c377376ed7e429a1cd7b9ea0507a491342e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e803b8c855802ae1b136acaaf383df0a2f5efccc46c8605813d43a3c7c39a700\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e803b8c855802ae1b136acaaf383df0a2f5efccc46c8605813d43a3c7c39a700\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53c810e72af2e060f6378cc0809584254c09ab79d506a8d1b4ff14cc9f005d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53c810e72af2e060f6378cc0809584254c09ab79d506a8d1b4ff14cc9f005d08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:36Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db06df6b97d6ad28e3d1df0b237670573e2dc84ad1e4975431d0d6fa74497617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db06df6b97d6ad28e3d1df0b237670573e2dc84ad1e4975431d0d6fa74497617\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e4e
1564bb5cd4109016d9716311a4f93f63f7f4f694d1f021390c91495faab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22e4e1564bb5cd4109016d9716311a4f93f63f7f4f694d1f021390c91495faab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76097df468fcbf7cbc2a4d0a4a847310bcc371812fd7601716bb946d2ba43b1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76097df468fcbf7cbc2a4d0a4a847310bcc371812fd7601716bb946d2ba43b1f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:43Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2de6a57f63f8b266688eb7d7efef0c3f0fe74a9cf9911709b8c2f6d6cd2f587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2de6a57f63f8b266688eb7d7efef0c3f0fe74a9cf9911709b8c2f6d6cd2f587\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7sc4p\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:49Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:49 crc kubenswrapper[4731]: I1129 07:06:49.341038 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2pp9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"944440c1-51b2-4c49-b5fd-4c024fc33ace\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkwn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkwn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2pp9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:49Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:49 crc 
kubenswrapper[4731]: I1129 07:06:49.355091 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:49 crc kubenswrapper[4731]: I1129 07:06:49.355164 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:49 crc kubenswrapper[4731]: I1129 07:06:49.355177 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:49 crc kubenswrapper[4731]: I1129 07:06:49.355197 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:49 crc kubenswrapper[4731]: I1129 07:06:49.355210 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:49Z","lastTransitionTime":"2025-11-29T07:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:49 crc kubenswrapper[4731]: I1129 07:06:49.358030 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"174eea76-5d67-4e10-9a17-b4efc32676b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a567cf0e6a828fe031138f9f7874d01fe6e034cb814a3c2116c4918ddab71c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6101f1cec3d
c38ec587fb109d42b09c353cf5ed6d89c9a2b799eb456b821cf58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5af4e6864674fc088ce1e153b29b66072dc3aae9b7c8c2c5dd3862a505dbac6a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86bdecb59f656fef2662f8fc78d427a4a9816fc211eedca957f20b3103f63a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:49Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:49 crc kubenswrapper[4731]: I1129 07:06:49.372647 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea9d7960c59f71d1f4141843ab22ff9910f442c6a92fd496e9923a11fd19578b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:49Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:49 crc kubenswrapper[4731]: I1129 07:06:49.383619 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n6mtz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dc0aca-1039-4a30-a83e-48bd320d0eae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0b16629d98d0b9d5ede0c57e9befc86af08f3b501ef46c1e17b38a2ba4e366a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2bbjx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n6mtz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:49Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:49 crc kubenswrapper[4731]: I1129 07:06:49.397898 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5rsbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4026306c62b322aa02e08b8ebd6f9d5d1eaa75e50
26c9b1fc4c8d3c1ff77e2a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubern
etes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9z2qj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5rsbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:49Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:49 crc kubenswrapper[4731]: I1129 07:06:49.414957 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:49Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:49 crc kubenswrapper[4731]: I1129 07:06:49.430898 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da8cd4c4826d5a431134fff8cd78b168bfededf913cc47058413e38aadd335ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-29T07:06:49Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:49 crc kubenswrapper[4731]: I1129 07:06:49.454999 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d4585c4-ac4a-4268-b25e-47509c17cfe2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c34e10091af35202da4f97804418c6232aaccd75303ec402ed3f52de19eba30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77f2885a107dcb4bca4dd71970f28cd5c82abd26240a9b47a25bab79d7a6cfca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a3bcebacca0fb7288976faa2ee468f1496339cc6deba0ce36b29d0537d493f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e2099f7c7618e48c7ca7d3a76e9e63e4af75068a72efb7b27b3565c270787f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ec6b9ad02e848a0669ac486da3f2ccc8ca363136fcd40618c45f16e64388bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd71ed13728cbf7ab7ade3184cf8b579a598046c6cccdd7796f757aa7fc1c0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab64db6427b4ff60f30e4a0358c77770dd407df7ce1ba82f898a50bb5b1b16ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55decf99f11003a6a7c796114439c20b042075e1b233f967fa4e758611e04f66\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:06:46Z\\\",\\\"message\\\":\\\"flector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:06:46.450664 5998 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:06:46.451031 5998 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1129 
07:06:46.451048 5998 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1129 07:06:46.451072 5998 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1129 07:06:46.451077 5998 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1129 07:06:46.451077 5998 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1129 07:06:46.451092 5998 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1129 07:06:46.451098 5998 handler.go:208] Removed *v1.Node event handler 2\\\\nI1129 07:06:46.451110 5998 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1129 07:06:46.451129 5998 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1129 07:06:46.451133 5998 handler.go:208] Removed *v1.Node event handler 7\\\\nI1129 07:06:46.451154 5998 factory.go:656] Stopping watch factory\\\\nI1129 07:06:46.451159 5998 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1129 07:06:46.451168 5998 ovnkube.go:599] Stopped ovnkube\\\\nI1129 
07:06:4\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"na
me\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a6b60ccbba0152ec034f29ebd79fc5f35ee869bcbec1983425f40df246c463c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b769a8a935c8eb447fe825cfb1b4c91be5abbe5992798442096dd3c00ce7c39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b769a8a935c8eb447fe825cfb1b4c91be5abbe5992798442096dd3c00ce7c39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4t5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:49Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:49 crc kubenswrapper[4731]: I1129 07:06:49.457547 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:49 crc kubenswrapper[4731]: I1129 07:06:49.457594 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:49 crc kubenswrapper[4731]: I1129 07:06:49.457606 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:49 crc kubenswrapper[4731]: I1129 07:06:49.457623 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:49 crc kubenswrapper[4731]: I1129 07:06:49.457632 4731 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:49Z","lastTransitionTime":"2025-11-29T07:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:06:49 crc kubenswrapper[4731]: I1129 07:06:49.469391 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8tvx8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"719caf85-c94c-4dc2-b28f-f5c4ec29e79e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d765b2fbbea1c5491ee3ac49d3d217921d28d094b6204c0c9ca942f5975f7b8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cfkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8tvx8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:49Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:49 crc kubenswrapper[4731]: I1129 07:06:49.471876 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/944440c1-51b2-4c49-b5fd-4c024fc33ace-metrics-certs\") pod \"network-metrics-daemon-2pp9l\" (UID: \"944440c1-51b2-4c49-b5fd-4c024fc33ace\") " pod="openshift-multus/network-metrics-daemon-2pp9l" Nov 29 07:06:49 crc kubenswrapper[4731]: E1129 07:06:49.472054 4731 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 29 07:06:49 crc kubenswrapper[4731]: E1129 07:06:49.472168 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/944440c1-51b2-4c49-b5fd-4c024fc33ace-metrics-certs podName:944440c1-51b2-4c49-b5fd-4c024fc33ace nodeName:}" failed. 
No retries permitted until 2025-11-29 07:06:50.472143543 +0000 UTC m=+49.362504646 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/944440c1-51b2-4c49-b5fd-4c024fc33ace-metrics-certs") pod "network-metrics-daemon-2pp9l" (UID: "944440c1-51b2-4c49-b5fd-4c024fc33ace") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 29 07:06:49 crc kubenswrapper[4731]: I1129 07:06:49.481382 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7d5hj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de2552c-90ca-42ab-94c0-365f2c2380d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca701ec73409c337cc55b1606c0f5def9e370c9c47b6d8f34f05e799ebc3ff36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l44ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cf95f33df0c02101f10f47b6794395211997d2a9741a50b62be363fb5b96dd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l44ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7d5hj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:49Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:49 crc kubenswrapper[4731]: I1129 07:06:49.499156 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5198aca-b43e-4b59-8ec1-15b9d1cd4f4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0b6a87bb787bc87ec1a2b21918e754bda6b0f197bfdae20b010e8464cb9e70d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mou
ntPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1938ff140a7af83ef53896ed4482dd3cf0a5429675ca28291de0915b8da27e81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b19f3d5d17a1f3552b659627294f3a47d667e2da2d3ee71e44fc876614f2f5a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea4dcd3b0a9fda14855ec9e54caa59fbbd8953c958ad8d0abc6ad50d8a9143e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f894
5c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f488e951e0b022155c26f1b4b73363e8e5b82ab6f14e03f9812fe279ff21d36\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1129 07:06:25.309042 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1031981952/tls.crt::/tmp/serving-cert-1031981952/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764399966\\\\\\\\\\\\\\\" (2025-11-29 07:06:06 +0000 UTC to 2025-12-29 07:06:07 +0000 UTC (now=2025-11-29 07:06:25.308983691 +0000 UTC))\\\\\\\"\\\\nI1129 07:06:25.309145 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1129 07:06:25.309167 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1129 07:06:25.309190 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764399977\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764399977\\\\\\\\\\\\\\\" (2025-11-29 06:06:17 +0000 UTC to 2026-11-29 06:06:17 +0000 UTC (now=2025-11-29 07:06:25.309162626 +0000 UTC))\\\\\\\"\\\\nI1129 07:06:25.309217 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1129 07:06:25.309274 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be 
initiated\\\\nI1129 07:06:25.309309 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1031981952/tls.crt::/tmp/serving-cert-1031981952/tls.key\\\\\\\"\\\\nI1129 07:06:25.309020 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1129 07:06:25.309457 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1129 07:06:25.309507 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1129 07:06:25.311335 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1277bda1e922e493f1c36b64a4584458ab41cee4b1930d2d6409a585be9e3b38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b7d71882ecd45557e02dfe61f572da923732714cd4480848340fbe72dabcd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34b7d71882ecd45557e02dfe61f572da923732714cd4480848340fbe72dabcd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:49Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:49 crc kubenswrapper[4731]: I1129 07:06:49.512609 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2302dbb7-38db-4752-a5d0-2d055da3aec3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b12c2e84a5d5a5e4b2dfdc3a2052706d4b969ccd45719e05be8f1dbec5555cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shf4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2ffc93896b04d748f6dfda932202450e20bc7b2
98cc0d79c0c6499ead481d8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shf4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rscr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:49Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:49 crc kubenswrapper[4731]: I1129 07:06:49.526683 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:49Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:49 crc kubenswrapper[4731]: I1129 07:06:49.540461 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:49Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:49 crc kubenswrapper[4731]: I1129 07:06:49.557037 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e77534f53f3d0ba9457f1e42e081f8fa9ddeff238db875686b42dd35fca480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6830f693b439b32e14b4e825fc51d03dddc4834afc7f2fb9c403ebdc706acb7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:49Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:49 crc kubenswrapper[4731]: I1129 07:06:49.559895 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:49 crc kubenswrapper[4731]: I1129 07:06:49.559928 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:49 crc kubenswrapper[4731]: I1129 07:06:49.559941 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:49 crc kubenswrapper[4731]: I1129 07:06:49.559961 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:49 crc kubenswrapper[4731]: I1129 07:06:49.559974 4731 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:49Z","lastTransitionTime":"2025-11-29T07:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:06:49 crc kubenswrapper[4731]: I1129 07:06:49.569217 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2pp9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"944440c1-51b2-4c49-b5fd-4c024fc33ace\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkwn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkwn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2pp9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:49Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:49 crc 
kubenswrapper[4731]: I1129 07:06:49.582242 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"174eea76-5d67-4e10-9a17-b4efc32676b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a567cf0e6a828fe031138f9f7874d01fe6e034cb814a3c2116c4918ddab71c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6101f1cec3dc38ec587fb109d42b09c353cf5ed6d89c9a2b799eb456b821cf58\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5af4e6864674fc088ce1e153b29b66072dc3aae9b7c8c2c5dd3862a505dbac6a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86bdecb59f656fef2662f8fc78d427a4a9816fc211eedca957f20b3103f63a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:49Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:49 crc kubenswrapper[4731]: I1129 07:06:49.595324 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea9d7960c59f71d1f4141843ab22ff9910f442c6a92fd496e9923a11fd19578b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:49Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:49 crc kubenswrapper[4731]: I1129 07:06:49.606468 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n6mtz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dc0aca-1039-4a30-a83e-48bd320d0eae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0b16629d98d0b9d5ede0c57e9befc86af08f3b501ef46c1e17b38a2ba4e366a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2bbjx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n6mtz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:49Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:49 crc kubenswrapper[4731]: I1129 07:06:49.622374 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5rsbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4026306c62b322aa02e08b8ebd6f9d5d1eaa75e50
26c9b1fc4c8d3c1ff77e2a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubern
etes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9z2qj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5rsbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:49Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:49 crc kubenswrapper[4731]: I1129 07:06:49.639055 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sc4p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46a65d85-a3f6-4c1f-8a87-799ccfb861c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c8d9cccf59a8d03f8346fd5c
4403c377376ed7e429a1cd7b9ea0507a491342e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e803b8c855802ae1b136acaaf383df0a2f5efccc46c8605813d43a3c7c39a700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e803b8c855802ae1b136acaaf383df0a2f5efccc46c8605813d43a3c7c39a700\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53c810e72af2e060f6378cc0809584254c09ab79d506a8d1b4ff14cc9f005d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53c810e72af2e060f6378cc0809584254c09ab79d506a8d1b4ff14cc9f005d08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db06df6b97d6ad28e3d1df0b237670573e2dc84ad1e4975431d0d6fa74497617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d
7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db06df6b97d6ad28e3d1df0b237670573e2dc84ad1e4975431d0d6fa74497617\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e4e1564bb5cd4109016d9716311a4f93f63f7f4f694d1f021390c91495faab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22e4e1564bb5cd4109016d9716311a4f93f63f7f4f694d1f021390c91495faab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\
\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76097df468fcbf7cbc2a4d0a4a847310bcc371812fd7601716bb946d2ba43b1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76097df468fcbf7cbc2a4d0a4a847310bcc371812fd7601716bb946d2ba43b1f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2de6a57f63f8b266688eb7d7efef0c3f0fe74a9cf9911709b8c2f6d6cd2f587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2de6a57f63f8b266688eb7d7efef0c3f0fe74a9cf9911709b8c2f6d6cd2f587\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7sc4p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:49Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:49 crc kubenswrapper[4731]: I1129 07:06:49.651646 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:49Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:49 crc kubenswrapper[4731]: I1129 07:06:49.663672 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:49 crc kubenswrapper[4731]: I1129 07:06:49.663715 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 29 07:06:49 crc kubenswrapper[4731]: I1129 07:06:49.663729 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:49 crc kubenswrapper[4731]: I1129 07:06:49.663750 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:49 crc kubenswrapper[4731]: I1129 07:06:49.663764 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:49Z","lastTransitionTime":"2025-11-29T07:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:06:49 crc kubenswrapper[4731]: I1129 07:06:49.664663 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da8cd4c4826d5a431134fff8cd78b168bfededf913cc47058413e38aadd335ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-29T07:06:49Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:49 crc kubenswrapper[4731]: I1129 07:06:49.677063 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8tvx8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"719caf85-c94c-4dc2-b28f-f5c4ec29e79e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d765b2fbbea1c5491ee3ac49d3d217921d28d094b6204c0c9ca942f5975f7b8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cfkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8tvx8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:49Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:49 crc kubenswrapper[4731]: I1129 07:06:49.690155 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7d5hj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de2552c-90ca-42ab-94c0-365f2c2380d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca701ec73409c337cc55b1606c0f5def9e370c9c47b6d8f34f05e799ebc3ff36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l44ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cf95f33df0c02101f10f47b6794395211997
d2a9741a50b62be363fb5b96dd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l44ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7d5hj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:49Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:49 crc kubenswrapper[4731]: I1129 07:06:49.704649 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5198aca-b43e-4b59-8ec1-15b9d1cd4f4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0b6a87bb787bc87ec1a2b21918e754bda6b0f197bfdae20b010e8464cb9e70d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1938ff140a7af83ef53896ed4482dd3cf0a5429675ca28291de0915b8da27e81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b19f3d5d17a1f3552b659627294f3a47d667e2da2d3ee71e44fc876614f2f5a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea4dcd3b0a9fda14855ec9e54caa59fbbd8953c958ad8d0abc6ad50d8a9143e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f488e951e0b022155c26f1b4b73363e8e5b82ab6f14e03f9812fe279ff21d36\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:06:25Z\\\"
,\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1129 07:06:25.309042 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1031981952/tls.crt::/tmp/serving-cert-1031981952/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764399966\\\\\\\\\\\\\\\" (2025-11-29 07:06:06 +0000 UTC to 2025-12-29 07:06:07 +0000 UTC (now=2025-11-29 07:06:25.308983691 +0000 UTC))\\\\\\\"\\\\nI1129 07:06:25.309145 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1129 07:06:25.309167 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1129 07:06:25.309190 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764399977\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764399977\\\\\\\\\\\\\\\" (2025-11-29 06:06:17 +0000 UTC to 2026-11-29 06:06:17 +0000 UTC (now=2025-11-29 07:06:25.309162626 +0000 UTC))\\\\\\\"\\\\nI1129 07:06:25.309217 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1129 07:06:25.309274 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1129 07:06:25.309309 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1031981952/tls.crt::/tmp/serving-cert-1031981952/tls.key\\\\\\\"\\\\nI1129 07:06:25.309020 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1129 07:06:25.309457 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1129 07:06:25.309507 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1129 07:06:25.311335 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1277bda1e922e493f1c36b64a4584458ab41cee4b1930d2d6409a585be9e3b38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b7d71882ecd45557e02dfe61f572da923732714cd4480848340fbe72dabcd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34b7d71882ecd45557e02dfe61f572da923732714cd4480848340fbe72dabcd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:49Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:49 crc kubenswrapper[4731]: I1129 07:06:49.716558 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2302dbb7-38db-4752-a5d0-2d055da3aec3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b12c2e84a5d5a5e4b2dfdc3a2052706d4b969ccd45719e05be8f1dbec5555cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shf4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2ffc93896b04d748f6dfda932202450e20bc7b2
98cc0d79c0c6499ead481d8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shf4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rscr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:49Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:49 crc kubenswrapper[4731]: I1129 07:06:49.737527 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d4585c4-ac4a-4268-b25e-47509c17cfe2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c34e10091af35202da4f97804418c6232aaccd75303ec402ed3f52de19eba30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77f2885a107dcb4bca4dd71970f28cd5c82abd26240a9b47a25bab79d7a6cfca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a3bcebacca0fb7288976faa2ee468f1496339cc6deba0ce36b29d0537d493f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e2099f7c7618e48c7ca7d3a76e9e63e4af75068a72efb7b27b3565c270787f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ec6b9ad02e848a0669ac486da3f2ccc8ca363136fcd40618c45f16e64388bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd71ed13728cbf7ab7ade3184cf8b579a598046c6cccdd7796f757aa7fc1c0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab64db6427b4ff60f30e4a0358c77770dd407df7ce1ba82f898a50bb5b1b16ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55decf99f11003a6a7c796114439c20b042075e1b233f967fa4e758611e04f66\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:06:46Z\\\",\\\"message\\\":\\\"flector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:06:46.450664 5998 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:06:46.451031 5998 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1129 
07:06:46.451048 5998 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1129 07:06:46.451072 5998 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1129 07:06:46.451077 5998 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1129 07:06:46.451077 5998 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1129 07:06:46.451092 5998 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1129 07:06:46.451098 5998 handler.go:208] Removed *v1.Node event handler 2\\\\nI1129 07:06:46.451110 5998 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1129 07:06:46.451129 5998 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1129 07:06:46.451133 5998 handler.go:208] Removed *v1.Node event handler 7\\\\nI1129 07:06:46.451154 5998 factory.go:656] Stopping watch factory\\\\nI1129 07:06:46.451159 5998 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1129 07:06:46.451168 5998 ovnkube.go:599] Stopped ovnkube\\\\nI1129 07:06:4\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab64db6427b4ff60f30e4a0358c77770dd407df7ce1ba82f898a50bb5b1b16ff\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:06:48Z\\\",\\\"message\\\":\\\"penshift-monitoring/cluster-monitoring-operator for network=default\\\\nI1129 07:06:48.080586 6132 services_controller.go:434] Service openshift-kube-scheduler-operator/metrics retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{metrics openshift-kube-scheduler-operator 760c7338-f39e-4136-9d29-d6fccbd607c1 4364 0 2025-02-23 05:12:18 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[app:openshift-kube-scheduler-operator] map[exclude.release.openshift.io/internal-openshift-hosted:true 
include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-secret-name:kube-scheduler-operator-serving-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc005f6d66b \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: openshift-kube-scheduler-operator,},ClusterIP:10.217.4.233,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni
/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a6b60ccbba0152ec034f29ebd79fc5f35ee869bcbec1983425f40df246c463c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b769a8a935c8eb447fe825cfb1b4c91be5abbe5992798442096dd3c00ce7c39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b769a8a935c8eb447fe825cfb1b4c91be5abbe5992798442096dd3c00ce7c39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4t5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:49Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:49 crc kubenswrapper[4731]: I1129 07:06:49.768043 4731 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:49 crc kubenswrapper[4731]: I1129 07:06:49.768411 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:49 crc kubenswrapper[4731]: I1129 07:06:49.768503 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:49 crc kubenswrapper[4731]: I1129 07:06:49.768621 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:49 crc kubenswrapper[4731]: I1129 07:06:49.768725 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:49Z","lastTransitionTime":"2025-11-29T07:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:06:49 crc kubenswrapper[4731]: I1129 07:06:49.805912 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:06:49 crc kubenswrapper[4731]: I1129 07:06:49.806044 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:06:49 crc kubenswrapper[4731]: I1129 07:06:49.805912 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:06:49 crc kubenswrapper[4731]: E1129 07:06:49.806125 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:06:49 crc kubenswrapper[4731]: E1129 07:06:49.806230 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:06:49 crc kubenswrapper[4731]: E1129 07:06:49.806373 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:06:49 crc kubenswrapper[4731]: I1129 07:06:49.872225 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:49 crc kubenswrapper[4731]: I1129 07:06:49.872291 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:49 crc kubenswrapper[4731]: I1129 07:06:49.872304 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:49 crc kubenswrapper[4731]: I1129 07:06:49.872324 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:49 crc kubenswrapper[4731]: I1129 07:06:49.872344 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:49Z","lastTransitionTime":"2025-11-29T07:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:49 crc kubenswrapper[4731]: I1129 07:06:49.974880 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:49 crc kubenswrapper[4731]: I1129 07:06:49.974917 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:49 crc kubenswrapper[4731]: I1129 07:06:49.974935 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:49 crc kubenswrapper[4731]: I1129 07:06:49.974966 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:49 crc kubenswrapper[4731]: I1129 07:06:49.974979 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:49Z","lastTransitionTime":"2025-11-29T07:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:50 crc kubenswrapper[4731]: I1129 07:06:50.077443 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:50 crc kubenswrapper[4731]: I1129 07:06:50.077482 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:50 crc kubenswrapper[4731]: I1129 07:06:50.077499 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:50 crc kubenswrapper[4731]: I1129 07:06:50.077519 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:50 crc kubenswrapper[4731]: I1129 07:06:50.077532 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:50Z","lastTransitionTime":"2025-11-29T07:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:50 crc kubenswrapper[4731]: I1129 07:06:50.180160 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:50 crc kubenswrapper[4731]: I1129 07:06:50.180208 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:50 crc kubenswrapper[4731]: I1129 07:06:50.180220 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:50 crc kubenswrapper[4731]: I1129 07:06:50.180242 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:50 crc kubenswrapper[4731]: I1129 07:06:50.180254 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:50Z","lastTransitionTime":"2025-11-29T07:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:50 crc kubenswrapper[4731]: I1129 07:06:50.281654 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x4t5j_7d4585c4-ac4a-4268-b25e-47509c17cfe2/ovnkube-controller/1.log" Nov 29 07:06:50 crc kubenswrapper[4731]: I1129 07:06:50.282709 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:50 crc kubenswrapper[4731]: I1129 07:06:50.283449 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:50 crc kubenswrapper[4731]: I1129 07:06:50.283853 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:50 crc kubenswrapper[4731]: I1129 07:06:50.284149 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:50 crc kubenswrapper[4731]: I1129 07:06:50.285925 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:50Z","lastTransitionTime":"2025-11-29T07:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:50 crc kubenswrapper[4731]: I1129 07:06:50.286652 4731 scope.go:117] "RemoveContainer" containerID="ab64db6427b4ff60f30e4a0358c77770dd407df7ce1ba82f898a50bb5b1b16ff" Nov 29 07:06:50 crc kubenswrapper[4731]: E1129 07:06:50.286875 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-x4t5j_openshift-ovn-kubernetes(7d4585c4-ac4a-4268-b25e-47509c17cfe2)\"" pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" podUID="7d4585c4-ac4a-4268-b25e-47509c17cfe2" Nov 29 07:06:50 crc kubenswrapper[4731]: I1129 07:06:50.303402 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:50Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:50 crc kubenswrapper[4731]: I1129 07:06:50.317584 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da8cd4c4826d5a431134fff8cd78b168bfededf913cc47058413e38aadd335ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-29T07:06:50Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:50 crc kubenswrapper[4731]: I1129 07:06:50.334053 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5198aca-b43e-4b59-8ec1-15b9d1cd4f4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0b6a87bb787bc87ec1a2b21918e754bda6b0f197bfdae20b010e8464cb9e70d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\"
:\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1938ff140a7af83ef53896ed4482dd3cf0a5429675ca28291de0915b8da27e81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b19f3d5d17a1f3552b659627294f3a47d667e2da2d3ee71e44fc876614f2f5a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea4dcd3b0a9fda14855ec9e54caa59fbbd8953c958ad8d0abc6ad50d8a9143e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiser
ver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f488e951e0b022155c26f1b4b73363e8e5b82ab6f14e03f9812fe279ff21d36\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1129 07:06:25.309042 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1031981952/tls.crt::/tmp/serving-cert-1031981952/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764399966\\\\\\\\\\\\\\\" (2025-11-29 07:06:06 +0000 UTC to 2025-12-29 07:06:07 +0000 UTC (now=2025-11-29 07:06:25.308983691 +0000 UTC))\\\\\\\"\\\\nI1129 07:06:25.309145 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1129 07:06:25.309167 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1129 07:06:25.309190 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764399977\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764399977\\\\\\\\\\\\\\\" (2025-11-29 06:06:17 +0000 UTC to 2026-11-29 06:06:17 +0000 UTC (now=2025-11-29 07:06:25.309162626 +0000 UTC))\\\\\\\"\\\\nI1129 07:06:25.309217 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1129 07:06:25.309274 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1129 07:06:25.309309 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1031981952/tls.crt::/tmp/serving-cert-1031981952/tls.key\\\\\\\"\\\\nI1129 07:06:25.309020 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1129 07:06:25.309457 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1129 07:06:25.309507 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1129 07:06:25.311335 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1277bda1e922e493f1c36b64a4584458ab41cee4b1930d2d6409a585be9e3b38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b7d71882ecd45557e02dfe61f5
72da923732714cd4480848340fbe72dabcd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34b7d71882ecd45557e02dfe61f572da923732714cd4480848340fbe72dabcd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:50Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:50 crc kubenswrapper[4731]: I1129 07:06:50.348838 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2302dbb7-38db-4752-a5d0-2d055da3aec3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b12c2e84a5d5a5e4b2dfdc3a2052706d4b969ccd45719e05be8f1dbec5555cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shf4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2ffc93896b04d748f6dfda932202450e20bc7b2
98cc0d79c0c6499ead481d8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shf4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rscr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:50Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:50 crc kubenswrapper[4731]: I1129 07:06:50.371862 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d4585c4-ac4a-4268-b25e-47509c17cfe2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c34e10091af35202da4f97804418c6232aaccd75303ec402ed3f52de19eba30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77f2885a107dcb4bca4dd71970f28cd5c82abd26240a9b47a25bab79d7a6cfca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a3bcebacca0fb7288976faa2ee468f1496339cc6deba0ce36b29d0537d493f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e2099f7c7618e48c7ca7d3a76e9e63e4af75068a72efb7b27b3565c270787f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ec6b9ad02e848a0669ac486da3f2ccc8ca363136fcd40618c45f16e64388bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd71ed13728cbf7ab7ade3184cf8b579a598046c6cccdd7796f757aa7fc1c0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab64db6427b4ff60f30e4a0358c77770dd407df7ce1ba82f898a50bb5b1b16ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab64db6427b4ff60f30e4a0358c77770dd407df7ce1ba82f898a50bb5b1b16ff\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:06:48Z\\\",\\\"message\\\":\\\"penshift-monitoring/cluster-monitoring-operator for network=default\\\\nI1129 07:06:48.080586 6132 services_controller.go:434] Service openshift-kube-scheduler-operator/metrics retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{metrics openshift-kube-scheduler-operator 760c7338-f39e-4136-9d29-d6fccbd607c1 4364 0 2025-02-23 
05:12:18 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[app:openshift-kube-scheduler-operator] map[exclude.release.openshift.io/internal-openshift-hosted:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-secret-name:kube-scheduler-operator-serving-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc005f6d66b \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: openshift-kube-scheduler-operator,},ClusterIP:10.217.4.233,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-x4t5j_openshift-ovn-kubernetes(7d4585c4-ac4a-4268-b25e-47509c17cfe2)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a6b60ccbba0152ec034f29ebd79fc5f35ee869bcbec1983425f40df246c463c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b769a8a935c8eb447fe825cfb1b4c91be5abbe5992798442096dd3c00ce7c39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b769a8a935c8eb447
fe825cfb1b4c91be5abbe5992798442096dd3c00ce7c39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4t5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:50Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:50 crc kubenswrapper[4731]: I1129 07:06:50.387256 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8tvx8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"719caf85-c94c-4dc2-b28f-f5c4ec29e79e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d765b2fbbea1c5491ee3ac49d3d217921d28d094b6204c0c9ca942f5975f7b8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cfkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8tvx8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:50Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:50 crc kubenswrapper[4731]: I1129 07:06:50.388755 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:50 crc kubenswrapper[4731]: I1129 07:06:50.388821 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:50 crc kubenswrapper[4731]: I1129 07:06:50.388841 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:50 crc kubenswrapper[4731]: I1129 07:06:50.388867 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:50 crc kubenswrapper[4731]: I1129 07:06:50.388886 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:50Z","lastTransitionTime":"2025-11-29T07:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:50 crc kubenswrapper[4731]: I1129 07:06:50.401158 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7d5hj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de2552c-90ca-42ab-94c0-365f2c2380d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca701ec73409c337cc55b1606c0f5def9e370c9c47b6d8f34f05e799ebc3ff36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l44ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cf95f33df0c02101f10f47b6794395211997d2a9741a50b62be363fb5b96dd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l44ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7d5hj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:50Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:50 crc kubenswrapper[4731]: I1129 07:06:50.417156 4731 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:50Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:50 crc kubenswrapper[4731]: I1129 07:06:50.432254 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e77534f53f3d0ba9457f1e42e081f8fa9ddeff238db875686b42dd35fca480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6830f693b439b32e14b4e825fc51d03dddc4834afc7f2fb9c403ebdc706acb7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:50Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:50 crc kubenswrapper[4731]: I1129 07:06:50.452869 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:50Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:50 crc kubenswrapper[4731]: I1129 07:06:50.470853 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea9d7960c59f71d1f4141843ab22ff9910f442c6a92fd496e9923a11fd19578b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:50Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:50 crc kubenswrapper[4731]: I1129 07:06:50.480892 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/944440c1-51b2-4c49-b5fd-4c024fc33ace-metrics-certs\") pod \"network-metrics-daemon-2pp9l\" (UID: \"944440c1-51b2-4c49-b5fd-4c024fc33ace\") " pod="openshift-multus/network-metrics-daemon-2pp9l" Nov 29 07:06:50 crc kubenswrapper[4731]: E1129 07:06:50.481124 4731 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 29 07:06:50 crc kubenswrapper[4731]: E1129 07:06:50.481251 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/944440c1-51b2-4c49-b5fd-4c024fc33ace-metrics-certs podName:944440c1-51b2-4c49-b5fd-4c024fc33ace nodeName:}" failed. No retries permitted until 2025-11-29 07:06:52.481221194 +0000 UTC m=+51.371582297 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/944440c1-51b2-4c49-b5fd-4c024fc33ace-metrics-certs") pod "network-metrics-daemon-2pp9l" (UID: "944440c1-51b2-4c49-b5fd-4c024fc33ace") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 29 07:06:50 crc kubenswrapper[4731]: I1129 07:06:50.485252 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n6mtz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dc0aca-1039-4a30-a83e-48bd320d0eae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0b16629d98d0b9d5ede0c57e9befc86af08f3b501ef46c1e17b38a2ba4e366a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"st
artedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2bbjx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n6mtz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:50Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:50 crc kubenswrapper[4731]: I1129 07:06:50.491172 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:50 crc kubenswrapper[4731]: I1129 07:06:50.491240 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:50 crc kubenswrapper[4731]: I1129 07:06:50.491259 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:50 crc kubenswrapper[4731]: I1129 07:06:50.491286 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:50 crc kubenswrapper[4731]: I1129 07:06:50.491305 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:50Z","lastTransitionTime":"2025-11-29T07:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:06:50 crc kubenswrapper[4731]: I1129 07:06:50.500736 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5rsbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4026306c62b322aa02e08b8ebd6f9d5d1eaa75e5026c9b1fc4c8d3c1ff77e2a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\
\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9z2qj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5rsbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:50Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:50 crc kubenswrapper[4731]: I1129 07:06:50.517831 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sc4p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46a65d85-a3f6-4c1f-8a87-799ccfb861c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c8d9cccf59a8d03f8346fd5c4403c377376ed7e429a1cd7b9ea0507a491342e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:44Z\\\"}},\\\"volumeMounts\\\":
[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e803b8c855802ae1b136acaaf383df0a2f5efccc46c8605813d43a3c7c39a700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e803b8c855802ae1b136acaaf383df0a2f5efccc46c8605813d43a3c7c39a700\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53c810e72af2e060f6378cc0809584254c09ab79d506a8d1b4ff14cc9f005d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53c810e72af2e060f6378cc0809584254c09ab79d506a8d1b4ff14cc9f005d08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db06df6b97d6ad28e3d1df0b237670573e2dc84ad1e4975431d0d6fa74497617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db06df6b97d6ad28e3d1df0b237670573e2dc84ad1e4975431d0d6fa74497617\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cn
i/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e4e1564bb5cd4109016d9716311a4f93f63f7f4f694d1f021390c91495faab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22e4e1564bb5cd4109016d9716311a4f93f63f7f4f694d1f021390c91495faab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76097df468fcbf7cbc2a4d0a4a847310bcc371812fd7601716bb946d2ba43b1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e5431
9f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76097df468fcbf7cbc2a4d0a4a847310bcc371812fd7601716bb946d2ba43b1f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2de6a57f63f8b266688eb7d7efef0c3f0fe74a9cf9911709b8c2f6d6cd2f587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2de6a57f63f8b266688eb7d7efef0c3f0fe74a9cf9911709b8c2f6d6cd2f587\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7sc4p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:50Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:50 crc kubenswrapper[4731]: I1129 07:06:50.530144 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2pp9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"944440c1-51b2-4c49-b5fd-4c024fc33ace\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkwn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkwn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2pp9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:50Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:50 crc 
kubenswrapper[4731]: I1129 07:06:50.543592 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"174eea76-5d67-4e10-9a17-b4efc32676b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a567cf0e6a828fe031138f9f7874d01fe6e034cb814a3c2116c4918ddab71c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6101f1cec3dc38ec587fb109d42b09c353cf5ed6d89c9a2b799eb456b821cf58\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5af4e6864674fc088ce1e153b29b66072dc3aae9b7c8c2c5dd3862a505dbac6a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86bdecb59f656fef2662f8fc78d427a4a9816fc211eedca957f20b3103f63a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:50Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:50 crc kubenswrapper[4731]: I1129 07:06:50.593942 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:50 crc kubenswrapper[4731]: I1129 07:06:50.593991 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:50 crc kubenswrapper[4731]: I1129 07:06:50.594000 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:50 crc kubenswrapper[4731]: I1129 07:06:50.594017 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:50 crc kubenswrapper[4731]: I1129 07:06:50.594028 4731 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:50Z","lastTransitionTime":"2025-11-29T07:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:06:50 crc kubenswrapper[4731]: I1129 07:06:50.696999 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:50 crc kubenswrapper[4731]: I1129 07:06:50.697062 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:50 crc kubenswrapper[4731]: I1129 07:06:50.697079 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:50 crc kubenswrapper[4731]: I1129 07:06:50.697102 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:50 crc kubenswrapper[4731]: I1129 07:06:50.697118 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:50Z","lastTransitionTime":"2025-11-29T07:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:50 crc kubenswrapper[4731]: I1129 07:06:50.799954 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:50 crc kubenswrapper[4731]: I1129 07:06:50.800033 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:50 crc kubenswrapper[4731]: I1129 07:06:50.800049 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:50 crc kubenswrapper[4731]: I1129 07:06:50.800071 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:50 crc kubenswrapper[4731]: I1129 07:06:50.800087 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:50Z","lastTransitionTime":"2025-11-29T07:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:06:50 crc kubenswrapper[4731]: I1129 07:06:50.806448 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2pp9l" Nov 29 07:06:50 crc kubenswrapper[4731]: E1129 07:06:50.806561 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2pp9l" podUID="944440c1-51b2-4c49-b5fd-4c024fc33ace" Nov 29 07:06:50 crc kubenswrapper[4731]: I1129 07:06:50.903111 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:50 crc kubenswrapper[4731]: I1129 07:06:50.903161 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:50 crc kubenswrapper[4731]: I1129 07:06:50.903172 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:50 crc kubenswrapper[4731]: I1129 07:06:50.903193 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:50 crc kubenswrapper[4731]: I1129 07:06:50.903204 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:50Z","lastTransitionTime":"2025-11-29T07:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:51 crc kubenswrapper[4731]: I1129 07:06:51.005883 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:51 crc kubenswrapper[4731]: I1129 07:06:51.005924 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:51 crc kubenswrapper[4731]: I1129 07:06:51.005935 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:51 crc kubenswrapper[4731]: I1129 07:06:51.005953 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:51 crc kubenswrapper[4731]: I1129 07:06:51.005965 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:51Z","lastTransitionTime":"2025-11-29T07:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:51 crc kubenswrapper[4731]: I1129 07:06:51.108789 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:51 crc kubenswrapper[4731]: I1129 07:06:51.108863 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:51 crc kubenswrapper[4731]: I1129 07:06:51.108892 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:51 crc kubenswrapper[4731]: I1129 07:06:51.108925 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:51 crc kubenswrapper[4731]: I1129 07:06:51.108957 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:51Z","lastTransitionTime":"2025-11-29T07:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:51 crc kubenswrapper[4731]: I1129 07:06:51.212759 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:51 crc kubenswrapper[4731]: I1129 07:06:51.212850 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:51 crc kubenswrapper[4731]: I1129 07:06:51.212872 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:51 crc kubenswrapper[4731]: I1129 07:06:51.212901 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:51 crc kubenswrapper[4731]: I1129 07:06:51.212932 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:51Z","lastTransitionTime":"2025-11-29T07:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:51 crc kubenswrapper[4731]: I1129 07:06:51.319409 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:51 crc kubenswrapper[4731]: I1129 07:06:51.319479 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:51 crc kubenswrapper[4731]: I1129 07:06:51.319499 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:51 crc kubenswrapper[4731]: I1129 07:06:51.319613 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:51 crc kubenswrapper[4731]: I1129 07:06:51.319637 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:51Z","lastTransitionTime":"2025-11-29T07:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:51 crc kubenswrapper[4731]: I1129 07:06:51.422227 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:51 crc kubenswrapper[4731]: I1129 07:06:51.422279 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:51 crc kubenswrapper[4731]: I1129 07:06:51.422290 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:51 crc kubenswrapper[4731]: I1129 07:06:51.422308 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:51 crc kubenswrapper[4731]: I1129 07:06:51.422320 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:51Z","lastTransitionTime":"2025-11-29T07:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:51 crc kubenswrapper[4731]: I1129 07:06:51.525880 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:51 crc kubenswrapper[4731]: I1129 07:06:51.525929 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:51 crc kubenswrapper[4731]: I1129 07:06:51.525938 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:51 crc kubenswrapper[4731]: I1129 07:06:51.525957 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:51 crc kubenswrapper[4731]: I1129 07:06:51.525966 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:51Z","lastTransitionTime":"2025-11-29T07:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:51 crc kubenswrapper[4731]: I1129 07:06:51.629752 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:51 crc kubenswrapper[4731]: I1129 07:06:51.629807 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:51 crc kubenswrapper[4731]: I1129 07:06:51.629824 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:51 crc kubenswrapper[4731]: I1129 07:06:51.629845 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:51 crc kubenswrapper[4731]: I1129 07:06:51.629858 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:51Z","lastTransitionTime":"2025-11-29T07:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:51 crc kubenswrapper[4731]: I1129 07:06:51.732947 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:51 crc kubenswrapper[4731]: I1129 07:06:51.733009 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:51 crc kubenswrapper[4731]: I1129 07:06:51.733025 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:51 crc kubenswrapper[4731]: I1129 07:06:51.733047 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:51 crc kubenswrapper[4731]: I1129 07:06:51.733064 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:51Z","lastTransitionTime":"2025-11-29T07:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:06:51 crc kubenswrapper[4731]: I1129 07:06:51.806760 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:06:51 crc kubenswrapper[4731]: I1129 07:06:51.806824 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:06:51 crc kubenswrapper[4731]: I1129 07:06:51.806874 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:06:51 crc kubenswrapper[4731]: E1129 07:06:51.806937 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:06:51 crc kubenswrapper[4731]: E1129 07:06:51.807103 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:06:51 crc kubenswrapper[4731]: E1129 07:06:51.807204 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:06:51 crc kubenswrapper[4731]: I1129 07:06:51.826658 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"174eea76-5d67-4e10-9a17-b4efc32676b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a567cf0e6a828fe031138f9f7874d01fe6e034cb814a3c2116c4918ddab71c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-cert
s\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6101f1cec3dc38ec587fb109d42b09c353cf5ed6d89c9a2b799eb456b821cf58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5af4e6864674fc088ce1e153b29b66072dc3aae9b7c8c2c5dd3862a505dbac6a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86bdecb59f656fef2662f8fc78d427a4a9816fc211eedca957f20b3103f63a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\
\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:51Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:51 crc kubenswrapper[4731]: I1129 07:06:51.836146 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:51 crc kubenswrapper[4731]: I1129 07:06:51.836202 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:51 crc kubenswrapper[4731]: I1129 07:06:51.836215 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:51 crc kubenswrapper[4731]: I1129 07:06:51.836233 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:51 crc kubenswrapper[4731]: I1129 07:06:51.836245 4731 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:51Z","lastTransitionTime":"2025-11-29T07:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:06:51 crc kubenswrapper[4731]: I1129 07:06:51.841778 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea9d7960c59f71d1f4141843ab22ff9910f442c6a92fd496e9923a11fd19578b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name
\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:51Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:51 crc kubenswrapper[4731]: I1129 07:06:51.853926 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n6mtz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dc0aca-1039-4a30-a83e-48bd320d0eae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0b16629d98d0b9d5ede0c57e9befc86af08f3b501
ef46c1e17b38a2ba4e366a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2bbjx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n6mtz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:51Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:51 crc kubenswrapper[4731]: I1129 07:06:51.868619 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5rsbt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4026306c62b322aa02e08b8ebd6f9d5d1eaa75e5026c9b1fc4c8d3c1ff77e2a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9z2qj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5rsbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:51Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:51 crc kubenswrapper[4731]: I1129 07:06:51.885338 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sc4p" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46a65d85-a3f6-4c1f-8a87-799ccfb861c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c8d9cccf59a8d03f8346fd5c4403c377376ed7e429a1cd7b9ea0507a491342e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e803b8c855802ae1b136acaaf383df0a2f5efccc46c860581
3d43a3c7c39a700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e803b8c855802ae1b136acaaf383df0a2f5efccc46c8605813d43a3c7c39a700\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53c810e72af2e060f6378cc0809584254c09ab79d506a8d1b4ff14cc9f005d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53c810e72af2e060f6378cc0809584254c09ab79d506a8d1b4ff14cc9f005d08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:
06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db06df6b97d6ad28e3d1df0b237670573e2dc84ad1e4975431d0d6fa74497617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db06df6b97d6ad28e3d1df0b237670573e2dc84ad1e4975431d0d6fa74497617\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://22e4e1564bb5cd4109016d9716311a4f93f63f7f4f694d1f021390c91495faab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22e4e1564bb5cd4109016d9716311a4f93f63f7f4f694d1f021390c91495faab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76097df468fcbf7cbc2a4d0a4a847310bcc371812fd7601716bb946d2ba43b1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76097df468fcbf7cbc2a4d0a4a847310bcc371812fd7601716bb946d2ba43b1f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:43Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2de6a57f63f8b266688eb7d7efef0c3f0fe74a9cf9911709b8c2f6d6cd2f587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2de6a57f63f8b266688eb7d7efef0c3f0fe74a9cf9911709b8c2f6d6cd2f587\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7sc4p\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:51Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:51 crc kubenswrapper[4731]: I1129 07:06:51.901454 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2pp9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"944440c1-51b2-4c49-b5fd-4c024fc33ace\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkwn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkwn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2pp9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:51Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:51 crc 
kubenswrapper[4731]: I1129 07:06:51.918288 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:51Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:51 crc kubenswrapper[4731]: I1129 07:06:51.937897 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:51 crc kubenswrapper[4731]: I1129 07:06:51.937956 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:51 crc kubenswrapper[4731]: I1129 07:06:51.937979 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:51 crc kubenswrapper[4731]: I1129 07:06:51.937999 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:51 crc kubenswrapper[4731]: I1129 07:06:51.938014 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:51Z","lastTransitionTime":"2025-11-29T07:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:06:51 crc kubenswrapper[4731]: I1129 07:06:51.939448 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da8cd4c4826d5a431134fff8cd78b168bfededf913cc47058413e38aadd335ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:51Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:51 crc kubenswrapper[4731]: I1129 07:06:51.955442 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7d5hj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de2552c-90ca-42ab-94c0-365f2c2380d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca701ec73409c337cc55b1606c0f5def9e370c9c47b6d8f34f05e799ebc3ff36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\
",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l44ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cf95f33df0c02101f10f47b6794395211997d2a9741a50b62be363fb5b96dd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l44ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7d5hj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: 
Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:51Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:51 crc kubenswrapper[4731]: I1129 07:06:51.985467 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5198aca-b43e-4b59-8ec1-15b9d1cd4f4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0b6a87bb787bc87ec1a2b21918e754bda6b0f197bfdae20b010e8464cb9e70d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\
\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1938ff140a7af83ef53896ed4482dd3cf0a5429675ca28291de0915b8da27e81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b19f3d5d17a1f3552b659627294f3a47d667e2da2d3ee71e44fc876614f2f5a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea4dcd3b0a9fda14855ec9e54caa59fbbd8953c958ad8d0abc6ad50d8a9143e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a
4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f488e951e0b022155c26f1b4b73363e8e5b82ab6f14e03f9812fe279ff21d36\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1129 07:06:25.309042 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1031981952/tls.crt::/tmp/serving-cert-1031981952/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764399966\\\\\\\\\\\\\\\" (2025-11-29 07:06:06 +0000 UTC to 2025-12-29 07:06:07 +0000 UTC (now=2025-11-29 07:06:25.308983691 +0000 UTC))\\\\\\\"\\\\nI1129 07:06:25.309145 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1129 07:06:25.309167 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1129 07:06:25.309190 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764399977\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764399977\\\\\\\\\\\\\\\" (2025-11-29 06:06:17 +0000 UTC to 2026-11-29 06:06:17 +0000 UTC (now=2025-11-29 07:06:25.309162626 +0000 UTC))\\\\\\\"\\\\nI1129 07:06:25.309217 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1129 07:06:25.309274 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be 
initiated\\\\nI1129 07:06:25.309309 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1031981952/tls.crt::/tmp/serving-cert-1031981952/tls.key\\\\\\\"\\\\nI1129 07:06:25.309020 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1129 07:06:25.309457 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1129 07:06:25.309507 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1129 07:06:25.311335 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1277bda1e922e493f1c36b64a4584458ab41cee4b1930d2d6409a585be9e3b38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b7d71882ecd45557e02dfe61f572da923732714cd4480848340fbe72dabcd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34b7d71882ecd45557e02dfe61f572da923732714cd4480848340fbe72dabcd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:51Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:51 crc kubenswrapper[4731]: I1129 07:06:51.998922 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2302dbb7-38db-4752-a5d0-2d055da3aec3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b12c2e84a5d5a5e4b2dfdc3a2052706d4b969ccd45719e05be8f1dbec5555cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shf4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2ffc93896b04d748f6dfda932202450e20bc7b2
98cc0d79c0c6499ead481d8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shf4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rscr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:51Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:52 crc kubenswrapper[4731]: I1129 07:06:52.016047 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d4585c4-ac4a-4268-b25e-47509c17cfe2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c34e10091af35202da4f97804418c6232aaccd75303ec402ed3f52de19eba30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77f2885a107dcb4bca4dd71970f28cd5c82abd26240a9b47a25bab79d7a6cfca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a3bcebacca0fb7288976faa2ee468f1496339cc6deba0ce36b29d0537d493f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e2099f7c7618e48c7ca7d3a76e9e63e4af75068a72efb7b27b3565c270787f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ec6b9ad02e848a0669ac486da3f2ccc8ca363136fcd40618c45f16e64388bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd71ed13728cbf7ab7ade3184cf8b579a598046c6cccdd7796f757aa7fc1c0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab64db6427b4ff60f30e4a0358c77770dd407df7ce1ba82f898a50bb5b1b16ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab64db6427b4ff60f30e4a0358c77770dd407df7ce1ba82f898a50bb5b1b16ff\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:06:48Z\\\",\\\"message\\\":\\\"penshift-monitoring/cluster-monitoring-operator for network=default\\\\nI1129 07:06:48.080586 6132 services_controller.go:434] Service openshift-kube-scheduler-operator/metrics retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{metrics openshift-kube-scheduler-operator 760c7338-f39e-4136-9d29-d6fccbd607c1 4364 0 2025-02-23 
05:12:18 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[app:openshift-kube-scheduler-operator] map[exclude.release.openshift.io/internal-openshift-hosted:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-secret-name:kube-scheduler-operator-serving-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc005f6d66b \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: openshift-kube-scheduler-operator,},ClusterIP:10.217.4.233,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-x4t5j_openshift-ovn-kubernetes(7d4585c4-ac4a-4268-b25e-47509c17cfe2)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a6b60ccbba0152ec034f29ebd79fc5f35ee869bcbec1983425f40df246c463c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b769a8a935c8eb447fe825cfb1b4c91be5abbe5992798442096dd3c00ce7c39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b769a8a935c8eb447
fe825cfb1b4c91be5abbe5992798442096dd3c00ce7c39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4t5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:52Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:52 crc kubenswrapper[4731]: I1129 07:06:52.026929 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8tvx8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"719caf85-c94c-4dc2-b28f-f5c4ec29e79e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d765b2fbbea1c5491ee3ac49d3d217921d28d094b6204c0c9ca942f5975f7b8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cfkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8tvx8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:52Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:52 crc kubenswrapper[4731]: I1129 07:06:52.040120 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:52 crc kubenswrapper[4731]: I1129 07:06:52.040175 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:52 crc kubenswrapper[4731]: I1129 07:06:52.040188 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:52 crc kubenswrapper[4731]: I1129 07:06:52.040209 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:52 crc kubenswrapper[4731]: I1129 07:06:52.040223 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:52Z","lastTransitionTime":"2025-11-29T07:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:52 crc kubenswrapper[4731]: I1129 07:06:52.042628 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:52Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:52 crc kubenswrapper[4731]: I1129 07:06:52.055144 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:52Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:52 crc kubenswrapper[4731]: I1129 07:06:52.066435 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e77534f53f3d0ba9457f1e42e081f8fa9ddeff238db875686b42dd35fca480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6830f693b439b32e14b4e825fc51d03dddc4834afc7f2fb9c403ebdc706acb7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:52Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:52 crc kubenswrapper[4731]: I1129 07:06:52.143351 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:52 crc kubenswrapper[4731]: I1129 07:06:52.143415 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:52 crc kubenswrapper[4731]: I1129 07:06:52.143433 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:52 crc kubenswrapper[4731]: I1129 07:06:52.143459 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:52 crc kubenswrapper[4731]: I1129 07:06:52.143490 4731 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:52Z","lastTransitionTime":"2025-11-29T07:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:06:52 crc kubenswrapper[4731]: I1129 07:06:52.246704 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:52 crc kubenswrapper[4731]: I1129 07:06:52.246765 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:52 crc kubenswrapper[4731]: I1129 07:06:52.246783 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:52 crc kubenswrapper[4731]: I1129 07:06:52.246806 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:52 crc kubenswrapper[4731]: I1129 07:06:52.246841 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:52Z","lastTransitionTime":"2025-11-29T07:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:52 crc kubenswrapper[4731]: I1129 07:06:52.349716 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:52 crc kubenswrapper[4731]: I1129 07:06:52.349772 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:52 crc kubenswrapper[4731]: I1129 07:06:52.349784 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:52 crc kubenswrapper[4731]: I1129 07:06:52.349804 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:52 crc kubenswrapper[4731]: I1129 07:06:52.349817 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:52Z","lastTransitionTime":"2025-11-29T07:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:52 crc kubenswrapper[4731]: I1129 07:06:52.452541 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:52 crc kubenswrapper[4731]: I1129 07:06:52.452609 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:52 crc kubenswrapper[4731]: I1129 07:06:52.452628 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:52 crc kubenswrapper[4731]: I1129 07:06:52.452654 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:52 crc kubenswrapper[4731]: I1129 07:06:52.452674 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:52Z","lastTransitionTime":"2025-11-29T07:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:52 crc kubenswrapper[4731]: I1129 07:06:52.509694 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/944440c1-51b2-4c49-b5fd-4c024fc33ace-metrics-certs\") pod \"network-metrics-daemon-2pp9l\" (UID: \"944440c1-51b2-4c49-b5fd-4c024fc33ace\") " pod="openshift-multus/network-metrics-daemon-2pp9l" Nov 29 07:06:52 crc kubenswrapper[4731]: E1129 07:06:52.509982 4731 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 29 07:06:52 crc kubenswrapper[4731]: E1129 07:06:52.510137 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/944440c1-51b2-4c49-b5fd-4c024fc33ace-metrics-certs podName:944440c1-51b2-4c49-b5fd-4c024fc33ace nodeName:}" failed. No retries permitted until 2025-11-29 07:06:56.510105414 +0000 UTC m=+55.400466707 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/944440c1-51b2-4c49-b5fd-4c024fc33ace-metrics-certs") pod "network-metrics-daemon-2pp9l" (UID: "944440c1-51b2-4c49-b5fd-4c024fc33ace") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 29 07:06:52 crc kubenswrapper[4731]: I1129 07:06:52.523650 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:52 crc kubenswrapper[4731]: I1129 07:06:52.523715 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:52 crc kubenswrapper[4731]: I1129 07:06:52.523728 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:52 crc kubenswrapper[4731]: I1129 07:06:52.523751 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:52 crc kubenswrapper[4731]: I1129 07:06:52.523765 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:52Z","lastTransitionTime":"2025-11-29T07:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:52 crc kubenswrapper[4731]: E1129 07:06:52.543073 4731 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:06:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:06:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:06:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:06:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5aaf0a18-6c01-4835-aaaa-2edfd1f90942\\\",\\\"systemUUID\\\":\\\"f3d115a6-d015-4b84-85ef-26fa0172b441\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:52Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:52 crc kubenswrapper[4731]: I1129 07:06:52.548685 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:52 crc kubenswrapper[4731]: I1129 07:06:52.548726 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:52 crc kubenswrapper[4731]: I1129 07:06:52.548740 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:52 crc kubenswrapper[4731]: I1129 07:06:52.548759 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:52 crc kubenswrapper[4731]: I1129 07:06:52.548774 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:52Z","lastTransitionTime":"2025-11-29T07:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:52 crc kubenswrapper[4731]: E1129 07:06:52.561552 4731 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:06:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:06:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:06:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:06:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5aaf0a18-6c01-4835-aaaa-2edfd1f90942\\\",\\\"systemUUID\\\":\\\"f3d115a6-d015-4b84-85ef-26fa0172b441\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:52Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:52 crc kubenswrapper[4731]: I1129 07:06:52.566328 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:52 crc kubenswrapper[4731]: I1129 07:06:52.566382 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:52 crc kubenswrapper[4731]: I1129 07:06:52.566396 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:52 crc kubenswrapper[4731]: I1129 07:06:52.566420 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:52 crc kubenswrapper[4731]: I1129 07:06:52.566435 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:52Z","lastTransitionTime":"2025-11-29T07:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:52 crc kubenswrapper[4731]: E1129 07:06:52.581015 4731 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:06:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:06:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:06:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:06:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5aaf0a18-6c01-4835-aaaa-2edfd1f90942\\\",\\\"systemUUID\\\":\\\"f3d115a6-d015-4b84-85ef-26fa0172b441\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:52Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:52 crc kubenswrapper[4731]: I1129 07:06:52.585248 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:52 crc kubenswrapper[4731]: I1129 07:06:52.585315 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:52 crc kubenswrapper[4731]: I1129 07:06:52.585332 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:52 crc kubenswrapper[4731]: I1129 07:06:52.585355 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:52 crc kubenswrapper[4731]: I1129 07:06:52.585393 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:52Z","lastTransitionTime":"2025-11-29T07:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:52 crc kubenswrapper[4731]: E1129 07:06:52.598453 4731 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:06:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:06:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:06:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:06:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5aaf0a18-6c01-4835-aaaa-2edfd1f90942\\\",\\\"systemUUID\\\":\\\"f3d115a6-d015-4b84-85ef-26fa0172b441\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:52Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:52 crc kubenswrapper[4731]: I1129 07:06:52.602357 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:52 crc kubenswrapper[4731]: I1129 07:06:52.602401 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:52 crc kubenswrapper[4731]: I1129 07:06:52.602416 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:52 crc kubenswrapper[4731]: I1129 07:06:52.602435 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:52 crc kubenswrapper[4731]: I1129 07:06:52.602447 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:52Z","lastTransitionTime":"2025-11-29T07:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:52 crc kubenswrapper[4731]: E1129 07:06:52.614011 4731 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:06:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:06:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:06:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:06:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5aaf0a18-6c01-4835-aaaa-2edfd1f90942\\\",\\\"systemUUID\\\":\\\"f3d115a6-d015-4b84-85ef-26fa0172b441\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:52Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:52 crc kubenswrapper[4731]: E1129 07:06:52.614133 4731 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 29 07:06:52 crc kubenswrapper[4731]: I1129 07:06:52.616429 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:52 crc kubenswrapper[4731]: I1129 07:06:52.616465 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:52 crc kubenswrapper[4731]: I1129 07:06:52.616478 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:52 crc kubenswrapper[4731]: I1129 07:06:52.616497 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:52 crc kubenswrapper[4731]: I1129 07:06:52.616508 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:52Z","lastTransitionTime":"2025-11-29T07:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:52 crc kubenswrapper[4731]: I1129 07:06:52.719540 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:52 crc kubenswrapper[4731]: I1129 07:06:52.719604 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:52 crc kubenswrapper[4731]: I1129 07:06:52.719617 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:52 crc kubenswrapper[4731]: I1129 07:06:52.719636 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:52 crc kubenswrapper[4731]: I1129 07:06:52.719651 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:52Z","lastTransitionTime":"2025-11-29T07:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:06:52 crc kubenswrapper[4731]: I1129 07:06:52.806616 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2pp9l" Nov 29 07:06:52 crc kubenswrapper[4731]: E1129 07:06:52.806797 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2pp9l" podUID="944440c1-51b2-4c49-b5fd-4c024fc33ace" Nov 29 07:06:52 crc kubenswrapper[4731]: I1129 07:06:52.822812 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:52 crc kubenswrapper[4731]: I1129 07:06:52.822860 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:52 crc kubenswrapper[4731]: I1129 07:06:52.822875 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:52 crc kubenswrapper[4731]: I1129 07:06:52.822894 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:52 crc kubenswrapper[4731]: I1129 07:06:52.822909 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:52Z","lastTransitionTime":"2025-11-29T07:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:52 crc kubenswrapper[4731]: I1129 07:06:52.925459 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:52 crc kubenswrapper[4731]: I1129 07:06:52.925530 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:52 crc kubenswrapper[4731]: I1129 07:06:52.925546 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:52 crc kubenswrapper[4731]: I1129 07:06:52.925595 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:52 crc kubenswrapper[4731]: I1129 07:06:52.925610 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:52Z","lastTransitionTime":"2025-11-29T07:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:53 crc kubenswrapper[4731]: I1129 07:06:53.028701 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:53 crc kubenswrapper[4731]: I1129 07:06:53.028765 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:53 crc kubenswrapper[4731]: I1129 07:06:53.028780 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:53 crc kubenswrapper[4731]: I1129 07:06:53.028801 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:53 crc kubenswrapper[4731]: I1129 07:06:53.028845 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:53Z","lastTransitionTime":"2025-11-29T07:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:53 crc kubenswrapper[4731]: I1129 07:06:53.132030 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:53 crc kubenswrapper[4731]: I1129 07:06:53.132072 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:53 crc kubenswrapper[4731]: I1129 07:06:53.132085 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:53 crc kubenswrapper[4731]: I1129 07:06:53.132101 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:53 crc kubenswrapper[4731]: I1129 07:06:53.132112 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:53Z","lastTransitionTime":"2025-11-29T07:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:53 crc kubenswrapper[4731]: I1129 07:06:53.235350 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:53 crc kubenswrapper[4731]: I1129 07:06:53.235399 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:53 crc kubenswrapper[4731]: I1129 07:06:53.235412 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:53 crc kubenswrapper[4731]: I1129 07:06:53.235431 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:53 crc kubenswrapper[4731]: I1129 07:06:53.235443 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:53Z","lastTransitionTime":"2025-11-29T07:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:53 crc kubenswrapper[4731]: I1129 07:06:53.338735 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:53 crc kubenswrapper[4731]: I1129 07:06:53.338779 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:53 crc kubenswrapper[4731]: I1129 07:06:53.338789 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:53 crc kubenswrapper[4731]: I1129 07:06:53.338806 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:53 crc kubenswrapper[4731]: I1129 07:06:53.338819 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:53Z","lastTransitionTime":"2025-11-29T07:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:53 crc kubenswrapper[4731]: I1129 07:06:53.442477 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:53 crc kubenswrapper[4731]: I1129 07:06:53.442535 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:53 crc kubenswrapper[4731]: I1129 07:06:53.442550 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:53 crc kubenswrapper[4731]: I1129 07:06:53.442594 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:53 crc kubenswrapper[4731]: I1129 07:06:53.442612 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:53Z","lastTransitionTime":"2025-11-29T07:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:53 crc kubenswrapper[4731]: I1129 07:06:53.546030 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:53 crc kubenswrapper[4731]: I1129 07:06:53.546085 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:53 crc kubenswrapper[4731]: I1129 07:06:53.546106 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:53 crc kubenswrapper[4731]: I1129 07:06:53.546131 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:53 crc kubenswrapper[4731]: I1129 07:06:53.546148 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:53Z","lastTransitionTime":"2025-11-29T07:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:53 crc kubenswrapper[4731]: I1129 07:06:53.648757 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:53 crc kubenswrapper[4731]: I1129 07:06:53.648809 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:53 crc kubenswrapper[4731]: I1129 07:06:53.648828 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:53 crc kubenswrapper[4731]: I1129 07:06:53.648851 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:53 crc kubenswrapper[4731]: I1129 07:06:53.648869 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:53Z","lastTransitionTime":"2025-11-29T07:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:53 crc kubenswrapper[4731]: I1129 07:06:53.752163 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:53 crc kubenswrapper[4731]: I1129 07:06:53.752212 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:53 crc kubenswrapper[4731]: I1129 07:06:53.752223 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:53 crc kubenswrapper[4731]: I1129 07:06:53.752240 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:53 crc kubenswrapper[4731]: I1129 07:06:53.752252 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:53Z","lastTransitionTime":"2025-11-29T07:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:06:53 crc kubenswrapper[4731]: I1129 07:06:53.806623 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:06:53 crc kubenswrapper[4731]: I1129 07:06:53.806680 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:06:53 crc kubenswrapper[4731]: I1129 07:06:53.806662 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:06:53 crc kubenswrapper[4731]: E1129 07:06:53.806808 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:06:53 crc kubenswrapper[4731]: E1129 07:06:53.806871 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:06:53 crc kubenswrapper[4731]: E1129 07:06:53.806954 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:06:53 crc kubenswrapper[4731]: I1129 07:06:53.855311 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:53 crc kubenswrapper[4731]: I1129 07:06:53.855357 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:53 crc kubenswrapper[4731]: I1129 07:06:53.855369 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:53 crc kubenswrapper[4731]: I1129 07:06:53.855643 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:53 crc kubenswrapper[4731]: I1129 07:06:53.855661 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:53Z","lastTransitionTime":"2025-11-29T07:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:53 crc kubenswrapper[4731]: I1129 07:06:53.958307 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:53 crc kubenswrapper[4731]: I1129 07:06:53.958360 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:53 crc kubenswrapper[4731]: I1129 07:06:53.958371 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:53 crc kubenswrapper[4731]: I1129 07:06:53.958392 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:53 crc kubenswrapper[4731]: I1129 07:06:53.958405 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:53Z","lastTransitionTime":"2025-11-29T07:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:54 crc kubenswrapper[4731]: I1129 07:06:54.061613 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:54 crc kubenswrapper[4731]: I1129 07:06:54.061667 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:54 crc kubenswrapper[4731]: I1129 07:06:54.061678 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:54 crc kubenswrapper[4731]: I1129 07:06:54.061695 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:54 crc kubenswrapper[4731]: I1129 07:06:54.061707 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:54Z","lastTransitionTime":"2025-11-29T07:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:54 crc kubenswrapper[4731]: I1129 07:06:54.165641 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:54 crc kubenswrapper[4731]: I1129 07:06:54.165705 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:54 crc kubenswrapper[4731]: I1129 07:06:54.165722 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:54 crc kubenswrapper[4731]: I1129 07:06:54.165742 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:54 crc kubenswrapper[4731]: I1129 07:06:54.165756 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:54Z","lastTransitionTime":"2025-11-29T07:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:54 crc kubenswrapper[4731]: I1129 07:06:54.269299 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:54 crc kubenswrapper[4731]: I1129 07:06:54.269348 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:54 crc kubenswrapper[4731]: I1129 07:06:54.269359 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:54 crc kubenswrapper[4731]: I1129 07:06:54.269376 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:54 crc kubenswrapper[4731]: I1129 07:06:54.269389 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:54Z","lastTransitionTime":"2025-11-29T07:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:54 crc kubenswrapper[4731]: I1129 07:06:54.372062 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:54 crc kubenswrapper[4731]: I1129 07:06:54.372123 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:54 crc kubenswrapper[4731]: I1129 07:06:54.372139 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:54 crc kubenswrapper[4731]: I1129 07:06:54.372161 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:54 crc kubenswrapper[4731]: I1129 07:06:54.372175 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:54Z","lastTransitionTime":"2025-11-29T07:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:54 crc kubenswrapper[4731]: I1129 07:06:54.475191 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:54 crc kubenswrapper[4731]: I1129 07:06:54.475255 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:54 crc kubenswrapper[4731]: I1129 07:06:54.475267 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:54 crc kubenswrapper[4731]: I1129 07:06:54.475289 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:54 crc kubenswrapper[4731]: I1129 07:06:54.475305 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:54Z","lastTransitionTime":"2025-11-29T07:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:54 crc kubenswrapper[4731]: I1129 07:06:54.578731 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:54 crc kubenswrapper[4731]: I1129 07:06:54.578772 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:54 crc kubenswrapper[4731]: I1129 07:06:54.578783 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:54 crc kubenswrapper[4731]: I1129 07:06:54.578799 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:54 crc kubenswrapper[4731]: I1129 07:06:54.578810 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:54Z","lastTransitionTime":"2025-11-29T07:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:54 crc kubenswrapper[4731]: I1129 07:06:54.682435 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:54 crc kubenswrapper[4731]: I1129 07:06:54.682492 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:54 crc kubenswrapper[4731]: I1129 07:06:54.682505 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:54 crc kubenswrapper[4731]: I1129 07:06:54.682529 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:54 crc kubenswrapper[4731]: I1129 07:06:54.682545 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:54Z","lastTransitionTime":"2025-11-29T07:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:54 crc kubenswrapper[4731]: I1129 07:06:54.786058 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:54 crc kubenswrapper[4731]: I1129 07:06:54.786129 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:54 crc kubenswrapper[4731]: I1129 07:06:54.786143 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:54 crc kubenswrapper[4731]: I1129 07:06:54.786162 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:54 crc kubenswrapper[4731]: I1129 07:06:54.786176 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:54Z","lastTransitionTime":"2025-11-29T07:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:06:54 crc kubenswrapper[4731]: I1129 07:06:54.805815 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2pp9l" Nov 29 07:06:54 crc kubenswrapper[4731]: E1129 07:06:54.805995 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2pp9l" podUID="944440c1-51b2-4c49-b5fd-4c024fc33ace" Nov 29 07:06:54 crc kubenswrapper[4731]: I1129 07:06:54.888974 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:54 crc kubenswrapper[4731]: I1129 07:06:54.889026 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:54 crc kubenswrapper[4731]: I1129 07:06:54.889037 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:54 crc kubenswrapper[4731]: I1129 07:06:54.889061 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:54 crc kubenswrapper[4731]: I1129 07:06:54.889077 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:54Z","lastTransitionTime":"2025-11-29T07:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:54 crc kubenswrapper[4731]: I1129 07:06:54.991955 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:54 crc kubenswrapper[4731]: I1129 07:06:54.992008 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:54 crc kubenswrapper[4731]: I1129 07:06:54.992021 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:54 crc kubenswrapper[4731]: I1129 07:06:54.992038 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:54 crc kubenswrapper[4731]: I1129 07:06:54.992050 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:54Z","lastTransitionTime":"2025-11-29T07:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:55 crc kubenswrapper[4731]: I1129 07:06:55.095397 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:55 crc kubenswrapper[4731]: I1129 07:06:55.095462 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:55 crc kubenswrapper[4731]: I1129 07:06:55.095477 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:55 crc kubenswrapper[4731]: I1129 07:06:55.095501 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:55 crc kubenswrapper[4731]: I1129 07:06:55.095515 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:55Z","lastTransitionTime":"2025-11-29T07:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:55 crc kubenswrapper[4731]: I1129 07:06:55.198587 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:55 crc kubenswrapper[4731]: I1129 07:06:55.198658 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:55 crc kubenswrapper[4731]: I1129 07:06:55.198669 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:55 crc kubenswrapper[4731]: I1129 07:06:55.198688 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:55 crc kubenswrapper[4731]: I1129 07:06:55.198706 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:55Z","lastTransitionTime":"2025-11-29T07:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:55 crc kubenswrapper[4731]: I1129 07:06:55.302931 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:55 crc kubenswrapper[4731]: I1129 07:06:55.302992 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:55 crc kubenswrapper[4731]: I1129 07:06:55.303010 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:55 crc kubenswrapper[4731]: I1129 07:06:55.303035 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:55 crc kubenswrapper[4731]: I1129 07:06:55.303054 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:55Z","lastTransitionTime":"2025-11-29T07:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:55 crc kubenswrapper[4731]: I1129 07:06:55.405698 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:55 crc kubenswrapper[4731]: I1129 07:06:55.405750 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:55 crc kubenswrapper[4731]: I1129 07:06:55.405760 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:55 crc kubenswrapper[4731]: I1129 07:06:55.405783 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:55 crc kubenswrapper[4731]: I1129 07:06:55.405796 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:55Z","lastTransitionTime":"2025-11-29T07:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:55 crc kubenswrapper[4731]: I1129 07:06:55.508481 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:55 crc kubenswrapper[4731]: I1129 07:06:55.508543 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:55 crc kubenswrapper[4731]: I1129 07:06:55.508557 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:55 crc kubenswrapper[4731]: I1129 07:06:55.508607 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:55 crc kubenswrapper[4731]: I1129 07:06:55.508620 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:55Z","lastTransitionTime":"2025-11-29T07:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:55 crc kubenswrapper[4731]: I1129 07:06:55.611350 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:55 crc kubenswrapper[4731]: I1129 07:06:55.611399 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:55 crc kubenswrapper[4731]: I1129 07:06:55.611410 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:55 crc kubenswrapper[4731]: I1129 07:06:55.611431 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:55 crc kubenswrapper[4731]: I1129 07:06:55.611441 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:55Z","lastTransitionTime":"2025-11-29T07:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:55 crc kubenswrapper[4731]: I1129 07:06:55.714956 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:55 crc kubenswrapper[4731]: I1129 07:06:55.715007 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:55 crc kubenswrapper[4731]: I1129 07:06:55.715022 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:55 crc kubenswrapper[4731]: I1129 07:06:55.715044 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:55 crc kubenswrapper[4731]: I1129 07:06:55.715060 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:55Z","lastTransitionTime":"2025-11-29T07:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:06:55 crc kubenswrapper[4731]: I1129 07:06:55.806597 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:06:55 crc kubenswrapper[4731]: I1129 07:06:55.806611 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:06:55 crc kubenswrapper[4731]: E1129 07:06:55.806812 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:06:55 crc kubenswrapper[4731]: I1129 07:06:55.806638 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:06:55 crc kubenswrapper[4731]: E1129 07:06:55.806924 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:06:55 crc kubenswrapper[4731]: E1129 07:06:55.807040 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:06:55 crc kubenswrapper[4731]: I1129 07:06:55.818902 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:55 crc kubenswrapper[4731]: I1129 07:06:55.818958 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:55 crc kubenswrapper[4731]: I1129 07:06:55.818970 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:55 crc kubenswrapper[4731]: I1129 07:06:55.819003 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:55 crc kubenswrapper[4731]: I1129 07:06:55.819018 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:55Z","lastTransitionTime":"2025-11-29T07:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:55 crc kubenswrapper[4731]: I1129 07:06:55.870633 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 29 07:06:55 crc kubenswrapper[4731]: I1129 07:06:55.881583 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Nov 29 07:06:55 crc kubenswrapper[4731]: I1129 07:06:55.886452 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da8cd4c4826d5a431134fff8cd78b168bfededf913cc47058413e38aadd335ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\
"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:55Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:55 crc kubenswrapper[4731]: I1129 07:06:55.901016 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:55Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:55 crc kubenswrapper[4731]: I1129 07:06:55.915362 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2302dbb7-38db-4752-a5d0-2d055da3aec3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b12c2e84a5d5a5e4b2dfdc3a2052706d4b969ccd45719e05be8f1dbec5555cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shf4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2ffc93896b04d748f6dfda932202450e20bc7b2
98cc0d79c0c6499ead481d8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shf4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rscr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:55Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:55 crc kubenswrapper[4731]: I1129 07:06:55.922035 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:55 crc kubenswrapper[4731]: I1129 07:06:55.922097 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:55 crc kubenswrapper[4731]: I1129 07:06:55.922111 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:55 crc 
kubenswrapper[4731]: I1129 07:06:55.922132 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:55 crc kubenswrapper[4731]: I1129 07:06:55.922151 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:55Z","lastTransitionTime":"2025-11-29T07:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:06:55 crc kubenswrapper[4731]: I1129 07:06:55.940250 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d4585c4-ac4a-4268-b25e-47509c17cfe2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c34e10091af35202da4f97804418c6232aaccd75303ec402ed3f52de19eba30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77f2885a107dcb4bca4dd71970f28cd5c82abd26240a9b47a25bab79d7a6cfca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a3bcebacca0fb7288976faa2ee468f1496339cc6deba0ce36b29d0537d493f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e2099f7c7618e48c7ca7d3a76e9e63e4af75068a72efb7b27b3565c270787f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ec6b9ad02e848a0669ac486da3f2ccc8ca363136fcd40618c45f16e64388bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd71ed13728cbf7ab7ade3184cf8b579a598046c6cccdd7796f757aa7fc1c0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab64db6427b4ff60f30e4a0358c77770dd407df7ce1ba82f898a50bb5b1b16ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab64db6427b4ff60f30e4a0358c77770dd407df7ce1ba82f898a50bb5b1b16ff\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:06:48Z\\\",\\\"message\\\":\\\"penshift-monitoring/cluster-monitoring-operator for network=default\\\\nI1129 07:06:48.080586 6132 services_controller.go:434] Service openshift-kube-scheduler-operator/metrics retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{metrics openshift-kube-scheduler-operator 760c7338-f39e-4136-9d29-d6fccbd607c1 4364 0 2025-02-23 
05:12:18 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[app:openshift-kube-scheduler-operator] map[exclude.release.openshift.io/internal-openshift-hosted:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-secret-name:kube-scheduler-operator-serving-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc005f6d66b \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: openshift-kube-scheduler-operator,},ClusterIP:10.217.4.233,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-x4t5j_openshift-ovn-kubernetes(7d4585c4-ac4a-4268-b25e-47509c17cfe2)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a6b60ccbba0152ec034f29ebd79fc5f35ee869bcbec1983425f40df246c463c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b769a8a935c8eb447fe825cfb1b4c91be5abbe5992798442096dd3c00ce7c39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b769a8a935c8eb447
fe825cfb1b4c91be5abbe5992798442096dd3c00ce7c39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4t5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:55Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:55 crc kubenswrapper[4731]: I1129 07:06:55.955557 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8tvx8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"719caf85-c94c-4dc2-b28f-f5c4ec29e79e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d765b2fbbea1c5491ee3ac49d3d217921d28d094b6204c0c9ca942f5975f7b8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cfkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8tvx8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:55Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:55 crc kubenswrapper[4731]: I1129 07:06:55.969895 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7d5hj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de2552c-90ca-42ab-94c0-365f2c2380d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca701ec73409c337cc55b1606c0f5def9e370c9c47b6d8f34f05e799ebc3ff36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l44ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cf95f33df0c02101f10f47b6794395211997d2a9741a50b62be363fb5b96dd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l44ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7d5hj\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:55Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:55 crc kubenswrapper[4731]: I1129 07:06:55.987245 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5198aca-b43e-4b59-8ec1-15b9d1cd4f4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0b6a87bb787bc87ec1a2b21918e754bda6b0f197bfdae20b010e8464cb9e70d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\
\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1938ff140a7af83ef53896ed4482dd3cf0a5429675ca28291de0915b8da27e81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b19f3d5d17a1f3552b659627294f3a47d667e2da2d3ee71e44fc876614f2f5a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea4dcd3b0a9fda14855ec9e54caa59fbbd8953c958ad8d0abc6ad50d8a9143e5\\\",\\\"image\
\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f488e951e0b022155c26f1b4b73363e8e5b82ab6f14e03f9812fe279ff21d36\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1129 07:06:25.309042 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1031981952/tls.crt::/tmp/serving-cert-1031981952/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764399966\\\\\\\\\\\\\\\" (2025-11-29 07:06:06 +0000 UTC to 2025-12-29 07:06:07 +0000 UTC (now=2025-11-29 07:06:25.308983691 +0000 UTC))\\\\\\\"\\\\nI1129 07:06:25.309145 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1129 07:06:25.309167 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1129 07:06:25.309190 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764399977\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764399977\\\\\\\\\\\\\\\" (2025-11-29 06:06:17 +0000 UTC to 2026-11-29 06:06:17 +0000 UTC (now=2025-11-29 07:06:25.309162626 +0000 UTC))\\\\\\\"\\\\nI1129 07:06:25.309217 1 secure_serving.go:213] Serving securely on 
[::]:17697\\\\nI1129 07:06:25.309274 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1129 07:06:25.309309 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1031981952/tls.crt::/tmp/serving-cert-1031981952/tls.key\\\\\\\"\\\\nI1129 07:06:25.309020 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1129 07:06:25.309457 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1129 07:06:25.309507 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1129 07:06:25.311335 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1277bda1e922e493f1c36b64a4584458ab41cee4b1930d2d6409a585be9e3b38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\
\":\\\"2025-11-29T07:06:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b7d71882ecd45557e02dfe61f572da923732714cd4480848340fbe72dabcd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34b7d71882ecd45557e02dfe61f572da923732714cd4480848340fbe72dabcd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:55Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:56 crc kubenswrapper[4731]: I1129 07:06:56.001015 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e77534f53f3d0ba9457f1e42e081f8fa9ddeff238db875686b42dd35fca480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6830f693b439b32e14b4e825fc51d03dddc4834afc7f2fb9c403ebdc706acb7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:55Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:56 crc kubenswrapper[4731]: I1129 07:06:56.018403 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:56Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:56 crc kubenswrapper[4731]: I1129 07:06:56.025062 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:56 crc kubenswrapper[4731]: I1129 07:06:56.025103 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:56 crc 
kubenswrapper[4731]: I1129 07:06:56.025119 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:56 crc kubenswrapper[4731]: I1129 07:06:56.025139 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:56 crc kubenswrapper[4731]: I1129 07:06:56.025152 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:56Z","lastTransitionTime":"2025-11-29T07:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:06:56 crc kubenswrapper[4731]: I1129 07:06:56.042284 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:56Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:56 crc kubenswrapper[4731]: I1129 07:06:56.060850 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n6mtz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dc0aca-1039-4a30-a83e-48bd320d0eae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0b16629d98d0b9d5ede0c57e9befc86af08f3b501ef46c1e17b38a2ba4e366a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2bbjx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n6mtz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:56Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:56 crc kubenswrapper[4731]: I1129 07:06:56.081931 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5rsbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4026306c62b322aa02e08b8ebd6f9d5d1eaa75e5026c9b1fc4c8d3c1ff77e2a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9z2qj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5rsbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:56Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:56 crc kubenswrapper[4731]: I1129 07:06:56.116423 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sc4p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46a65d85-a3f6-4c1f-8a87-799ccfb861c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c8d9cccf59a8d03f8346fd5c4403c377376ed7e429a1cd7b9ea0507a491342e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1a
fba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e803b8c855802ae1b136acaaf383df0a2f5efccc46c8605813d43a3c7c39a700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e803b8c855802ae1b136acaaf383df0a2f5efccc46c8605813d43a3c7c39a700\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53c810e72af2e060f6378cc0809584254c09ab79d506a8d1b4ff14cc9f005d08\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53c810e72af2e060f6378cc0809584254c09ab79d506a8d1b4ff14cc9f005d08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db06df6b97d6ad28e3d1df0b237670573e2dc84ad1e4975431d0d6fa74497617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db06df6b97d6ad28e3d1df0b237670573e2dc84ad1e4975431d0d6fa74497617\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2025-11-29T07:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e4e1564bb5cd4109016d9716311a4f93f63f7f4f694d1f021390c91495faab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22e4e1564bb5cd4109016d9716311a4f93f63f7f4f694d1f021390c91495faab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76097df468fcbf7cbc2a4d0a4a847310bcc3718
12fd7601716bb946d2ba43b1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76097df468fcbf7cbc2a4d0a4a847310bcc371812fd7601716bb946d2ba43b1f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2de6a57f63f8b266688eb7d7efef0c3f0fe74a9cf9911709b8c2f6d6cd2f587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2de6a57f63f8b266688eb7d7efef0c3f0fe74a9cf9911709b8c2f6d6cd2f587\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"20
25-11-29T07:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7sc4p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:56Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:56 crc kubenswrapper[4731]: I1129 07:06:56.128395 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:56 crc kubenswrapper[4731]: I1129 07:06:56.128444 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:56 crc kubenswrapper[4731]: I1129 07:06:56.128457 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:56 crc kubenswrapper[4731]: I1129 07:06:56.128478 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:56 crc kubenswrapper[4731]: I1129 07:06:56.128492 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:56Z","lastTransitionTime":"2025-11-29T07:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:06:56 crc kubenswrapper[4731]: I1129 07:06:56.132043 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2pp9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"944440c1-51b2-4c49-b5fd-4c024fc33ace\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkwn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkwn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2pp9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:56Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:56 crc 
kubenswrapper[4731]: I1129 07:06:56.146691 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"174eea76-5d67-4e10-9a17-b4efc32676b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a567cf0e6a828fe031138f9f7874d01fe6e034cb814a3c2116c4918ddab71c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6101f1cec3dc38ec587fb109d42b09c353cf5ed6d89c9a2b799eb456b821cf58\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5af4e6864674fc088ce1e153b29b66072dc3aae9b7c8c2c5dd3862a505dbac6a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86bdecb59f656fef2662f8fc78d427a4a9816fc211eedca957f20b3103f63a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:56Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:56 crc kubenswrapper[4731]: I1129 07:06:56.164978 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea9d7960c59f71d1f4141843ab22ff9910f442c6a92fd496e9923a11fd19578b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-29T07:06:56Z is after 2025-08-24T17:21:41Z" Nov 29 07:06:56 crc kubenswrapper[4731]: I1129 07:06:56.231733 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:56 crc kubenswrapper[4731]: I1129 07:06:56.231794 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:56 crc kubenswrapper[4731]: I1129 07:06:56.231807 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:56 crc kubenswrapper[4731]: I1129 07:06:56.231827 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:56 crc kubenswrapper[4731]: I1129 07:06:56.231840 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:56Z","lastTransitionTime":"2025-11-29T07:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:56 crc kubenswrapper[4731]: I1129 07:06:56.335159 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:56 crc kubenswrapper[4731]: I1129 07:06:56.335211 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:56 crc kubenswrapper[4731]: I1129 07:06:56.335223 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:56 crc kubenswrapper[4731]: I1129 07:06:56.335241 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:56 crc kubenswrapper[4731]: I1129 07:06:56.335256 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:56Z","lastTransitionTime":"2025-11-29T07:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:56 crc kubenswrapper[4731]: I1129 07:06:56.438014 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:56 crc kubenswrapper[4731]: I1129 07:06:56.438068 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:56 crc kubenswrapper[4731]: I1129 07:06:56.438077 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:56 crc kubenswrapper[4731]: I1129 07:06:56.438096 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:56 crc kubenswrapper[4731]: I1129 07:06:56.438116 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:56Z","lastTransitionTime":"2025-11-29T07:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:56 crc kubenswrapper[4731]: I1129 07:06:56.540995 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:56 crc kubenswrapper[4731]: I1129 07:06:56.541055 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:56 crc kubenswrapper[4731]: I1129 07:06:56.541065 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:56 crc kubenswrapper[4731]: I1129 07:06:56.541081 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:56 crc kubenswrapper[4731]: I1129 07:06:56.541093 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:56Z","lastTransitionTime":"2025-11-29T07:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:56 crc kubenswrapper[4731]: I1129 07:06:56.557544 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/944440c1-51b2-4c49-b5fd-4c024fc33ace-metrics-certs\") pod \"network-metrics-daemon-2pp9l\" (UID: \"944440c1-51b2-4c49-b5fd-4c024fc33ace\") " pod="openshift-multus/network-metrics-daemon-2pp9l" Nov 29 07:06:56 crc kubenswrapper[4731]: E1129 07:06:56.557774 4731 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 29 07:06:56 crc kubenswrapper[4731]: E1129 07:06:56.557836 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/944440c1-51b2-4c49-b5fd-4c024fc33ace-metrics-certs podName:944440c1-51b2-4c49-b5fd-4c024fc33ace nodeName:}" failed. No retries permitted until 2025-11-29 07:07:04.557818204 +0000 UTC m=+63.448179307 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/944440c1-51b2-4c49-b5fd-4c024fc33ace-metrics-certs") pod "network-metrics-daemon-2pp9l" (UID: "944440c1-51b2-4c49-b5fd-4c024fc33ace") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 29 07:06:56 crc kubenswrapper[4731]: I1129 07:06:56.643486 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:56 crc kubenswrapper[4731]: I1129 07:06:56.643539 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:56 crc kubenswrapper[4731]: I1129 07:06:56.643556 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:56 crc kubenswrapper[4731]: I1129 07:06:56.643594 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:56 crc kubenswrapper[4731]: I1129 07:06:56.643606 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:56Z","lastTransitionTime":"2025-11-29T07:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:56 crc kubenswrapper[4731]: I1129 07:06:56.747291 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:56 crc kubenswrapper[4731]: I1129 07:06:56.747343 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:56 crc kubenswrapper[4731]: I1129 07:06:56.747353 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:56 crc kubenswrapper[4731]: I1129 07:06:56.747371 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:56 crc kubenswrapper[4731]: I1129 07:06:56.747386 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:56Z","lastTransitionTime":"2025-11-29T07:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:06:56 crc kubenswrapper[4731]: I1129 07:06:56.805969 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2pp9l" Nov 29 07:06:56 crc kubenswrapper[4731]: E1129 07:06:56.806134 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2pp9l" podUID="944440c1-51b2-4c49-b5fd-4c024fc33ace" Nov 29 07:06:56 crc kubenswrapper[4731]: I1129 07:06:56.850886 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:56 crc kubenswrapper[4731]: I1129 07:06:56.850967 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:56 crc kubenswrapper[4731]: I1129 07:06:56.850989 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:56 crc kubenswrapper[4731]: I1129 07:06:56.851011 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:56 crc kubenswrapper[4731]: I1129 07:06:56.851025 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:56Z","lastTransitionTime":"2025-11-29T07:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:56 crc kubenswrapper[4731]: I1129 07:06:56.954439 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:56 crc kubenswrapper[4731]: I1129 07:06:56.954489 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:56 crc kubenswrapper[4731]: I1129 07:06:56.954499 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:56 crc kubenswrapper[4731]: I1129 07:06:56.954517 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:56 crc kubenswrapper[4731]: I1129 07:06:56.954532 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:56Z","lastTransitionTime":"2025-11-29T07:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:57 crc kubenswrapper[4731]: I1129 07:06:57.057188 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:57 crc kubenswrapper[4731]: I1129 07:06:57.057235 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:57 crc kubenswrapper[4731]: I1129 07:06:57.057246 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:57 crc kubenswrapper[4731]: I1129 07:06:57.057263 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:57 crc kubenswrapper[4731]: I1129 07:06:57.057274 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:57Z","lastTransitionTime":"2025-11-29T07:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:57 crc kubenswrapper[4731]: I1129 07:06:57.160810 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:57 crc kubenswrapper[4731]: I1129 07:06:57.160881 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:57 crc kubenswrapper[4731]: I1129 07:06:57.160893 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:57 crc kubenswrapper[4731]: I1129 07:06:57.160918 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:57 crc kubenswrapper[4731]: I1129 07:06:57.160930 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:57Z","lastTransitionTime":"2025-11-29T07:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:57 crc kubenswrapper[4731]: I1129 07:06:57.264822 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:57 crc kubenswrapper[4731]: I1129 07:06:57.264895 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:57 crc kubenswrapper[4731]: I1129 07:06:57.264918 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:57 crc kubenswrapper[4731]: I1129 07:06:57.264947 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:57 crc kubenswrapper[4731]: I1129 07:06:57.264962 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:57Z","lastTransitionTime":"2025-11-29T07:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:57 crc kubenswrapper[4731]: I1129 07:06:57.367814 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:57 crc kubenswrapper[4731]: I1129 07:06:57.367885 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:57 crc kubenswrapper[4731]: I1129 07:06:57.367898 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:57 crc kubenswrapper[4731]: I1129 07:06:57.367915 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:57 crc kubenswrapper[4731]: I1129 07:06:57.367930 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:57Z","lastTransitionTime":"2025-11-29T07:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:57 crc kubenswrapper[4731]: I1129 07:06:57.471368 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:57 crc kubenswrapper[4731]: I1129 07:06:57.471414 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:57 crc kubenswrapper[4731]: I1129 07:06:57.471424 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:57 crc kubenswrapper[4731]: I1129 07:06:57.471446 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:57 crc kubenswrapper[4731]: I1129 07:06:57.471459 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:57Z","lastTransitionTime":"2025-11-29T07:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:57 crc kubenswrapper[4731]: I1129 07:06:57.574941 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:57 crc kubenswrapper[4731]: I1129 07:06:57.574990 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:57 crc kubenswrapper[4731]: I1129 07:06:57.575002 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:57 crc kubenswrapper[4731]: I1129 07:06:57.575018 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:57 crc kubenswrapper[4731]: I1129 07:06:57.575028 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:57Z","lastTransitionTime":"2025-11-29T07:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:06:57 crc kubenswrapper[4731]: I1129 07:06:57.668725 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:06:57 crc kubenswrapper[4731]: E1129 07:06:57.669015 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-29 07:07:29.668975088 +0000 UTC m=+88.559336191 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:06:57 crc kubenswrapper[4731]: I1129 07:06:57.677474 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:57 crc kubenswrapper[4731]: I1129 07:06:57.677523 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:57 crc kubenswrapper[4731]: I1129 07:06:57.677538 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:57 crc kubenswrapper[4731]: I1129 07:06:57.677560 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:57 crc kubenswrapper[4731]: I1129 07:06:57.677602 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:57Z","lastTransitionTime":"2025-11-29T07:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:57 crc kubenswrapper[4731]: I1129 07:06:57.781012 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:57 crc kubenswrapper[4731]: I1129 07:06:57.781078 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:57 crc kubenswrapper[4731]: I1129 07:06:57.781091 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:57 crc kubenswrapper[4731]: I1129 07:06:57.781111 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:57 crc kubenswrapper[4731]: I1129 07:06:57.781128 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:57Z","lastTransitionTime":"2025-11-29T07:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:06:57 crc kubenswrapper[4731]: I1129 07:06:57.806438 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:06:57 crc kubenswrapper[4731]: I1129 07:06:57.806438 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:06:57 crc kubenswrapper[4731]: I1129 07:06:57.806472 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:06:57 crc kubenswrapper[4731]: E1129 07:06:57.806663 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:06:57 crc kubenswrapper[4731]: E1129 07:06:57.806738 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:06:57 crc kubenswrapper[4731]: E1129 07:06:57.806796 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:06:57 crc kubenswrapper[4731]: I1129 07:06:57.885064 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:57 crc kubenswrapper[4731]: I1129 07:06:57.885130 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:57 crc kubenswrapper[4731]: I1129 07:06:57.885139 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:57 crc kubenswrapper[4731]: I1129 07:06:57.885159 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:57 crc kubenswrapper[4731]: I1129 07:06:57.885174 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:57Z","lastTransitionTime":"2025-11-29T07:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:57 crc kubenswrapper[4731]: I1129 07:06:57.988226 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:57 crc kubenswrapper[4731]: I1129 07:06:57.988303 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:57 crc kubenswrapper[4731]: I1129 07:06:57.988318 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:57 crc kubenswrapper[4731]: I1129 07:06:57.988341 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:57 crc kubenswrapper[4731]: I1129 07:06:57.988354 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:57Z","lastTransitionTime":"2025-11-29T07:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:58 crc kubenswrapper[4731]: I1129 07:06:58.091238 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:58 crc kubenswrapper[4731]: I1129 07:06:58.091309 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:58 crc kubenswrapper[4731]: I1129 07:06:58.091324 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:58 crc kubenswrapper[4731]: I1129 07:06:58.091346 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:58 crc kubenswrapper[4731]: I1129 07:06:58.091811 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:58Z","lastTransitionTime":"2025-11-29T07:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:58 crc kubenswrapper[4731]: E1129 07:06:58.175696 4731 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 29 07:06:58 crc kubenswrapper[4731]: I1129 07:06:58.176161 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:06:58 crc kubenswrapper[4731]: E1129 07:06:58.176233 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-29 07:07:30.176210731 +0000 UTC m=+89.066571844 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 29 07:06:58 crc kubenswrapper[4731]: I1129 07:06:58.176329 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:06:58 crc kubenswrapper[4731]: E1129 07:06:58.176449 4731 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 29 07:06:58 crc kubenswrapper[4731]: E1129 07:06:58.176484 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-29 07:07:30.176473149 +0000 UTC m=+89.066834252 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 29 07:06:58 crc kubenswrapper[4731]: I1129 07:06:58.194920 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:58 crc kubenswrapper[4731]: I1129 07:06:58.194974 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:58 crc kubenswrapper[4731]: I1129 07:06:58.194984 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:58 crc kubenswrapper[4731]: I1129 07:06:58.195003 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:58 crc kubenswrapper[4731]: I1129 07:06:58.195015 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:58Z","lastTransitionTime":"2025-11-29T07:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:58 crc kubenswrapper[4731]: I1129 07:06:58.277286 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:06:58 crc kubenswrapper[4731]: I1129 07:06:58.277368 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:06:58 crc kubenswrapper[4731]: E1129 07:06:58.277593 4731 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 29 07:06:58 crc kubenswrapper[4731]: E1129 07:06:58.277631 4731 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 29 07:06:58 crc kubenswrapper[4731]: E1129 07:06:58.277648 4731 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:06:58 crc kubenswrapper[4731]: E1129 07:06:58.277723 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr 
podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-29 07:07:30.277697127 +0000 UTC m=+89.168058230 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:06:58 crc kubenswrapper[4731]: E1129 07:06:58.277593 4731 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 29 07:06:58 crc kubenswrapper[4731]: E1129 07:06:58.277748 4731 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 29 07:06:58 crc kubenswrapper[4731]: E1129 07:06:58.277757 4731 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:06:58 crc kubenswrapper[4731]: E1129 07:06:58.277783 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-29 07:07:30.277775399 +0000 UTC m=+89.168136502 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:06:58 crc kubenswrapper[4731]: I1129 07:06:58.298541 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:58 crc kubenswrapper[4731]: I1129 07:06:58.298631 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:58 crc kubenswrapper[4731]: I1129 07:06:58.298646 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:58 crc kubenswrapper[4731]: I1129 07:06:58.298667 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:58 crc kubenswrapper[4731]: I1129 07:06:58.298686 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:58Z","lastTransitionTime":"2025-11-29T07:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:58 crc kubenswrapper[4731]: I1129 07:06:58.401956 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:58 crc kubenswrapper[4731]: I1129 07:06:58.402008 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:58 crc kubenswrapper[4731]: I1129 07:06:58.402020 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:58 crc kubenswrapper[4731]: I1129 07:06:58.402039 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:58 crc kubenswrapper[4731]: I1129 07:06:58.402052 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:58Z","lastTransitionTime":"2025-11-29T07:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:58 crc kubenswrapper[4731]: I1129 07:06:58.504676 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:58 crc kubenswrapper[4731]: I1129 07:06:58.504729 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:58 crc kubenswrapper[4731]: I1129 07:06:58.504744 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:58 crc kubenswrapper[4731]: I1129 07:06:58.504791 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:58 crc kubenswrapper[4731]: I1129 07:06:58.504803 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:58Z","lastTransitionTime":"2025-11-29T07:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:58 crc kubenswrapper[4731]: I1129 07:06:58.608209 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:58 crc kubenswrapper[4731]: I1129 07:06:58.608265 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:58 crc kubenswrapper[4731]: I1129 07:06:58.608277 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:58 crc kubenswrapper[4731]: I1129 07:06:58.608299 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:58 crc kubenswrapper[4731]: I1129 07:06:58.608310 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:58Z","lastTransitionTime":"2025-11-29T07:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:58 crc kubenswrapper[4731]: I1129 07:06:58.711829 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:58 crc kubenswrapper[4731]: I1129 07:06:58.711882 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:58 crc kubenswrapper[4731]: I1129 07:06:58.711898 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:58 crc kubenswrapper[4731]: I1129 07:06:58.711918 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:58 crc kubenswrapper[4731]: I1129 07:06:58.711932 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:58Z","lastTransitionTime":"2025-11-29T07:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:06:58 crc kubenswrapper[4731]: I1129 07:06:58.806211 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2pp9l" Nov 29 07:06:58 crc kubenswrapper[4731]: E1129 07:06:58.806451 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2pp9l" podUID="944440c1-51b2-4c49-b5fd-4c024fc33ace" Nov 29 07:06:58 crc kubenswrapper[4731]: I1129 07:06:58.814669 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:58 crc kubenswrapper[4731]: I1129 07:06:58.814720 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:58 crc kubenswrapper[4731]: I1129 07:06:58.814730 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:58 crc kubenswrapper[4731]: I1129 07:06:58.814748 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:58 crc kubenswrapper[4731]: I1129 07:06:58.814762 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:58Z","lastTransitionTime":"2025-11-29T07:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:58 crc kubenswrapper[4731]: I1129 07:06:58.917867 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:58 crc kubenswrapper[4731]: I1129 07:06:58.917914 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:58 crc kubenswrapper[4731]: I1129 07:06:58.917923 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:58 crc kubenswrapper[4731]: I1129 07:06:58.917940 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:58 crc kubenswrapper[4731]: I1129 07:06:58.917952 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:58Z","lastTransitionTime":"2025-11-29T07:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:59 crc kubenswrapper[4731]: I1129 07:06:59.021531 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:59 crc kubenswrapper[4731]: I1129 07:06:59.021613 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:59 crc kubenswrapper[4731]: I1129 07:06:59.021627 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:59 crc kubenswrapper[4731]: I1129 07:06:59.021650 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:59 crc kubenswrapper[4731]: I1129 07:06:59.021663 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:59Z","lastTransitionTime":"2025-11-29T07:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:59 crc kubenswrapper[4731]: I1129 07:06:59.124836 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:59 crc kubenswrapper[4731]: I1129 07:06:59.124881 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:59 crc kubenswrapper[4731]: I1129 07:06:59.124893 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:59 crc kubenswrapper[4731]: I1129 07:06:59.124910 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:59 crc kubenswrapper[4731]: I1129 07:06:59.124923 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:59Z","lastTransitionTime":"2025-11-29T07:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:59 crc kubenswrapper[4731]: I1129 07:06:59.227604 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:59 crc kubenswrapper[4731]: I1129 07:06:59.227661 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:59 crc kubenswrapper[4731]: I1129 07:06:59.227677 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:59 crc kubenswrapper[4731]: I1129 07:06:59.227697 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:59 crc kubenswrapper[4731]: I1129 07:06:59.227714 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:59Z","lastTransitionTime":"2025-11-29T07:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:59 crc kubenswrapper[4731]: I1129 07:06:59.330371 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:59 crc kubenswrapper[4731]: I1129 07:06:59.330428 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:59 crc kubenswrapper[4731]: I1129 07:06:59.330438 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:59 crc kubenswrapper[4731]: I1129 07:06:59.330457 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:59 crc kubenswrapper[4731]: I1129 07:06:59.330468 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:59Z","lastTransitionTime":"2025-11-29T07:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:59 crc kubenswrapper[4731]: I1129 07:06:59.433713 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:59 crc kubenswrapper[4731]: I1129 07:06:59.433774 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:59 crc kubenswrapper[4731]: I1129 07:06:59.433787 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:59 crc kubenswrapper[4731]: I1129 07:06:59.433811 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:59 crc kubenswrapper[4731]: I1129 07:06:59.433824 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:59Z","lastTransitionTime":"2025-11-29T07:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:59 crc kubenswrapper[4731]: I1129 07:06:59.536182 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:59 crc kubenswrapper[4731]: I1129 07:06:59.536236 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:59 crc kubenswrapper[4731]: I1129 07:06:59.536247 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:59 crc kubenswrapper[4731]: I1129 07:06:59.536266 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:59 crc kubenswrapper[4731]: I1129 07:06:59.536280 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:59Z","lastTransitionTime":"2025-11-29T07:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:59 crc kubenswrapper[4731]: I1129 07:06:59.639249 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:59 crc kubenswrapper[4731]: I1129 07:06:59.639299 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:59 crc kubenswrapper[4731]: I1129 07:06:59.639309 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:59 crc kubenswrapper[4731]: I1129 07:06:59.639336 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:59 crc kubenswrapper[4731]: I1129 07:06:59.639356 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:59Z","lastTransitionTime":"2025-11-29T07:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:59 crc kubenswrapper[4731]: I1129 07:06:59.741253 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:59 crc kubenswrapper[4731]: I1129 07:06:59.741312 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:59 crc kubenswrapper[4731]: I1129 07:06:59.741322 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:59 crc kubenswrapper[4731]: I1129 07:06:59.741344 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:59 crc kubenswrapper[4731]: I1129 07:06:59.741354 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:59Z","lastTransitionTime":"2025-11-29T07:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:06:59 crc kubenswrapper[4731]: I1129 07:06:59.806118 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:06:59 crc kubenswrapper[4731]: E1129 07:06:59.806291 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:06:59 crc kubenswrapper[4731]: I1129 07:06:59.806544 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:06:59 crc kubenswrapper[4731]: E1129 07:06:59.806627 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:06:59 crc kubenswrapper[4731]: I1129 07:06:59.807168 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:06:59 crc kubenswrapper[4731]: E1129 07:06:59.807402 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:06:59 crc kubenswrapper[4731]: I1129 07:06:59.844415 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:59 crc kubenswrapper[4731]: I1129 07:06:59.844473 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:59 crc kubenswrapper[4731]: I1129 07:06:59.844485 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:59 crc kubenswrapper[4731]: I1129 07:06:59.844502 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:59 crc kubenswrapper[4731]: I1129 07:06:59.844516 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:59Z","lastTransitionTime":"2025-11-29T07:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:06:59 crc kubenswrapper[4731]: I1129 07:06:59.947164 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:06:59 crc kubenswrapper[4731]: I1129 07:06:59.947519 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:06:59 crc kubenswrapper[4731]: I1129 07:06:59.947636 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:06:59 crc kubenswrapper[4731]: I1129 07:06:59.947745 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:06:59 crc kubenswrapper[4731]: I1129 07:06:59.947837 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:06:59Z","lastTransitionTime":"2025-11-29T07:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:00 crc kubenswrapper[4731]: I1129 07:07:00.051211 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:00 crc kubenswrapper[4731]: I1129 07:07:00.051273 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:00 crc kubenswrapper[4731]: I1129 07:07:00.051290 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:00 crc kubenswrapper[4731]: I1129 07:07:00.051319 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:00 crc kubenswrapper[4731]: I1129 07:07:00.051336 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:00Z","lastTransitionTime":"2025-11-29T07:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:00 crc kubenswrapper[4731]: I1129 07:07:00.155067 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:00 crc kubenswrapper[4731]: I1129 07:07:00.155725 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:00 crc kubenswrapper[4731]: I1129 07:07:00.155766 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:00 crc kubenswrapper[4731]: I1129 07:07:00.155799 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:00 crc kubenswrapper[4731]: I1129 07:07:00.155821 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:00Z","lastTransitionTime":"2025-11-29T07:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:00 crc kubenswrapper[4731]: I1129 07:07:00.258669 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:00 crc kubenswrapper[4731]: I1129 07:07:00.258732 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:00 crc kubenswrapper[4731]: I1129 07:07:00.258746 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:00 crc kubenswrapper[4731]: I1129 07:07:00.258768 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:00 crc kubenswrapper[4731]: I1129 07:07:00.258783 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:00Z","lastTransitionTime":"2025-11-29T07:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:00 crc kubenswrapper[4731]: I1129 07:07:00.362176 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:00 crc kubenswrapper[4731]: I1129 07:07:00.362244 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:00 crc kubenswrapper[4731]: I1129 07:07:00.362266 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:00 crc kubenswrapper[4731]: I1129 07:07:00.362288 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:00 crc kubenswrapper[4731]: I1129 07:07:00.362305 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:00Z","lastTransitionTime":"2025-11-29T07:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:00 crc kubenswrapper[4731]: I1129 07:07:00.465838 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:00 crc kubenswrapper[4731]: I1129 07:07:00.465937 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:00 crc kubenswrapper[4731]: I1129 07:07:00.465954 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:00 crc kubenswrapper[4731]: I1129 07:07:00.465974 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:00 crc kubenswrapper[4731]: I1129 07:07:00.465989 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:00Z","lastTransitionTime":"2025-11-29T07:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:00 crc kubenswrapper[4731]: I1129 07:07:00.568430 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:00 crc kubenswrapper[4731]: I1129 07:07:00.568474 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:00 crc kubenswrapper[4731]: I1129 07:07:00.568483 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:00 crc kubenswrapper[4731]: I1129 07:07:00.568499 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:00 crc kubenswrapper[4731]: I1129 07:07:00.568511 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:00Z","lastTransitionTime":"2025-11-29T07:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:00 crc kubenswrapper[4731]: I1129 07:07:00.671450 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:00 crc kubenswrapper[4731]: I1129 07:07:00.671488 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:00 crc kubenswrapper[4731]: I1129 07:07:00.671497 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:00 crc kubenswrapper[4731]: I1129 07:07:00.671512 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:00 crc kubenswrapper[4731]: I1129 07:07:00.671523 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:00Z","lastTransitionTime":"2025-11-29T07:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:00 crc kubenswrapper[4731]: I1129 07:07:00.774349 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:00 crc kubenswrapper[4731]: I1129 07:07:00.774396 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:00 crc kubenswrapper[4731]: I1129 07:07:00.774408 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:00 crc kubenswrapper[4731]: I1129 07:07:00.774428 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:00 crc kubenswrapper[4731]: I1129 07:07:00.774441 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:00Z","lastTransitionTime":"2025-11-29T07:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:07:00 crc kubenswrapper[4731]: I1129 07:07:00.806700 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2pp9l" Nov 29 07:07:00 crc kubenswrapper[4731]: E1129 07:07:00.806878 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2pp9l" podUID="944440c1-51b2-4c49-b5fd-4c024fc33ace" Nov 29 07:07:00 crc kubenswrapper[4731]: I1129 07:07:00.877480 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:00 crc kubenswrapper[4731]: I1129 07:07:00.877555 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:00 crc kubenswrapper[4731]: I1129 07:07:00.877622 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:00 crc kubenswrapper[4731]: I1129 07:07:00.877653 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:00 crc kubenswrapper[4731]: I1129 07:07:00.877674 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:00Z","lastTransitionTime":"2025-11-29T07:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:00 crc kubenswrapper[4731]: I1129 07:07:00.980482 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:00 crc kubenswrapper[4731]: I1129 07:07:00.980553 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:00 crc kubenswrapper[4731]: I1129 07:07:00.980610 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:00 crc kubenswrapper[4731]: I1129 07:07:00.980636 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:00 crc kubenswrapper[4731]: I1129 07:07:00.980652 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:00Z","lastTransitionTime":"2025-11-29T07:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:01 crc kubenswrapper[4731]: I1129 07:07:01.084185 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:01 crc kubenswrapper[4731]: I1129 07:07:01.084244 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:01 crc kubenswrapper[4731]: I1129 07:07:01.084258 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:01 crc kubenswrapper[4731]: I1129 07:07:01.084279 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:01 crc kubenswrapper[4731]: I1129 07:07:01.084296 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:01Z","lastTransitionTime":"2025-11-29T07:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:01 crc kubenswrapper[4731]: I1129 07:07:01.187200 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:01 crc kubenswrapper[4731]: I1129 07:07:01.187250 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:01 crc kubenswrapper[4731]: I1129 07:07:01.187259 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:01 crc kubenswrapper[4731]: I1129 07:07:01.187295 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:01 crc kubenswrapper[4731]: I1129 07:07:01.187321 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:01Z","lastTransitionTime":"2025-11-29T07:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:01 crc kubenswrapper[4731]: I1129 07:07:01.290286 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:01 crc kubenswrapper[4731]: I1129 07:07:01.290369 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:01 crc kubenswrapper[4731]: I1129 07:07:01.290406 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:01 crc kubenswrapper[4731]: I1129 07:07:01.290432 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:01 crc kubenswrapper[4731]: I1129 07:07:01.290447 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:01Z","lastTransitionTime":"2025-11-29T07:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:01 crc kubenswrapper[4731]: I1129 07:07:01.393635 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:01 crc kubenswrapper[4731]: I1129 07:07:01.393700 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:01 crc kubenswrapper[4731]: I1129 07:07:01.393712 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:01 crc kubenswrapper[4731]: I1129 07:07:01.393729 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:01 crc kubenswrapper[4731]: I1129 07:07:01.393761 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:01Z","lastTransitionTime":"2025-11-29T07:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:01 crc kubenswrapper[4731]: I1129 07:07:01.496653 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:01 crc kubenswrapper[4731]: I1129 07:07:01.496700 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:01 crc kubenswrapper[4731]: I1129 07:07:01.496710 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:01 crc kubenswrapper[4731]: I1129 07:07:01.496726 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:01 crc kubenswrapper[4731]: I1129 07:07:01.496736 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:01Z","lastTransitionTime":"2025-11-29T07:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:01 crc kubenswrapper[4731]: I1129 07:07:01.599641 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:01 crc kubenswrapper[4731]: I1129 07:07:01.600236 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:01 crc kubenswrapper[4731]: I1129 07:07:01.600355 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:01 crc kubenswrapper[4731]: I1129 07:07:01.600459 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:01 crc kubenswrapper[4731]: I1129 07:07:01.600619 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:01Z","lastTransitionTime":"2025-11-29T07:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:01 crc kubenswrapper[4731]: I1129 07:07:01.703938 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:01 crc kubenswrapper[4731]: I1129 07:07:01.703994 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:01 crc kubenswrapper[4731]: I1129 07:07:01.704004 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:01 crc kubenswrapper[4731]: I1129 07:07:01.704024 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:01 crc kubenswrapper[4731]: I1129 07:07:01.704036 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:01Z","lastTransitionTime":"2025-11-29T07:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:01 crc kubenswrapper[4731]: I1129 07:07:01.807260 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:01 crc kubenswrapper[4731]: I1129 07:07:01.807303 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:01 crc kubenswrapper[4731]: I1129 07:07:01.807324 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:01 crc kubenswrapper[4731]: I1129 07:07:01.807349 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:01 crc kubenswrapper[4731]: I1129 07:07:01.807405 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:01Z","lastTransitionTime":"2025-11-29T07:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:07:01 crc kubenswrapper[4731]: I1129 07:07:01.807605 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:07:01 crc kubenswrapper[4731]: I1129 07:07:01.807666 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:07:01 crc kubenswrapper[4731]: E1129 07:07:01.807915 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:07:01 crc kubenswrapper[4731]: I1129 07:07:01.808022 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:07:01 crc kubenswrapper[4731]: E1129 07:07:01.808116 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:07:01 crc kubenswrapper[4731]: E1129 07:07:01.808429 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:07:01 crc kubenswrapper[4731]: I1129 07:07:01.825332 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2pp9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"944440c1-51b2-4c49-b5fd-4c024fc33ace\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkwn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkwn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2pp9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:01Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:01 crc 
kubenswrapper[4731]: I1129 07:07:01.838744 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"174eea76-5d67-4e10-9a17-b4efc32676b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a567cf0e6a828fe031138f9f7874d01fe6e034cb814a3c2116c4918ddab71c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6101f1cec3dc38ec587fb109d42b09c353cf5ed6d89c9a2b799eb456b821cf58\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5af4e6864674fc088ce1e153b29b66072dc3aae9b7c8c2c5dd3862a505dbac6a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86bdecb59f656fef2662f8fc78d427a4a9816fc211eedca957f20b3103f63a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:01Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:01 crc kubenswrapper[4731]: I1129 07:07:01.852646 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea9d7960c59f71d1f4141843ab22ff9910f442c6a92fd496e9923a11fd19578b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:01Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:01 crc kubenswrapper[4731]: I1129 07:07:01.864222 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n6mtz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dc0aca-1039-4a30-a83e-48bd320d0eae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0b16629d98d0b9d5ede0c57e9befc86af08f3b501ef46c1e17b38a2ba4e366a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2bbjx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n6mtz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:01Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:01 crc kubenswrapper[4731]: I1129 07:07:01.878215 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5rsbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4026306c62b322aa02e08b8ebd6f9d5d1eaa75e50
26c9b1fc4c8d3c1ff77e2a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubern
etes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9z2qj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5rsbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:01Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:01 crc kubenswrapper[4731]: I1129 07:07:01.894336 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sc4p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46a65d85-a3f6-4c1f-8a87-799ccfb861c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c8d9cccf59a8d03f8346fd5c
4403c377376ed7e429a1cd7b9ea0507a491342e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e803b8c855802ae1b136acaaf383df0a2f5efccc46c8605813d43a3c7c39a700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e803b8c855802ae1b136acaaf383df0a2f5efccc46c8605813d43a3c7c39a700\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53c810e72af2e060f6378cc0809584254c09ab79d506a8d1b4ff14cc9f005d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53c810e72af2e060f6378cc0809584254c09ab79d506a8d1b4ff14cc9f005d08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db06df6b97d6ad28e3d1df0b237670573e2dc84ad1e4975431d0d6fa74497617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d
7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db06df6b97d6ad28e3d1df0b237670573e2dc84ad1e4975431d0d6fa74497617\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e4e1564bb5cd4109016d9716311a4f93f63f7f4f694d1f021390c91495faab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22e4e1564bb5cd4109016d9716311a4f93f63f7f4f694d1f021390c91495faab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\
\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76097df468fcbf7cbc2a4d0a4a847310bcc371812fd7601716bb946d2ba43b1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76097df468fcbf7cbc2a4d0a4a847310bcc371812fd7601716bb946d2ba43b1f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2de6a57f63f8b266688eb7d7efef0c3f0fe74a9cf9911709b8c2f6d6cd2f587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2de6a57f63f8b266688eb7d7efef0c3f0fe74a9cf9911709b8c2f6d6cd2f587\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7sc4p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:01Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:01 crc kubenswrapper[4731]: I1129 07:07:01.907847 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:01Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:01 crc kubenswrapper[4731]: I1129 07:07:01.911791 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:01 crc kubenswrapper[4731]: I1129 07:07:01.911828 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 29 07:07:01 crc kubenswrapper[4731]: I1129 07:07:01.911854 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:01 crc kubenswrapper[4731]: I1129 07:07:01.911870 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:01 crc kubenswrapper[4731]: I1129 07:07:01.911880 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:01Z","lastTransitionTime":"2025-11-29T07:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:07:01 crc kubenswrapper[4731]: I1129 07:07:01.923024 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da8cd4c4826d5a431134fff8cd78b168bfededf913cc47058413e38aadd335ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-29T07:07:01Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:01 crc kubenswrapper[4731]: I1129 07:07:01.938841 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8tvx8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"719caf85-c94c-4dc2-b28f-f5c4ec29e79e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d765b2fbbea1c5491ee3ac49d3d217921d28d094b6204c0c9ca942f5975f7b8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cfkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8tvx8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:01Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:01 crc kubenswrapper[4731]: I1129 07:07:01.951683 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7d5hj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de2552c-90ca-42ab-94c0-365f2c2380d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca701ec73409c337cc55b1606c0f5def9e370c9c47b6d8f34f05e799ebc3ff36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l44ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cf95f33df0c02101f10f47b6794395211997
d2a9741a50b62be363fb5b96dd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l44ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7d5hj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:01Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:01 crc kubenswrapper[4731]: I1129 07:07:01.966085 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5198aca-b43e-4b59-8ec1-15b9d1cd4f4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0b6a87bb787bc87ec1a2b21918e754bda6b0f197bfdae20b010e8464cb9e70d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1938ff140a7af83ef53896ed4482dd3cf0a5429675ca28291de0915b8da27e81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b19f3d5d17a1f3552b659627294f3a47d667e2da2d3ee71e44fc876614f2f5a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea4dcd3b0a9fda14855ec9e54caa59fbbd8953c958ad8d0abc6ad50d8a9143e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f488e951e0b022155c26f1b4b73363e8e5b82ab6f14e03f9812fe279ff21d36\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:06:25Z\\\"
,\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1129 07:06:25.309042 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1031981952/tls.crt::/tmp/serving-cert-1031981952/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764399966\\\\\\\\\\\\\\\" (2025-11-29 07:06:06 +0000 UTC to 2025-12-29 07:06:07 +0000 UTC (now=2025-11-29 07:06:25.308983691 +0000 UTC))\\\\\\\"\\\\nI1129 07:06:25.309145 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1129 07:06:25.309167 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1129 07:06:25.309190 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764399977\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764399977\\\\\\\\\\\\\\\" (2025-11-29 06:06:17 +0000 UTC to 2026-11-29 06:06:17 +0000 UTC (now=2025-11-29 07:06:25.309162626 +0000 UTC))\\\\\\\"\\\\nI1129 07:06:25.309217 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1129 07:06:25.309274 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1129 07:06:25.309309 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1031981952/tls.crt::/tmp/serving-cert-1031981952/tls.key\\\\\\\"\\\\nI1129 07:06:25.309020 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1129 07:06:25.309457 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1129 07:06:25.309507 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1129 07:06:25.311335 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1277bda1e922e493f1c36b64a4584458ab41cee4b1930d2d6409a585be9e3b38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b7d71882ecd45557e02dfe61f572da923732714cd4480848340fbe72dabcd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34b7d71882ecd45557e02dfe61f572da923732714cd4480848340fbe72dabcd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:01Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:01 crc kubenswrapper[4731]: I1129 07:07:01.979434 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f17ec67c-91b4-419f-b031-38a828a552a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a859abc8925062e0b6f06edef1a87524357b5115db3c780653a4d378af6ba04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fe7e74083569ac159e34aecb62fd9a2bc89cb67c25d104efa3ecd93b71742b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e50d4319120f4c6445252762298822db75d04cad45eff91b9ee9e82335e0f6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2e329eab7a80200257ef2856155c442452b453c8c9cfa15f790d4688ca74573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://f2e329eab7a80200257ef2856155c442452b453c8c9cfa15f790d4688ca74573\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:02Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:01Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:01 crc kubenswrapper[4731]: I1129 07:07:01.992558 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2302dbb7-38db-4752-a5d0-2d055da3aec3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b12c2e84a5d5a5e4b2dfdc3a2052706d4b969ccd45719e05be8f1dbec5555cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shf4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2ffc93896b04d748f6dfda932202450e20bc7b2
98cc0d79c0c6499ead481d8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shf4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rscr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:01Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:02 crc kubenswrapper[4731]: I1129 07:07:02.011186 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d4585c4-ac4a-4268-b25e-47509c17cfe2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c34e10091af35202da4f97804418c6232aaccd75303ec402ed3f52de19eba30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77f2885a107dcb4bca4dd71970f28cd5c82abd26240a9b47a25bab79d7a6cfca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a3bcebacca0fb7288976faa2ee468f1496339cc6deba0ce36b29d0537d493f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e2099f7c7618e48c7ca7d3a76e9e63e4af75068a72efb7b27b3565c270787f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ec6b9ad02e848a0669ac486da3f2ccc8ca363136fcd40618c45f16e64388bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd71ed13728cbf7ab7ade3184cf8b579a598046c6cccdd7796f757aa7fc1c0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab64db6427b4ff60f30e4a0358c77770dd407df7ce1ba82f898a50bb5b1b16ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab64db6427b4ff60f30e4a0358c77770dd407df7ce1ba82f898a50bb5b1b16ff\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:06:48Z\\\",\\\"message\\\":\\\"penshift-monitoring/cluster-monitoring-operator for network=default\\\\nI1129 07:06:48.080586 6132 services_controller.go:434] Service openshift-kube-scheduler-operator/metrics retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{metrics openshift-kube-scheduler-operator 760c7338-f39e-4136-9d29-d6fccbd607c1 4364 0 2025-02-23 
05:12:18 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[app:openshift-kube-scheduler-operator] map[exclude.release.openshift.io/internal-openshift-hosted:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-secret-name:kube-scheduler-operator-serving-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc005f6d66b \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: openshift-kube-scheduler-operator,},ClusterIP:10.217.4.233,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-x4t5j_openshift-ovn-kubernetes(7d4585c4-ac4a-4268-b25e-47509c17cfe2)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a6b60ccbba0152ec034f29ebd79fc5f35ee869bcbec1983425f40df246c463c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b769a8a935c8eb447fe825cfb1b4c91be5abbe5992798442096dd3c00ce7c39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b769a8a935c8eb447
fe825cfb1b4c91be5abbe5992798442096dd3c00ce7c39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4t5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:02Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:02 crc kubenswrapper[4731]: I1129 07:07:02.014812 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:02 crc kubenswrapper[4731]: I1129 07:07:02.014915 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:02 crc kubenswrapper[4731]: I1129 07:07:02.014930 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:02 crc kubenswrapper[4731]: I1129 07:07:02.014976 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:02 crc kubenswrapper[4731]: I1129 07:07:02.014996 4731 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:02Z","lastTransitionTime":"2025-11-29T07:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:07:02 crc kubenswrapper[4731]: I1129 07:07:02.028956 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:02Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:02 crc kubenswrapper[4731]: I1129 07:07:02.044193 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:02Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:02 crc kubenswrapper[4731]: I1129 07:07:02.058960 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e77534f53f3d0ba9457f1e42e081f8fa9ddeff238db875686b42dd35fca480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6830f693b439b32e14b4e825fc51d03dddc4834afc7f2fb9c403ebdc706acb7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:02Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:02 crc kubenswrapper[4731]: I1129 07:07:02.118228 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:02 crc kubenswrapper[4731]: I1129 07:07:02.118293 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:02 crc kubenswrapper[4731]: I1129 07:07:02.118308 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:02 crc kubenswrapper[4731]: I1129 07:07:02.118329 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:02 crc kubenswrapper[4731]: I1129 07:07:02.118346 4731 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:02Z","lastTransitionTime":"2025-11-29T07:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:07:02 crc kubenswrapper[4731]: I1129 07:07:02.220922 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:02 crc kubenswrapper[4731]: I1129 07:07:02.220974 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:02 crc kubenswrapper[4731]: I1129 07:07:02.220985 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:02 crc kubenswrapper[4731]: I1129 07:07:02.221006 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:02 crc kubenswrapper[4731]: I1129 07:07:02.221020 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:02Z","lastTransitionTime":"2025-11-29T07:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:02 crc kubenswrapper[4731]: I1129 07:07:02.324073 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:02 crc kubenswrapper[4731]: I1129 07:07:02.324126 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:02 crc kubenswrapper[4731]: I1129 07:07:02.324137 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:02 crc kubenswrapper[4731]: I1129 07:07:02.324155 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:02 crc kubenswrapper[4731]: I1129 07:07:02.324168 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:02Z","lastTransitionTime":"2025-11-29T07:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:02 crc kubenswrapper[4731]: I1129 07:07:02.427207 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:02 crc kubenswrapper[4731]: I1129 07:07:02.427291 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:02 crc kubenswrapper[4731]: I1129 07:07:02.427306 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:02 crc kubenswrapper[4731]: I1129 07:07:02.427374 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:02 crc kubenswrapper[4731]: I1129 07:07:02.427392 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:02Z","lastTransitionTime":"2025-11-29T07:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:02 crc kubenswrapper[4731]: I1129 07:07:02.530718 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:02 crc kubenswrapper[4731]: I1129 07:07:02.530818 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:02 crc kubenswrapper[4731]: I1129 07:07:02.530834 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:02 crc kubenswrapper[4731]: I1129 07:07:02.530861 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:02 crc kubenswrapper[4731]: I1129 07:07:02.530877 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:02Z","lastTransitionTime":"2025-11-29T07:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:02 crc kubenswrapper[4731]: I1129 07:07:02.633049 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:02 crc kubenswrapper[4731]: I1129 07:07:02.633113 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:02 crc kubenswrapper[4731]: I1129 07:07:02.633128 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:02 crc kubenswrapper[4731]: I1129 07:07:02.633149 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:02 crc kubenswrapper[4731]: I1129 07:07:02.633163 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:02Z","lastTransitionTime":"2025-11-29T07:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:02 crc kubenswrapper[4731]: I1129 07:07:02.638057 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:02 crc kubenswrapper[4731]: I1129 07:07:02.638118 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:02 crc kubenswrapper[4731]: I1129 07:07:02.638131 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:02 crc kubenswrapper[4731]: I1129 07:07:02.638149 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:02 crc kubenswrapper[4731]: I1129 07:07:02.638162 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:02Z","lastTransitionTime":"2025-11-29T07:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:02 crc kubenswrapper[4731]: E1129 07:07:02.653049 4731 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:07:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:07:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:07:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:07:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5aaf0a18-6c01-4835-aaaa-2edfd1f90942\\\",\\\"systemUUID\\\":\\\"f3d115a6-d015-4b84-85ef-26fa0172b441\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:02Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:02 crc kubenswrapper[4731]: I1129 07:07:02.657001 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:02 crc kubenswrapper[4731]: I1129 07:07:02.657047 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:02 crc kubenswrapper[4731]: I1129 07:07:02.657060 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:02 crc kubenswrapper[4731]: I1129 07:07:02.657085 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:02 crc kubenswrapper[4731]: I1129 07:07:02.657099 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:02Z","lastTransitionTime":"2025-11-29T07:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:02 crc kubenswrapper[4731]: E1129 07:07:02.670420 4731 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:07:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:07:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:07:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:07:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5aaf0a18-6c01-4835-aaaa-2edfd1f90942\\\",\\\"systemUUID\\\":\\\"f3d115a6-d015-4b84-85ef-26fa0172b441\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:02Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:02 crc kubenswrapper[4731]: I1129 07:07:02.674807 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:02 crc kubenswrapper[4731]: I1129 07:07:02.674889 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:02 crc kubenswrapper[4731]: I1129 07:07:02.674907 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:02 crc kubenswrapper[4731]: I1129 07:07:02.674926 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:02 crc kubenswrapper[4731]: I1129 07:07:02.674962 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:02Z","lastTransitionTime":"2025-11-29T07:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:02 crc kubenswrapper[4731]: E1129 07:07:02.687285 4731 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:07:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:07:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:07:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:07:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5aaf0a18-6c01-4835-aaaa-2edfd1f90942\\\",\\\"systemUUID\\\":\\\"f3d115a6-d015-4b84-85ef-26fa0172b441\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:02Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:02 crc kubenswrapper[4731]: I1129 07:07:02.690538 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:02 crc kubenswrapper[4731]: I1129 07:07:02.690571 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:02 crc kubenswrapper[4731]: I1129 07:07:02.690585 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:02 crc kubenswrapper[4731]: I1129 07:07:02.690616 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:02 crc kubenswrapper[4731]: I1129 07:07:02.690625 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:02Z","lastTransitionTime":"2025-11-29T07:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:02 crc kubenswrapper[4731]: E1129 07:07:02.703087 4731 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:07:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:07:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:07:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:07:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5aaf0a18-6c01-4835-aaaa-2edfd1f90942\\\",\\\"systemUUID\\\":\\\"f3d115a6-d015-4b84-85ef-26fa0172b441\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:02Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:02 crc kubenswrapper[4731]: I1129 07:07:02.706768 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:02 crc kubenswrapper[4731]: I1129 07:07:02.706796 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:02 crc kubenswrapper[4731]: I1129 07:07:02.706805 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:02 crc kubenswrapper[4731]: I1129 07:07:02.706822 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:02 crc kubenswrapper[4731]: I1129 07:07:02.706832 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:02Z","lastTransitionTime":"2025-11-29T07:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:02 crc kubenswrapper[4731]: E1129 07:07:02.719082 4731 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:07:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:07:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:07:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:07:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5aaf0a18-6c01-4835-aaaa-2edfd1f90942\\\",\\\"systemUUID\\\":\\\"f3d115a6-d015-4b84-85ef-26fa0172b441\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:02Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:02 crc kubenswrapper[4731]: E1129 07:07:02.719232 4731 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 29 07:07:02 crc kubenswrapper[4731]: I1129 07:07:02.735249 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:02 crc kubenswrapper[4731]: I1129 07:07:02.735282 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:02 crc kubenswrapper[4731]: I1129 07:07:02.735293 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:02 crc kubenswrapper[4731]: I1129 07:07:02.735308 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:02 crc kubenswrapper[4731]: I1129 07:07:02.735320 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:02Z","lastTransitionTime":"2025-11-29T07:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:07:02 crc kubenswrapper[4731]: I1129 07:07:02.806486 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2pp9l" Nov 29 07:07:02 crc kubenswrapper[4731]: E1129 07:07:02.806682 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2pp9l" podUID="944440c1-51b2-4c49-b5fd-4c024fc33ace" Nov 29 07:07:02 crc kubenswrapper[4731]: I1129 07:07:02.837978 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:02 crc kubenswrapper[4731]: I1129 07:07:02.838018 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:02 crc kubenswrapper[4731]: I1129 07:07:02.838029 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:02 crc kubenswrapper[4731]: I1129 07:07:02.838048 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:02 crc kubenswrapper[4731]: I1129 07:07:02.838060 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:02Z","lastTransitionTime":"2025-11-29T07:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:02 crc kubenswrapper[4731]: I1129 07:07:02.941128 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:02 crc kubenswrapper[4731]: I1129 07:07:02.941198 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:02 crc kubenswrapper[4731]: I1129 07:07:02.941212 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:02 crc kubenswrapper[4731]: I1129 07:07:02.941236 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:02 crc kubenswrapper[4731]: I1129 07:07:02.941257 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:02Z","lastTransitionTime":"2025-11-29T07:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:03 crc kubenswrapper[4731]: I1129 07:07:03.044603 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:03 crc kubenswrapper[4731]: I1129 07:07:03.044651 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:03 crc kubenswrapper[4731]: I1129 07:07:03.044663 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:03 crc kubenswrapper[4731]: I1129 07:07:03.044683 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:03 crc kubenswrapper[4731]: I1129 07:07:03.044699 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:03Z","lastTransitionTime":"2025-11-29T07:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:03 crc kubenswrapper[4731]: I1129 07:07:03.147794 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:03 crc kubenswrapper[4731]: I1129 07:07:03.147835 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:03 crc kubenswrapper[4731]: I1129 07:07:03.147845 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:03 crc kubenswrapper[4731]: I1129 07:07:03.147860 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:03 crc kubenswrapper[4731]: I1129 07:07:03.147869 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:03Z","lastTransitionTime":"2025-11-29T07:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:03 crc kubenswrapper[4731]: I1129 07:07:03.251403 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:03 crc kubenswrapper[4731]: I1129 07:07:03.251439 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:03 crc kubenswrapper[4731]: I1129 07:07:03.251447 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:03 crc kubenswrapper[4731]: I1129 07:07:03.251465 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:03 crc kubenswrapper[4731]: I1129 07:07:03.251476 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:03Z","lastTransitionTime":"2025-11-29T07:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:03 crc kubenswrapper[4731]: I1129 07:07:03.354188 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:03 crc kubenswrapper[4731]: I1129 07:07:03.354274 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:03 crc kubenswrapper[4731]: I1129 07:07:03.354298 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:03 crc kubenswrapper[4731]: I1129 07:07:03.354328 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:03 crc kubenswrapper[4731]: I1129 07:07:03.354352 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:03Z","lastTransitionTime":"2025-11-29T07:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:03 crc kubenswrapper[4731]: I1129 07:07:03.457452 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:03 crc kubenswrapper[4731]: I1129 07:07:03.457495 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:03 crc kubenswrapper[4731]: I1129 07:07:03.457507 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:03 crc kubenswrapper[4731]: I1129 07:07:03.457527 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:03 crc kubenswrapper[4731]: I1129 07:07:03.457541 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:03Z","lastTransitionTime":"2025-11-29T07:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:03 crc kubenswrapper[4731]: I1129 07:07:03.560419 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:03 crc kubenswrapper[4731]: I1129 07:07:03.560467 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:03 crc kubenswrapper[4731]: I1129 07:07:03.560494 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:03 crc kubenswrapper[4731]: I1129 07:07:03.560514 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:03 crc kubenswrapper[4731]: I1129 07:07:03.560529 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:03Z","lastTransitionTime":"2025-11-29T07:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:03 crc kubenswrapper[4731]: I1129 07:07:03.663596 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:03 crc kubenswrapper[4731]: I1129 07:07:03.664244 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:03 crc kubenswrapper[4731]: I1129 07:07:03.664315 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:03 crc kubenswrapper[4731]: I1129 07:07:03.664412 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:03 crc kubenswrapper[4731]: I1129 07:07:03.664485 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:03Z","lastTransitionTime":"2025-11-29T07:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:03 crc kubenswrapper[4731]: I1129 07:07:03.767499 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:03 crc kubenswrapper[4731]: I1129 07:07:03.767867 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:03 crc kubenswrapper[4731]: I1129 07:07:03.767950 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:03 crc kubenswrapper[4731]: I1129 07:07:03.768064 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:03 crc kubenswrapper[4731]: I1129 07:07:03.768198 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:03Z","lastTransitionTime":"2025-11-29T07:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:07:03 crc kubenswrapper[4731]: I1129 07:07:03.806152 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:07:03 crc kubenswrapper[4731]: E1129 07:07:03.806330 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:07:03 crc kubenswrapper[4731]: I1129 07:07:03.806188 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:07:03 crc kubenswrapper[4731]: E1129 07:07:03.806424 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:07:03 crc kubenswrapper[4731]: I1129 07:07:03.806155 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:07:03 crc kubenswrapper[4731]: E1129 07:07:03.806879 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:07:03 crc kubenswrapper[4731]: I1129 07:07:03.871753 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:03 crc kubenswrapper[4731]: I1129 07:07:03.871790 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:03 crc kubenswrapper[4731]: I1129 07:07:03.871802 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:03 crc kubenswrapper[4731]: I1129 07:07:03.871822 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:03 crc kubenswrapper[4731]: I1129 07:07:03.871835 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:03Z","lastTransitionTime":"2025-11-29T07:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:03 crc kubenswrapper[4731]: I1129 07:07:03.974635 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:03 crc kubenswrapper[4731]: I1129 07:07:03.974689 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:03 crc kubenswrapper[4731]: I1129 07:07:03.974702 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:03 crc kubenswrapper[4731]: I1129 07:07:03.974721 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:03 crc kubenswrapper[4731]: I1129 07:07:03.974742 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:03Z","lastTransitionTime":"2025-11-29T07:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:04 crc kubenswrapper[4731]: I1129 07:07:04.077994 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:04 crc kubenswrapper[4731]: I1129 07:07:04.078050 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:04 crc kubenswrapper[4731]: I1129 07:07:04.078060 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:04 crc kubenswrapper[4731]: I1129 07:07:04.078078 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:04 crc kubenswrapper[4731]: I1129 07:07:04.078093 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:04Z","lastTransitionTime":"2025-11-29T07:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:04 crc kubenswrapper[4731]: I1129 07:07:04.181601 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:04 crc kubenswrapper[4731]: I1129 07:07:04.181644 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:04 crc kubenswrapper[4731]: I1129 07:07:04.181653 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:04 crc kubenswrapper[4731]: I1129 07:07:04.181669 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:04 crc kubenswrapper[4731]: I1129 07:07:04.181678 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:04Z","lastTransitionTime":"2025-11-29T07:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:04 crc kubenswrapper[4731]: I1129 07:07:04.284901 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:04 crc kubenswrapper[4731]: I1129 07:07:04.285176 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:04 crc kubenswrapper[4731]: I1129 07:07:04.285259 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:04 crc kubenswrapper[4731]: I1129 07:07:04.285400 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:04 crc kubenswrapper[4731]: I1129 07:07:04.285486 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:04Z","lastTransitionTime":"2025-11-29T07:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:04 crc kubenswrapper[4731]: I1129 07:07:04.387814 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:04 crc kubenswrapper[4731]: I1129 07:07:04.388168 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:04 crc kubenswrapper[4731]: I1129 07:07:04.388240 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:04 crc kubenswrapper[4731]: I1129 07:07:04.388310 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:04 crc kubenswrapper[4731]: I1129 07:07:04.388378 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:04Z","lastTransitionTime":"2025-11-29T07:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:04 crc kubenswrapper[4731]: I1129 07:07:04.491506 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:04 crc kubenswrapper[4731]: I1129 07:07:04.491558 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:04 crc kubenswrapper[4731]: I1129 07:07:04.491590 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:04 crc kubenswrapper[4731]: I1129 07:07:04.491609 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:04 crc kubenswrapper[4731]: I1129 07:07:04.491625 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:04Z","lastTransitionTime":"2025-11-29T07:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:04 crc kubenswrapper[4731]: I1129 07:07:04.594284 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:04 crc kubenswrapper[4731]: I1129 07:07:04.594632 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:04 crc kubenswrapper[4731]: I1129 07:07:04.594804 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:04 crc kubenswrapper[4731]: I1129 07:07:04.595008 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:04 crc kubenswrapper[4731]: I1129 07:07:04.595106 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:04Z","lastTransitionTime":"2025-11-29T07:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:04 crc kubenswrapper[4731]: I1129 07:07:04.656344 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/944440c1-51b2-4c49-b5fd-4c024fc33ace-metrics-certs\") pod \"network-metrics-daemon-2pp9l\" (UID: \"944440c1-51b2-4c49-b5fd-4c024fc33ace\") " pod="openshift-multus/network-metrics-daemon-2pp9l" Nov 29 07:07:04 crc kubenswrapper[4731]: E1129 07:07:04.656555 4731 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 29 07:07:04 crc kubenswrapper[4731]: E1129 07:07:04.656660 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/944440c1-51b2-4c49-b5fd-4c024fc33ace-metrics-certs podName:944440c1-51b2-4c49-b5fd-4c024fc33ace nodeName:}" failed. No retries permitted until 2025-11-29 07:07:20.656636885 +0000 UTC m=+79.546997988 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/944440c1-51b2-4c49-b5fd-4c024fc33ace-metrics-certs") pod "network-metrics-daemon-2pp9l" (UID: "944440c1-51b2-4c49-b5fd-4c024fc33ace") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 29 07:07:04 crc kubenswrapper[4731]: I1129 07:07:04.698316 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:04 crc kubenswrapper[4731]: I1129 07:07:04.698376 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:04 crc kubenswrapper[4731]: I1129 07:07:04.698389 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:04 crc kubenswrapper[4731]: I1129 07:07:04.698413 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:04 crc kubenswrapper[4731]: I1129 07:07:04.698434 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:04Z","lastTransitionTime":"2025-11-29T07:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:04 crc kubenswrapper[4731]: I1129 07:07:04.802048 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:04 crc kubenswrapper[4731]: I1129 07:07:04.802106 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:04 crc kubenswrapper[4731]: I1129 07:07:04.802117 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:04 crc kubenswrapper[4731]: I1129 07:07:04.802138 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:04 crc kubenswrapper[4731]: I1129 07:07:04.802151 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:04Z","lastTransitionTime":"2025-11-29T07:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:07:04 crc kubenswrapper[4731]: I1129 07:07:04.806515 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2pp9l" Nov 29 07:07:04 crc kubenswrapper[4731]: E1129 07:07:04.806725 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2pp9l" podUID="944440c1-51b2-4c49-b5fd-4c024fc33ace" Nov 29 07:07:04 crc kubenswrapper[4731]: I1129 07:07:04.905516 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:04 crc kubenswrapper[4731]: I1129 07:07:04.905580 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:04 crc kubenswrapper[4731]: I1129 07:07:04.905591 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:04 crc kubenswrapper[4731]: I1129 07:07:04.905609 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:04 crc kubenswrapper[4731]: I1129 07:07:04.905631 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:04Z","lastTransitionTime":"2025-11-29T07:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:05 crc kubenswrapper[4731]: I1129 07:07:05.008286 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:05 crc kubenswrapper[4731]: I1129 07:07:05.008350 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:05 crc kubenswrapper[4731]: I1129 07:07:05.008366 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:05 crc kubenswrapper[4731]: I1129 07:07:05.008387 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:05 crc kubenswrapper[4731]: I1129 07:07:05.008402 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:05Z","lastTransitionTime":"2025-11-29T07:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:05 crc kubenswrapper[4731]: I1129 07:07:05.111749 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:05 crc kubenswrapper[4731]: I1129 07:07:05.111796 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:05 crc kubenswrapper[4731]: I1129 07:07:05.111809 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:05 crc kubenswrapper[4731]: I1129 07:07:05.111831 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:05 crc kubenswrapper[4731]: I1129 07:07:05.111844 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:05Z","lastTransitionTime":"2025-11-29T07:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:05 crc kubenswrapper[4731]: I1129 07:07:05.215365 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:05 crc kubenswrapper[4731]: I1129 07:07:05.215435 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:05 crc kubenswrapper[4731]: I1129 07:07:05.215450 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:05 crc kubenswrapper[4731]: I1129 07:07:05.215472 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:05 crc kubenswrapper[4731]: I1129 07:07:05.215487 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:05Z","lastTransitionTime":"2025-11-29T07:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:05 crc kubenswrapper[4731]: I1129 07:07:05.318180 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:05 crc kubenswrapper[4731]: I1129 07:07:05.318247 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:05 crc kubenswrapper[4731]: I1129 07:07:05.318260 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:05 crc kubenswrapper[4731]: I1129 07:07:05.318282 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:05 crc kubenswrapper[4731]: I1129 07:07:05.318297 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:05Z","lastTransitionTime":"2025-11-29T07:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:05 crc kubenswrapper[4731]: I1129 07:07:05.421824 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:05 crc kubenswrapper[4731]: I1129 07:07:05.421885 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:05 crc kubenswrapper[4731]: I1129 07:07:05.421899 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:05 crc kubenswrapper[4731]: I1129 07:07:05.421923 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:05 crc kubenswrapper[4731]: I1129 07:07:05.421944 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:05Z","lastTransitionTime":"2025-11-29T07:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:05 crc kubenswrapper[4731]: I1129 07:07:05.524620 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:05 crc kubenswrapper[4731]: I1129 07:07:05.524654 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:05 crc kubenswrapper[4731]: I1129 07:07:05.524662 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:05 crc kubenswrapper[4731]: I1129 07:07:05.524677 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:05 crc kubenswrapper[4731]: I1129 07:07:05.524687 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:05Z","lastTransitionTime":"2025-11-29T07:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:05 crc kubenswrapper[4731]: I1129 07:07:05.628329 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:05 crc kubenswrapper[4731]: I1129 07:07:05.628374 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:05 crc kubenswrapper[4731]: I1129 07:07:05.628386 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:05 crc kubenswrapper[4731]: I1129 07:07:05.628407 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:05 crc kubenswrapper[4731]: I1129 07:07:05.628418 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:05Z","lastTransitionTime":"2025-11-29T07:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:05 crc kubenswrapper[4731]: I1129 07:07:05.730943 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:05 crc kubenswrapper[4731]: I1129 07:07:05.730993 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:05 crc kubenswrapper[4731]: I1129 07:07:05.731010 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:05 crc kubenswrapper[4731]: I1129 07:07:05.731032 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:05 crc kubenswrapper[4731]: I1129 07:07:05.731044 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:05Z","lastTransitionTime":"2025-11-29T07:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:07:05 crc kubenswrapper[4731]: I1129 07:07:05.805812 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:07:05 crc kubenswrapper[4731]: I1129 07:07:05.805852 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:07:05 crc kubenswrapper[4731]: I1129 07:07:05.805812 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:07:05 crc kubenswrapper[4731]: E1129 07:07:05.806227 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:07:05 crc kubenswrapper[4731]: E1129 07:07:05.806359 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:07:05 crc kubenswrapper[4731]: E1129 07:07:05.806450 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:07:05 crc kubenswrapper[4731]: I1129 07:07:05.806721 4731 scope.go:117] "RemoveContainer" containerID="ab64db6427b4ff60f30e4a0358c77770dd407df7ce1ba82f898a50bb5b1b16ff" Nov 29 07:07:05 crc kubenswrapper[4731]: I1129 07:07:05.836222 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:05 crc kubenswrapper[4731]: I1129 07:07:05.836638 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:05 crc kubenswrapper[4731]: I1129 07:07:05.836656 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:05 crc kubenswrapper[4731]: I1129 07:07:05.836681 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:05 crc kubenswrapper[4731]: I1129 07:07:05.836700 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:05Z","lastTransitionTime":"2025-11-29T07:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:05 crc kubenswrapper[4731]: I1129 07:07:05.940138 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:05 crc kubenswrapper[4731]: I1129 07:07:05.940190 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:05 crc kubenswrapper[4731]: I1129 07:07:05.940201 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:05 crc kubenswrapper[4731]: I1129 07:07:05.940258 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:05 crc kubenswrapper[4731]: I1129 07:07:05.940272 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:05Z","lastTransitionTime":"2025-11-29T07:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:06 crc kubenswrapper[4731]: I1129 07:07:06.042678 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:06 crc kubenswrapper[4731]: I1129 07:07:06.042712 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:06 crc kubenswrapper[4731]: I1129 07:07:06.042722 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:06 crc kubenswrapper[4731]: I1129 07:07:06.042737 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:06 crc kubenswrapper[4731]: I1129 07:07:06.042747 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:06Z","lastTransitionTime":"2025-11-29T07:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:06 crc kubenswrapper[4731]: I1129 07:07:06.146155 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:06 crc kubenswrapper[4731]: I1129 07:07:06.146214 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:06 crc kubenswrapper[4731]: I1129 07:07:06.146228 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:06 crc kubenswrapper[4731]: I1129 07:07:06.146246 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:06 crc kubenswrapper[4731]: I1129 07:07:06.146260 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:06Z","lastTransitionTime":"2025-11-29T07:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:06 crc kubenswrapper[4731]: I1129 07:07:06.249544 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:06 crc kubenswrapper[4731]: I1129 07:07:06.249625 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:06 crc kubenswrapper[4731]: I1129 07:07:06.249639 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:06 crc kubenswrapper[4731]: I1129 07:07:06.249659 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:06 crc kubenswrapper[4731]: I1129 07:07:06.249679 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:06Z","lastTransitionTime":"2025-11-29T07:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:06 crc kubenswrapper[4731]: I1129 07:07:06.344648 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x4t5j_7d4585c4-ac4a-4268-b25e-47509c17cfe2/ovnkube-controller/1.log" Nov 29 07:07:06 crc kubenswrapper[4731]: I1129 07:07:06.348499 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" event={"ID":"7d4585c4-ac4a-4268-b25e-47509c17cfe2","Type":"ContainerStarted","Data":"531921ff34861104985a6d3afd284f2df4d69c25a85f1a9fe0e20f15d5f61d31"} Nov 29 07:07:06 crc kubenswrapper[4731]: I1129 07:07:06.349030 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" Nov 29 07:07:06 crc kubenswrapper[4731]: I1129 07:07:06.351849 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:06 crc kubenswrapper[4731]: I1129 07:07:06.351906 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:06 crc kubenswrapper[4731]: I1129 07:07:06.351920 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:06 crc kubenswrapper[4731]: I1129 07:07:06.351940 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:06 crc kubenswrapper[4731]: I1129 07:07:06.351951 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:06Z","lastTransitionTime":"2025-11-29T07:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:06 crc kubenswrapper[4731]: I1129 07:07:06.365426 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea9d7960c59f71d1f4141843ab22ff9910f442c6a92fd496e9923a11fd19578b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:06Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:06 crc kubenswrapper[4731]: I1129 07:07:06.376882 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n6mtz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dc0aca-1039-4a30-a83e-48bd320d0eae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0b16629d98d0b9d5ede0c57e9befc86af08f3b501ef46c1e17b38a2ba4e366a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2bbjx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n6mtz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:06Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:06 crc kubenswrapper[4731]: I1129 07:07:06.390915 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5rsbt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4026306c62b322aa02e08b8ebd6f9d5d1eaa75e5026c9b1fc4c8d3c1ff77e2a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9z2qj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5rsbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:06Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:06 crc kubenswrapper[4731]: I1129 07:07:06.407472 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sc4p" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46a65d85-a3f6-4c1f-8a87-799ccfb861c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c8d9cccf59a8d03f8346fd5c4403c377376ed7e429a1cd7b9ea0507a491342e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e803b8c855802ae1b136acaaf383df0a2f5efccc46c860581
3d43a3c7c39a700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e803b8c855802ae1b136acaaf383df0a2f5efccc46c8605813d43a3c7c39a700\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53c810e72af2e060f6378cc0809584254c09ab79d506a8d1b4ff14cc9f005d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53c810e72af2e060f6378cc0809584254c09ab79d506a8d1b4ff14cc9f005d08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:
06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db06df6b97d6ad28e3d1df0b237670573e2dc84ad1e4975431d0d6fa74497617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db06df6b97d6ad28e3d1df0b237670573e2dc84ad1e4975431d0d6fa74497617\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://22e4e1564bb5cd4109016d9716311a4f93f63f7f4f694d1f021390c91495faab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22e4e1564bb5cd4109016d9716311a4f93f63f7f4f694d1f021390c91495faab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76097df468fcbf7cbc2a4d0a4a847310bcc371812fd7601716bb946d2ba43b1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76097df468fcbf7cbc2a4d0a4a847310bcc371812fd7601716bb946d2ba43b1f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:43Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2de6a57f63f8b266688eb7d7efef0c3f0fe74a9cf9911709b8c2f6d6cd2f587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2de6a57f63f8b266688eb7d7efef0c3f0fe74a9cf9911709b8c2f6d6cd2f587\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7sc4p\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:06Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:06 crc kubenswrapper[4731]: I1129 07:07:06.422990 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2pp9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"944440c1-51b2-4c49-b5fd-4c024fc33ace\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkwn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkwn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2pp9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:06Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:06 crc 
kubenswrapper[4731]: I1129 07:07:06.444795 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"174eea76-5d67-4e10-9a17-b4efc32676b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a567cf0e6a828fe031138f9f7874d01fe6e034cb814a3c2116c4918ddab71c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6101f1cec3dc38ec587fb109d42b09c353cf5ed6d89c9a2b799eb456b821cf58\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5af4e6864674fc088ce1e153b29b66072dc3aae9b7c8c2c5dd3862a505dbac6a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86bdecb59f656fef2662f8fc78d427a4a9816fc211eedca957f20b3103f63a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:06Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:06 crc kubenswrapper[4731]: I1129 07:07:06.455150 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:06 crc kubenswrapper[4731]: I1129 07:07:06.455189 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:06 crc kubenswrapper[4731]: I1129 07:07:06.455199 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:06 crc kubenswrapper[4731]: I1129 07:07:06.455216 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:06 crc kubenswrapper[4731]: I1129 07:07:06.455226 4731 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:06Z","lastTransitionTime":"2025-11-29T07:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:07:06 crc kubenswrapper[4731]: I1129 07:07:06.468866 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:06Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:06 crc kubenswrapper[4731]: I1129 07:07:06.490343 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da8cd4c4826d5a431134fff8cd78b168bfededf913cc47058413e38aadd335ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-29T07:07:06Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:06 crc kubenswrapper[4731]: I1129 07:07:06.509128 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5198aca-b43e-4b59-8ec1-15b9d1cd4f4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0b6a87bb787bc87ec1a2b21918e754bda6b0f197bfdae20b010e8464cb9e70d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\"
:\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1938ff140a7af83ef53896ed4482dd3cf0a5429675ca28291de0915b8da27e81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b19f3d5d17a1f3552b659627294f3a47d667e2da2d3ee71e44fc876614f2f5a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea4dcd3b0a9fda14855ec9e54caa59fbbd8953c958ad8d0abc6ad50d8a9143e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiser
ver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f488e951e0b022155c26f1b4b73363e8e5b82ab6f14e03f9812fe279ff21d36\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1129 07:06:25.309042 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1031981952/tls.crt::/tmp/serving-cert-1031981952/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764399966\\\\\\\\\\\\\\\" (2025-11-29 07:06:06 +0000 UTC to 2025-12-29 07:06:07 +0000 UTC (now=2025-11-29 07:06:25.308983691 +0000 UTC))\\\\\\\"\\\\nI1129 07:06:25.309145 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1129 07:06:25.309167 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1129 07:06:25.309190 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764399977\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764399977\\\\\\\\\\\\\\\" (2025-11-29 06:06:17 +0000 UTC to 2026-11-29 06:06:17 +0000 UTC (now=2025-11-29 07:06:25.309162626 +0000 UTC))\\\\\\\"\\\\nI1129 07:06:25.309217 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1129 07:06:25.309274 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1129 07:06:25.309309 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1031981952/tls.crt::/tmp/serving-cert-1031981952/tls.key\\\\\\\"\\\\nI1129 07:06:25.309020 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1129 07:06:25.309457 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1129 07:06:25.309507 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1129 07:06:25.311335 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1277bda1e922e493f1c36b64a4584458ab41cee4b1930d2d6409a585be9e3b38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b7d71882ecd45557e02dfe61f5
72da923732714cd4480848340fbe72dabcd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34b7d71882ecd45557e02dfe61f572da923732714cd4480848340fbe72dabcd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:06Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:06 crc kubenswrapper[4731]: I1129 07:07:06.523116 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f17ec67c-91b4-419f-b031-38a828a552a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a859abc8925062e0b6f06edef1a87524357b5115db3c780653a4d378af6ba04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fe7e74083569ac159e34aecb62fd9a2bc89cb67c25d104efa3ecd93b71742b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e50d4319120f4c6445252762298822db75d04cad45eff91b9ee9e82335e0f6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2e329eab7a80200257ef2856155c442452b453c8c9cfa15f790d4688ca74573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://f2e329eab7a80200257ef2856155c442452b453c8c9cfa15f790d4688ca74573\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:02Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:06Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:06 crc kubenswrapper[4731]: I1129 07:07:06.538132 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2302dbb7-38db-4752-a5d0-2d055da3aec3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b12c2e84a5d5a5e4b2dfdc3a2052706d4b969ccd45719e05be8f1dbec5555cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shf4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2ffc93896b04d748f6dfda932202450e20bc7b2
98cc0d79c0c6499ead481d8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shf4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rscr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:06Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:06 crc kubenswrapper[4731]: I1129 07:07:06.559479 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d4585c4-ac4a-4268-b25e-47509c17cfe2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c34e10091af35202da4f97804418c6232aaccd75303ec402ed3f52de19eba30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77f2885a107dcb4bca4dd71970f28cd5c82abd26240a9b47a25bab79d7a6cfca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a3bcebacca0fb7288976faa2ee468f1496339cc6deba0ce36b29d0537d493f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e2099f7c7618e48c7ca7d3a76e9e63e4af75068a72efb7b27b3565c270787f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ec6b9ad02e848a0669ac486da3f2ccc8ca363136fcd40618c45f16e64388bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd71ed13728cbf7ab7ade3184cf8b579a598046c6cccdd7796f757aa7fc1c0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://531921ff34861104985a6d3afd284f2df4d69c25a85f1a9fe0e20f15d5f61d31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab64db6427b4ff60f30e4a0358c77770dd407df7ce1ba82f898a50bb5b1b16ff\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:06:48Z\\\",\\\"message\\\":\\\"penshift-monitoring/cluster-monitoring-operator for network=default\\\\nI1129 07:06:48.080586 6132 services_controller.go:434] Service openshift-kube-scheduler-operator/metrics retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{metrics openshift-kube-scheduler-operator 760c7338-f39e-4136-9d29-d6fccbd607c1 4364 0 2025-02-23 
05:12:18 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[app:openshift-kube-scheduler-operator] map[exclude.release.openshift.io/internal-openshift-hosted:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-secret-name:kube-scheduler-operator-serving-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc005f6d66b \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: openshift-kube-scheduler-operator,},ClusterIP:10.217.4.233,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",
\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a6b60ccbba0152ec034f29ebd79fc5f35ee869bcbec1983425f40df246c463c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\
":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b769a8a935c8eb447fe825cfb1b4c91be5abbe5992798442096dd3c00ce7c39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b769a8a935c8eb447fe825cfb1b4c91be5abbe5992798442096dd3c00ce7c39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4t5j\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:06Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:06 crc kubenswrapper[4731]: I1129 07:07:06.559692 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:06 crc kubenswrapper[4731]: I1129 07:07:06.559740 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:06 crc kubenswrapper[4731]: I1129 07:07:06.559750 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:06 crc kubenswrapper[4731]: I1129 07:07:06.559836 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:06 crc kubenswrapper[4731]: I1129 07:07:06.560017 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:06Z","lastTransitionTime":"2025-11-29T07:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:06 crc kubenswrapper[4731]: I1129 07:07:06.573928 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8tvx8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"719caf85-c94c-4dc2-b28f-f5c4ec29e79e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d765b2fbbea1c5491ee3ac49d3d217921d28d094b6204c0c9ca942f5975f7b8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cfkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8tvx8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:06Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:06 crc kubenswrapper[4731]: I1129 07:07:06.588212 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7d5hj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de2552c-90ca-42ab-94c0-365f2c2380d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca701ec73
409c337cc55b1606c0f5def9e370c9c47b6d8f34f05e799ebc3ff36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l44ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cf95f33df0c02101f10f47b6794395211997d2a9741a50b62be363fb5b96dd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l44ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7d5hj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:06Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:06 crc kubenswrapper[4731]: I1129 07:07:06.604030 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:06Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:06 crc kubenswrapper[4731]: I1129 07:07:06.621495 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e77534f53f3d0ba9457f1e42e081f8fa9ddeff238db875686b42dd35fca480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6830f693b439b32e14b4e825fc51d03dddc4834afc7f2fb9c403ebdc706acb7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:06Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:06 crc kubenswrapper[4731]: I1129 07:07:06.638197 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:06Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:06 crc kubenswrapper[4731]: I1129 07:07:06.663018 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:06 crc kubenswrapper[4731]: I1129 07:07:06.663094 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:06 crc 
kubenswrapper[4731]: I1129 07:07:06.663108 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:06 crc kubenswrapper[4731]: I1129 07:07:06.663129 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:06 crc kubenswrapper[4731]: I1129 07:07:06.663144 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:06Z","lastTransitionTime":"2025-11-29T07:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:07:06 crc kubenswrapper[4731]: I1129 07:07:06.766378 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:06 crc kubenswrapper[4731]: I1129 07:07:06.766428 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:06 crc kubenswrapper[4731]: I1129 07:07:06.766440 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:06 crc kubenswrapper[4731]: I1129 07:07:06.766464 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:06 crc kubenswrapper[4731]: I1129 07:07:06.766478 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:06Z","lastTransitionTime":"2025-11-29T07:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:06 crc kubenswrapper[4731]: I1129 07:07:06.806014 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2pp9l" Nov 29 07:07:06 crc kubenswrapper[4731]: E1129 07:07:06.806187 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2pp9l" podUID="944440c1-51b2-4c49-b5fd-4c024fc33ace" Nov 29 07:07:06 crc kubenswrapper[4731]: I1129 07:07:06.868873 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:06 crc kubenswrapper[4731]: I1129 07:07:06.868934 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:06 crc kubenswrapper[4731]: I1129 07:07:06.868948 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:06 crc kubenswrapper[4731]: I1129 07:07:06.868971 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:06 crc kubenswrapper[4731]: I1129 07:07:06.868989 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:06Z","lastTransitionTime":"2025-11-29T07:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:06 crc kubenswrapper[4731]: I1129 07:07:06.972163 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:06 crc kubenswrapper[4731]: I1129 07:07:06.972231 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:06 crc kubenswrapper[4731]: I1129 07:07:06.972246 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:06 crc kubenswrapper[4731]: I1129 07:07:06.972310 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:06 crc kubenswrapper[4731]: I1129 07:07:06.972328 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:06Z","lastTransitionTime":"2025-11-29T07:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:07 crc kubenswrapper[4731]: I1129 07:07:07.075711 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:07 crc kubenswrapper[4731]: I1129 07:07:07.075767 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:07 crc kubenswrapper[4731]: I1129 07:07:07.075779 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:07 crc kubenswrapper[4731]: I1129 07:07:07.075797 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:07 crc kubenswrapper[4731]: I1129 07:07:07.075809 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:07Z","lastTransitionTime":"2025-11-29T07:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:07 crc kubenswrapper[4731]: I1129 07:07:07.179311 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:07 crc kubenswrapper[4731]: I1129 07:07:07.179447 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:07 crc kubenswrapper[4731]: I1129 07:07:07.179516 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:07 crc kubenswrapper[4731]: I1129 07:07:07.179538 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:07 crc kubenswrapper[4731]: I1129 07:07:07.179551 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:07Z","lastTransitionTime":"2025-11-29T07:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:07 crc kubenswrapper[4731]: I1129 07:07:07.282027 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:07 crc kubenswrapper[4731]: I1129 07:07:07.282085 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:07 crc kubenswrapper[4731]: I1129 07:07:07.282095 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:07 crc kubenswrapper[4731]: I1129 07:07:07.282116 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:07 crc kubenswrapper[4731]: I1129 07:07:07.282127 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:07Z","lastTransitionTime":"2025-11-29T07:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:07 crc kubenswrapper[4731]: I1129 07:07:07.353726 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x4t5j_7d4585c4-ac4a-4268-b25e-47509c17cfe2/ovnkube-controller/2.log" Nov 29 07:07:07 crc kubenswrapper[4731]: I1129 07:07:07.354513 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x4t5j_7d4585c4-ac4a-4268-b25e-47509c17cfe2/ovnkube-controller/1.log" Nov 29 07:07:07 crc kubenswrapper[4731]: I1129 07:07:07.357787 4731 generic.go:334] "Generic (PLEG): container finished" podID="7d4585c4-ac4a-4268-b25e-47509c17cfe2" containerID="531921ff34861104985a6d3afd284f2df4d69c25a85f1a9fe0e20f15d5f61d31" exitCode=1 Nov 29 07:07:07 crc kubenswrapper[4731]: I1129 07:07:07.357874 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" event={"ID":"7d4585c4-ac4a-4268-b25e-47509c17cfe2","Type":"ContainerDied","Data":"531921ff34861104985a6d3afd284f2df4d69c25a85f1a9fe0e20f15d5f61d31"} Nov 29 07:07:07 crc kubenswrapper[4731]: I1129 07:07:07.357949 4731 scope.go:117] "RemoveContainer" containerID="ab64db6427b4ff60f30e4a0358c77770dd407df7ce1ba82f898a50bb5b1b16ff" Nov 29 07:07:07 crc kubenswrapper[4731]: I1129 07:07:07.358802 4731 scope.go:117] "RemoveContainer" containerID="531921ff34861104985a6d3afd284f2df4d69c25a85f1a9fe0e20f15d5f61d31" Nov 29 07:07:07 crc kubenswrapper[4731]: E1129 07:07:07.359019 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-x4t5j_openshift-ovn-kubernetes(7d4585c4-ac4a-4268-b25e-47509c17cfe2)\"" pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" podUID="7d4585c4-ac4a-4268-b25e-47509c17cfe2" Nov 29 07:07:07 crc kubenswrapper[4731]: I1129 07:07:07.379254 4731 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:07Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:07 crc kubenswrapper[4731]: I1129 07:07:07.385964 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:07 crc kubenswrapper[4731]: I1129 07:07:07.386034 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:07 crc kubenswrapper[4731]: I1129 07:07:07.386049 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:07 crc kubenswrapper[4731]: I1129 07:07:07.386072 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:07 crc kubenswrapper[4731]: I1129 07:07:07.386084 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:07Z","lastTransitionTime":"2025-11-29T07:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:07:07 crc kubenswrapper[4731]: I1129 07:07:07.393646 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:07Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:07 crc kubenswrapper[4731]: I1129 07:07:07.410965 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e77534f53f3d0ba9457f1e42e081f8fa9ddeff238db875686b42dd35fca480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6830f693b439b32e14b4e825fc51d03dddc4834afc7f2fb9c403ebdc706acb7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:07Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:07 crc kubenswrapper[4731]: I1129 07:07:07.427885 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5rsbt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4026306c62b322aa02e08b8ebd6f9d5d1eaa75e5026c9b1fc4c8d3c1ff77e2a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9z2qj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5rsbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:07Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:07 crc kubenswrapper[4731]: I1129 07:07:07.444848 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sc4p" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46a65d85-a3f6-4c1f-8a87-799ccfb861c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c8d9cccf59a8d03f8346fd5c4403c377376ed7e429a1cd7b9ea0507a491342e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e803b8c855802ae1b136acaaf383df0a2f5efccc46c860581
3d43a3c7c39a700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e803b8c855802ae1b136acaaf383df0a2f5efccc46c8605813d43a3c7c39a700\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53c810e72af2e060f6378cc0809584254c09ab79d506a8d1b4ff14cc9f005d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53c810e72af2e060f6378cc0809584254c09ab79d506a8d1b4ff14cc9f005d08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:
06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db06df6b97d6ad28e3d1df0b237670573e2dc84ad1e4975431d0d6fa74497617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db06df6b97d6ad28e3d1df0b237670573e2dc84ad1e4975431d0d6fa74497617\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://22e4e1564bb5cd4109016d9716311a4f93f63f7f4f694d1f021390c91495faab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22e4e1564bb5cd4109016d9716311a4f93f63f7f4f694d1f021390c91495faab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76097df468fcbf7cbc2a4d0a4a847310bcc371812fd7601716bb946d2ba43b1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76097df468fcbf7cbc2a4d0a4a847310bcc371812fd7601716bb946d2ba43b1f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:43Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2de6a57f63f8b266688eb7d7efef0c3f0fe74a9cf9911709b8c2f6d6cd2f587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2de6a57f63f8b266688eb7d7efef0c3f0fe74a9cf9911709b8c2f6d6cd2f587\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7sc4p\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:07Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:07 crc kubenswrapper[4731]: I1129 07:07:07.457221 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2pp9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"944440c1-51b2-4c49-b5fd-4c024fc33ace\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkwn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkwn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2pp9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:07Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:07 crc 
kubenswrapper[4731]: I1129 07:07:07.474611 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"174eea76-5d67-4e10-9a17-b4efc32676b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a567cf0e6a828fe031138f9f7874d01fe6e034cb814a3c2116c4918ddab71c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6101f1cec3dc38ec587fb109d42b09c353cf5ed6d89c9a2b799eb456b821cf58\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5af4e6864674fc088ce1e153b29b66072dc3aae9b7c8c2c5dd3862a505dbac6a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86bdecb59f656fef2662f8fc78d427a4a9816fc211eedca957f20b3103f63a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:07Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:07 crc kubenswrapper[4731]: I1129 07:07:07.489223 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:07 crc kubenswrapper[4731]: I1129 07:07:07.489288 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:07 crc kubenswrapper[4731]: I1129 07:07:07.489304 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:07 crc kubenswrapper[4731]: I1129 07:07:07.489330 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:07 crc kubenswrapper[4731]: I1129 07:07:07.489349 4731 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:07Z","lastTransitionTime":"2025-11-29T07:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:07:07 crc kubenswrapper[4731]: I1129 07:07:07.491736 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea9d7960c59f71d1f4141843ab22ff9910f442c6a92fd496e9923a11fd19578b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:07Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:07 crc kubenswrapper[4731]: I1129 07:07:07.510409 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n6mtz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dc0aca-1039-4a30-a83e-48bd320d0eae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0b16629d98d0b9d5ede0c57e9befc86af08f3b501ef46c1e17b38a2ba4e366a\\\",\\\"image\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2bbjx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n6mtz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:07Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:07 crc kubenswrapper[4731]: I1129 07:07:07.527064 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:07Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:07 crc kubenswrapper[4731]: I1129 07:07:07.539924 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da8cd4c4826d5a431134fff8cd78b168bfededf913cc47058413e38aadd335ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-29T07:07:07Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:07 crc kubenswrapper[4731]: I1129 07:07:07.551482 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2302dbb7-38db-4752-a5d0-2d055da3aec3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b12c2e84a5d5a5e4b2dfdc3a2052706d4b969ccd45719e05be8f1dbec5555cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shf4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2ffc93896b04d748f6dfda932202450e20bc7b298cc0d79c0c6499ead481d8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shf4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rscr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:07Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:07 crc kubenswrapper[4731]: I1129 07:07:07.570228 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d4585c4-ac4a-4268-b25e-47509c17cfe2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c34e10091af35202da4f97804418c6232aaccd75303ec402ed3f52de19eba30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77f2885a107dcb4bca4dd71970f28cd5c82abd26240a9b47a25bab79d7a6cfca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a3bcebacca0fb7288976faa2ee468f1496339cc6deba0ce36b29d0537d493f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e2099f7c7618e48c7ca7d3a76e9e63e4af75068a72efb7b27b3565c270787f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ec6b9ad02e848a0669ac486da3f2ccc8ca363136fcd40618c45f16e64388bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd71ed13728cbf7ab7ade3184cf8b579a598046c6cccdd7796f757aa7fc1c0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://531921ff34861104985a6d3afd284f2df4d69c25a85f1a9fe0e20f15d5f61d31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab64db6427b4ff60f30e4a0358c77770dd407df7ce1ba82f898a50bb5b1b16ff\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:06:48Z\\\",\\\"message\\\":\\\"penshift-monitoring/cluster-monitoring-operator for network=default\\\\nI1129 07:06:48.080586 6132 services_controller.go:434] Service openshift-kube-scheduler-operator/metrics retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{metrics openshift-kube-scheduler-operator 760c7338-f39e-4136-9d29-d6fccbd607c1 4364 0 2025-02-23 
05:12:18 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[app:openshift-kube-scheduler-operator] map[exclude.release.openshift.io/internal-openshift-hosted:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-secret-name:kube-scheduler-operator-serving-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc005f6d66b \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: openshift-kube-scheduler-operator,},ClusterIP:10.217.4.233,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://531921ff34861104985a6d3afd284f2df4d69c25a85f1a9fe0e20f15d5f61d31\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:07:07Z\\\",\\\"message\\\":\\\"ps:{GoMap:map[10.217.4.169:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {39432221-5995-412b-967b-35e1a9405ec7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1129 07:07:06.723536 6450 admin_network_policy_controller.go:133] Setting up event handlers for Admin Network Policy\\\\nI1129 07:07:06.723750 6450 ovnkube.go:599] Stopped ovnkube\\\\nI1129 07:07:06.723872 6450 metrics.go:553] 
Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1129 07:07:06.719311 6450 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7d5hj in node crc\\\\nF1129 07:07:06.724013 6450 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify cer\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"
name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a6b60ccbba0152ec034f29ebd79fc5f35ee869bcbec1983425f40df246c463c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"e
nv-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b769a8a935c8eb447fe825cfb1b4c91be5abbe5992798442096dd3c00ce7c39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b769a8a935c8eb447fe825cfb1b4c91be5abbe5992798442096dd3c00ce7c39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4t5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:07Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:07 crc kubenswrapper[4731]: I1129 
07:07:07.581977 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8tvx8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"719caf85-c94c-4dc2-b28f-f5c4ec29e79e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d765b2fbbea1c5491ee3ac49d3d217921d28d094b6204c0c9ca942f5975f7b8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cfkp\\\",\\\"readOnly\\\":t
rue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8tvx8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:07Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:07 crc kubenswrapper[4731]: I1129 07:07:07.592700 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:07 crc kubenswrapper[4731]: I1129 07:07:07.592742 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:07 crc kubenswrapper[4731]: I1129 07:07:07.592758 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:07 crc kubenswrapper[4731]: I1129 07:07:07.592779 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:07 crc kubenswrapper[4731]: I1129 07:07:07.592792 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:07Z","lastTransitionTime":"2025-11-29T07:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:07 crc kubenswrapper[4731]: I1129 07:07:07.597098 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7d5hj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de2552c-90ca-42ab-94c0-365f2c2380d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca701ec73409c337cc55b1606c0f5def9e370c9c47b6d8f34f05e799ebc3ff36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l44ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cf95f33df0c02101f10f47b6794395211997d2a9741a50b62be363fb5b96dd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l44ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7d5hj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:07Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:07 crc kubenswrapper[4731]: I1129 07:07:07.612685 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5198aca-b43e-4b59-8ec1-15b9d1cd4f4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0b6a87bb787bc87ec1a2b21918e754bda6b0f197bfdae20b010e8464cb9e70d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1938ff140a7af83ef53896ed4482dd3cf0a5429675ca28291de0915b8da27e81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f781
4a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b19f3d5d17a1f3552b659627294f3a47d667e2da2d3ee71e44fc876614f2f5a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea4dcd3b0a9fda14855ec9e54caa59fbbd8953c958ad8d0abc6ad50d8a9143e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f488e951e0b022155c26f1b4b73363e8e5b82ab6f14e03f9812fe279ff21d36\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T
07:06:25Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1129 07:06:25.309042 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1031981952/tls.crt::/tmp/serving-cert-1031981952/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764399966\\\\\\\\\\\\\\\" (2025-11-29 07:06:06 +0000 UTC to 2025-12-29 07:06:07 +0000 UTC (now=2025-11-29 07:06:25.308983691 +0000 UTC))\\\\\\\"\\\\nI1129 07:06:25.309145 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1129 07:06:25.309167 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1129 07:06:25.309190 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764399977\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764399977\\\\\\\\\\\\\\\" (2025-11-29 06:06:17 +0000 UTC to 2026-11-29 06:06:17 +0000 UTC (now=2025-11-29 07:06:25.309162626 +0000 UTC))\\\\\\\"\\\\nI1129 07:06:25.309217 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1129 07:06:25.309274 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1129 07:06:25.309309 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1031981952/tls.crt::/tmp/serving-cert-1031981952/tls.key\\\\\\\"\\\\nI1129 07:06:25.309020 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1129 07:06:25.309457 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1129 07:06:25.309507 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1129 07:06:25.311335 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1277bda1e922e493f1c36b64a4584458ab41cee4b1930d2d6409a585be9e3b38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b7d71882ecd45557e02dfe61f572da923732714cd4480848340fbe72dabcd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34b7d71882ecd45557e02dfe61f572da923732714cd4480848340fbe72dabcd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:07Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:07 crc kubenswrapper[4731]: I1129 07:07:07.624918 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f17ec67c-91b4-419f-b031-38a828a552a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a859abc8925062e0b6f06edef1a87524357b5115db3c780653a4d378af6ba04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fe7e74083569ac159e34aecb62fd9a2bc89cb67c25d104efa3ecd93b71742b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e50d4319120f4c6445252762298822db75d04cad45eff91b9ee9e82335e0f6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2e329eab7a80200257ef2856155c442452b453c8c9cfa15f790d4688ca74573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://f2e329eab7a80200257ef2856155c442452b453c8c9cfa15f790d4688ca74573\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:02Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:07Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:07 crc kubenswrapper[4731]: I1129 07:07:07.695197 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:07 crc kubenswrapper[4731]: I1129 07:07:07.695249 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:07 crc kubenswrapper[4731]: I1129 07:07:07.695258 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:07 crc kubenswrapper[4731]: I1129 07:07:07.695276 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:07 crc kubenswrapper[4731]: I1129 07:07:07.695288 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:07Z","lastTransitionTime":"2025-11-29T07:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:07 crc kubenswrapper[4731]: I1129 07:07:07.798172 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:07 crc kubenswrapper[4731]: I1129 07:07:07.798236 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:07 crc kubenswrapper[4731]: I1129 07:07:07.798249 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:07 crc kubenswrapper[4731]: I1129 07:07:07.798267 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:07 crc kubenswrapper[4731]: I1129 07:07:07.798280 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:07Z","lastTransitionTime":"2025-11-29T07:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:07:07 crc kubenswrapper[4731]: I1129 07:07:07.806822 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:07:07 crc kubenswrapper[4731]: I1129 07:07:07.806886 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:07:07 crc kubenswrapper[4731]: I1129 07:07:07.806822 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:07:07 crc kubenswrapper[4731]: E1129 07:07:07.806971 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:07:07 crc kubenswrapper[4731]: E1129 07:07:07.807112 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:07:07 crc kubenswrapper[4731]: E1129 07:07:07.807196 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:07:07 crc kubenswrapper[4731]: I1129 07:07:07.901721 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:07 crc kubenswrapper[4731]: I1129 07:07:07.901792 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:07 crc kubenswrapper[4731]: I1129 07:07:07.901813 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:07 crc kubenswrapper[4731]: I1129 07:07:07.901836 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:07 crc kubenswrapper[4731]: I1129 07:07:07.901853 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:07Z","lastTransitionTime":"2025-11-29T07:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:08 crc kubenswrapper[4731]: I1129 07:07:08.004233 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:08 crc kubenswrapper[4731]: I1129 07:07:08.004398 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:08 crc kubenswrapper[4731]: I1129 07:07:08.004412 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:08 crc kubenswrapper[4731]: I1129 07:07:08.004433 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:08 crc kubenswrapper[4731]: I1129 07:07:08.004447 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:08Z","lastTransitionTime":"2025-11-29T07:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:08 crc kubenswrapper[4731]: I1129 07:07:08.107522 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:08 crc kubenswrapper[4731]: I1129 07:07:08.107605 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:08 crc kubenswrapper[4731]: I1129 07:07:08.107617 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:08 crc kubenswrapper[4731]: I1129 07:07:08.107641 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:08 crc kubenswrapper[4731]: I1129 07:07:08.107658 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:08Z","lastTransitionTime":"2025-11-29T07:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:08 crc kubenswrapper[4731]: I1129 07:07:08.210839 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:08 crc kubenswrapper[4731]: I1129 07:07:08.210919 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:08 crc kubenswrapper[4731]: I1129 07:07:08.210950 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:08 crc kubenswrapper[4731]: I1129 07:07:08.210971 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:08 crc kubenswrapper[4731]: I1129 07:07:08.210988 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:08Z","lastTransitionTime":"2025-11-29T07:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:08 crc kubenswrapper[4731]: I1129 07:07:08.314932 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:08 crc kubenswrapper[4731]: I1129 07:07:08.314979 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:08 crc kubenswrapper[4731]: I1129 07:07:08.314990 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:08 crc kubenswrapper[4731]: I1129 07:07:08.315008 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:08 crc kubenswrapper[4731]: I1129 07:07:08.315021 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:08Z","lastTransitionTime":"2025-11-29T07:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:08 crc kubenswrapper[4731]: I1129 07:07:08.366086 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x4t5j_7d4585c4-ac4a-4268-b25e-47509c17cfe2/ovnkube-controller/2.log" Nov 29 07:07:08 crc kubenswrapper[4731]: I1129 07:07:08.369422 4731 scope.go:117] "RemoveContainer" containerID="531921ff34861104985a6d3afd284f2df4d69c25a85f1a9fe0e20f15d5f61d31" Nov 29 07:07:08 crc kubenswrapper[4731]: E1129 07:07:08.369613 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-x4t5j_openshift-ovn-kubernetes(7d4585c4-ac4a-4268-b25e-47509c17cfe2)\"" pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" podUID="7d4585c4-ac4a-4268-b25e-47509c17cfe2" Nov 29 07:07:08 crc kubenswrapper[4731]: I1129 07:07:08.389083 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"174eea76-5d67-4e10-9a17-b4efc32676b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a567cf0e6a828fe031138f9f7874d01fe6e034cb814a3c2116c4918ddab71c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6101f1cec3dc38ec587fb109d42b09c353cf5ed6d89c9a2b799eb456b821cf58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5af4e6864674fc088ce1e153b29b66072dc3aae9b7c8c2c5dd3862a505dbac6a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86bdecb59f656fef2662f8fc78d427a4a9816fc211eedca957f20b3103f63a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:08Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:08 crc kubenswrapper[4731]: I1129 07:07:08.406677 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea9d7960c59f71d1f4141843ab22ff9910f442c6a92fd496e9923a11fd19578b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:08Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:08 crc kubenswrapper[4731]: I1129 07:07:08.417410 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:08 crc kubenswrapper[4731]: I1129 07:07:08.417486 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:08 crc kubenswrapper[4731]: I1129 07:07:08.417501 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:08 crc kubenswrapper[4731]: I1129 07:07:08.417539 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:08 crc kubenswrapper[4731]: I1129 07:07:08.417556 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:08Z","lastTransitionTime":"2025-11-29T07:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:08 crc kubenswrapper[4731]: I1129 07:07:08.422506 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n6mtz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dc0aca-1039-4a30-a83e-48bd320d0eae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0b16629d98d0b9d5ede0c57e9befc86af08f3b501ef46c1e17b38a2ba4e366a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2bbjx\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n6mtz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:08Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:08 crc kubenswrapper[4731]: I1129 07:07:08.438203 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5rsbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4026306c62b322aa02e08b8ebd6f9d5d1eaa75e5026c9b1fc4c8d3c1ff77e2a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9z2qj\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5rsbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:08Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:08 crc kubenswrapper[4731]: I1129 07:07:08.457322 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sc4p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46a65d85-a3f6-4c1f-8a87-799ccfb861c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c8d9cccf59a8d03f8346fd5c4403c377376ed7e429a1cd7b9ea0507a491342e\\\",\\\"image\\\":\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e803b8c855802ae1b136acaaf383df0a2f5efccc46c8605813d43a3c7c39a700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e803b8c855802ae1b136acaaf383df0a2f5efccc46c8605813d43a3c7c39a700\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53c810e72af2e060f6378cc0809584254c09ab79d506a8d1b4ff14cc9f005d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53c810e72af2e060f6378cc0809584254c09ab79d506a8d1b4ff14cc9f005d08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db06df6b97d6ad28e3d1df0b237670573e2dc84ad1e4975431d0d6fa74497617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plu
gin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db06df6b97d6ad28e3d1df0b237670573e2dc84ad1e4975431d0d6fa74497617\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e4e1564bb5cd4109016d9716311a4f93f63f7f4f694d1f021390c91495faab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22e4e1564bb5cd4109016d9716311a4f93f63f7f4f694d1f021390c91495faab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/va
r/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76097df468fcbf7cbc2a4d0a4a847310bcc371812fd7601716bb946d2ba43b1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76097df468fcbf7cbc2a4d0a4a847310bcc371812fd7601716bb946d2ba43b1f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2de6a57f63f8b266688eb7d7efef0c3f0fe74a9cf9911709b8c2f6d6cd2f587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\
\\":{\\\"containerID\\\":\\\"cri-o://a2de6a57f63f8b266688eb7d7efef0c3f0fe74a9cf9911709b8c2f6d6cd2f587\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7sc4p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:08Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:08 crc kubenswrapper[4731]: I1129 07:07:08.473339 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2pp9l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"944440c1-51b2-4c49-b5fd-4c024fc33ace\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkwn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkwn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2pp9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:08Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:08 crc 
kubenswrapper[4731]: I1129 07:07:08.489895 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:08Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:08 crc kubenswrapper[4731]: I1129 07:07:08.503415 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da8cd4c4826d5a431134fff8cd78b168bfededf913cc47058413e38aadd335ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-29T07:07:08Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:08 crc kubenswrapper[4731]: I1129 07:07:08.521294 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:08 crc kubenswrapper[4731]: I1129 07:07:08.521352 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:08 crc kubenswrapper[4731]: I1129 07:07:08.521366 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:08 crc kubenswrapper[4731]: I1129 07:07:08.521384 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:08 crc kubenswrapper[4731]: I1129 07:07:08.521396 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:08Z","lastTransitionTime":"2025-11-29T07:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:08 crc kubenswrapper[4731]: I1129 07:07:08.526782 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5198aca-b43e-4b59-8ec1-15b9d1cd4f4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0b6a87bb787bc87ec1a2b21918e754bda6b0f197bfdae20b010e8464cb9e70d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1938ff140a7af83ef53896ed4482dd3cf0a5429675ca28291de0915b8da27e81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b19f3d5d17a1f3552b659627294f3a47d667e2da2d3ee71e44fc876614f2f5a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea4dcd3b0a9fda14855ec9e54caa59fbbd8953c958ad8d0abc6ad50d8a9143e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f488e951e0b022155c26f1b4b73363e8e5b82ab6f14e03f9812fe279ff21d36\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1129 07:06:25.309042 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1031981952/tls.crt::/tmp/serving-cert-1031981952/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764399966\\\\\\\\\\\\\\\" (2025-11-29 07:06:06 +0000 UTC to 2025-12-29 07:06:07 +0000 UTC (now=2025-11-29 07:06:25.308983691 +0000 UTC))\\\\\\\"\\\\nI1129 07:06:25.309145 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1129 07:06:25.309167 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1129 07:06:25.309190 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764399977\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764399977\\\\\\\\\\\\\\\" (2025-11-29 06:06:17 +0000 UTC to 2026-11-29 06:06:17 +0000 UTC (now=2025-11-29 07:06:25.309162626 +0000 UTC))\\\\\\\"\\\\nI1129 07:06:25.309217 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1129 07:06:25.309274 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1129 07:06:25.309309 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1031981952/tls.crt::/tmp/serving-cert-1031981952/tls.key\\\\\\\"\\\\nI1129 07:06:25.309020 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1129 07:06:25.309457 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1129 07:06:25.309507 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1129 07:06:25.311335 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1277bda1e922e493f1c36b64a4584458ab41cee4b1930d2d6409a585be9e3b38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b7d71882ecd45557e02dfe61f5
72da923732714cd4480848340fbe72dabcd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34b7d71882ecd45557e02dfe61f572da923732714cd4480848340fbe72dabcd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:08Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:08 crc kubenswrapper[4731]: I1129 07:07:08.542445 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f17ec67c-91b4-419f-b031-38a828a552a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a859abc8925062e0b6f06edef1a87524357b5115db3c780653a4d378af6ba04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fe7e74083569ac159e34aecb62fd9a2bc89cb67c25d104efa3ecd93b71742b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e50d4319120f4c6445252762298822db75d04cad45eff91b9ee9e82335e0f6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2e329eab7a80200257ef2856155c442452b453c8c9cfa15f790d4688ca74573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://f2e329eab7a80200257ef2856155c442452b453c8c9cfa15f790d4688ca74573\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:02Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:08Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:08 crc kubenswrapper[4731]: I1129 07:07:08.558201 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2302dbb7-38db-4752-a5d0-2d055da3aec3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b12c2e84a5d5a5e4b2dfdc3a2052706d4b969ccd45719e05be8f1dbec5555cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shf4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2ffc93896b04d748f6dfda932202450e20bc7b2
98cc0d79c0c6499ead481d8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shf4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rscr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:08Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:08 crc kubenswrapper[4731]: I1129 07:07:08.582241 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d4585c4-ac4a-4268-b25e-47509c17cfe2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c34e10091af35202da4f97804418c6232aaccd75303ec402ed3f52de19eba30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77f2885a107dcb4bca4dd71970f28cd5c82abd26240a9b47a25bab79d7a6cfca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a3bcebacca0fb7288976faa2ee468f1496339cc6deba0ce36b29d0537d493f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e2099f7c7618e48c7ca7d3a76e9e63e4af75068a72efb7b27b3565c270787f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ec6b9ad02e848a0669ac486da3f2ccc8ca363136fcd40618c45f16e64388bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd71ed13728cbf7ab7ade3184cf8b579a598046c6cccdd7796f757aa7fc1c0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://531921ff34861104985a6d3afd284f2df4d69c25a85f1a9fe0e20f15d5f61d31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://531921ff34861104985a6d3afd284f2df4d69c25a85f1a9fe0e20f15d5f61d31\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:07:07Z\\\",\\\"message\\\":\\\"ps:{GoMap:map[10.217.4.169:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {39432221-5995-412b-967b-35e1a9405ec7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1129 07:07:06.723536 6450 admin_network_policy_controller.go:133] 
Setting up event handlers for Admin Network Policy\\\\nI1129 07:07:06.723750 6450 ovnkube.go:599] Stopped ovnkube\\\\nI1129 07:07:06.723872 6450 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1129 07:07:06.719311 6450 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7d5hj in node crc\\\\nF1129 07:07:06.724013 6450 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify cer\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:07:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-x4t5j_openshift-ovn-kubernetes(7d4585c4-ac4a-4268-b25e-47509c17cfe2)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a6b60ccbba0152ec034f29ebd79fc5f35ee869bcbec1983425f40df246c463c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b769a8a935c8eb447fe825cfb1b4c91be5abbe5992798442096dd3c00ce7c39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b769a8a935c8eb447
fe825cfb1b4c91be5abbe5992798442096dd3c00ce7c39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4t5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:08Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:08 crc kubenswrapper[4731]: I1129 07:07:08.598990 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8tvx8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"719caf85-c94c-4dc2-b28f-f5c4ec29e79e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d765b2fbbea1c5491ee3ac49d3d217921d28d094b6204c0c9ca942f5975f7b8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cfkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8tvx8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:08Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:08 crc kubenswrapper[4731]: I1129 07:07:08.614119 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7d5hj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de2552c-90ca-42ab-94c0-365f2c2380d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca701ec73409c337cc55b1606c0f5def9e370c9c47b6d8f34f05e799ebc3ff36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l44ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cf95f33df0c02101f10f47b6794395211997d2a9741a50b62be363fb5b96dd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l44ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7d5hj\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:08Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:08 crc kubenswrapper[4731]: I1129 07:07:08.623709 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:08 crc kubenswrapper[4731]: I1129 07:07:08.623746 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:08 crc kubenswrapper[4731]: I1129 07:07:08.623755 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:08 crc kubenswrapper[4731]: I1129 07:07:08.623769 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:08 crc kubenswrapper[4731]: I1129 07:07:08.623778 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:08Z","lastTransitionTime":"2025-11-29T07:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:08 crc kubenswrapper[4731]: I1129 07:07:08.631715 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:08Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:08 crc kubenswrapper[4731]: I1129 07:07:08.648950 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:08Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:08 crc kubenswrapper[4731]: I1129 07:07:08.667168 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e77534f53f3d0ba9457f1e42e081f8fa9ddeff238db875686b42dd35fca480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6830f693b439b32e14b4e825fc51d03dddc4834afc7f2fb9c403ebdc706acb7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:08Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:08 crc kubenswrapper[4731]: I1129 07:07:08.726790 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:08 crc kubenswrapper[4731]: I1129 07:07:08.726841 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:08 crc kubenswrapper[4731]: I1129 07:07:08.726854 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:08 crc kubenswrapper[4731]: I1129 07:07:08.726874 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:08 crc kubenswrapper[4731]: I1129 07:07:08.726889 4731 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:08Z","lastTransitionTime":"2025-11-29T07:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:07:08 crc kubenswrapper[4731]: I1129 07:07:08.806208 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2pp9l" Nov 29 07:07:08 crc kubenswrapper[4731]: E1129 07:07:08.806667 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2pp9l" podUID="944440c1-51b2-4c49-b5fd-4c024fc33ace" Nov 29 07:07:08 crc kubenswrapper[4731]: I1129 07:07:08.821225 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Nov 29 07:07:08 crc kubenswrapper[4731]: I1129 07:07:08.829649 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:08 crc kubenswrapper[4731]: I1129 07:07:08.829719 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:08 crc kubenswrapper[4731]: I1129 07:07:08.829738 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:08 crc kubenswrapper[4731]: I1129 07:07:08.829762 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:08 crc kubenswrapper[4731]: I1129 07:07:08.829778 4731 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:08Z","lastTransitionTime":"2025-11-29T07:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:07:08 crc kubenswrapper[4731]: I1129 07:07:08.932921 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:08 crc kubenswrapper[4731]: I1129 07:07:08.932968 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:08 crc kubenswrapper[4731]: I1129 07:07:08.932976 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:08 crc kubenswrapper[4731]: I1129 07:07:08.932999 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:08 crc kubenswrapper[4731]: I1129 07:07:08.933010 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:08Z","lastTransitionTime":"2025-11-29T07:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:09 crc kubenswrapper[4731]: I1129 07:07:09.035887 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:09 crc kubenswrapper[4731]: I1129 07:07:09.035936 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:09 crc kubenswrapper[4731]: I1129 07:07:09.035949 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:09 crc kubenswrapper[4731]: I1129 07:07:09.035970 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:09 crc kubenswrapper[4731]: I1129 07:07:09.035984 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:09Z","lastTransitionTime":"2025-11-29T07:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:09 crc kubenswrapper[4731]: I1129 07:07:09.139408 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:09 crc kubenswrapper[4731]: I1129 07:07:09.139714 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:09 crc kubenswrapper[4731]: I1129 07:07:09.139849 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:09 crc kubenswrapper[4731]: I1129 07:07:09.139983 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:09 crc kubenswrapper[4731]: I1129 07:07:09.140081 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:09Z","lastTransitionTime":"2025-11-29T07:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:09 crc kubenswrapper[4731]: I1129 07:07:09.243818 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:09 crc kubenswrapper[4731]: I1129 07:07:09.243975 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:09 crc kubenswrapper[4731]: I1129 07:07:09.244095 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:09 crc kubenswrapper[4731]: I1129 07:07:09.244150 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:09 crc kubenswrapper[4731]: I1129 07:07:09.244182 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:09Z","lastTransitionTime":"2025-11-29T07:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:09 crc kubenswrapper[4731]: I1129 07:07:09.349318 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:09 crc kubenswrapper[4731]: I1129 07:07:09.349374 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:09 crc kubenswrapper[4731]: I1129 07:07:09.349394 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:09 crc kubenswrapper[4731]: I1129 07:07:09.349414 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:09 crc kubenswrapper[4731]: I1129 07:07:09.349427 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:09Z","lastTransitionTime":"2025-11-29T07:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:09 crc kubenswrapper[4731]: I1129 07:07:09.452605 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:09 crc kubenswrapper[4731]: I1129 07:07:09.452661 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:09 crc kubenswrapper[4731]: I1129 07:07:09.452676 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:09 crc kubenswrapper[4731]: I1129 07:07:09.452699 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:09 crc kubenswrapper[4731]: I1129 07:07:09.452714 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:09Z","lastTransitionTime":"2025-11-29T07:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:09 crc kubenswrapper[4731]: I1129 07:07:09.555068 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:09 crc kubenswrapper[4731]: I1129 07:07:09.555104 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:09 crc kubenswrapper[4731]: I1129 07:07:09.555113 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:09 crc kubenswrapper[4731]: I1129 07:07:09.555131 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:09 crc kubenswrapper[4731]: I1129 07:07:09.555142 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:09Z","lastTransitionTime":"2025-11-29T07:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:09 crc kubenswrapper[4731]: I1129 07:07:09.658738 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:09 crc kubenswrapper[4731]: I1129 07:07:09.658788 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:09 crc kubenswrapper[4731]: I1129 07:07:09.658801 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:09 crc kubenswrapper[4731]: I1129 07:07:09.658819 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:09 crc kubenswrapper[4731]: I1129 07:07:09.658833 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:09Z","lastTransitionTime":"2025-11-29T07:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:09 crc kubenswrapper[4731]: I1129 07:07:09.761735 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:09 crc kubenswrapper[4731]: I1129 07:07:09.761768 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:09 crc kubenswrapper[4731]: I1129 07:07:09.761778 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:09 crc kubenswrapper[4731]: I1129 07:07:09.761796 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:09 crc kubenswrapper[4731]: I1129 07:07:09.761808 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:09Z","lastTransitionTime":"2025-11-29T07:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:07:09 crc kubenswrapper[4731]: I1129 07:07:09.806117 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:07:09 crc kubenswrapper[4731]: I1129 07:07:09.806117 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:07:09 crc kubenswrapper[4731]: E1129 07:07:09.806289 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:07:09 crc kubenswrapper[4731]: I1129 07:07:09.806137 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:07:09 crc kubenswrapper[4731]: E1129 07:07:09.806510 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:07:09 crc kubenswrapper[4731]: E1129 07:07:09.806547 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:07:09 crc kubenswrapper[4731]: I1129 07:07:09.865684 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:09 crc kubenswrapper[4731]: I1129 07:07:09.865736 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:09 crc kubenswrapper[4731]: I1129 07:07:09.865753 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:09 crc kubenswrapper[4731]: I1129 07:07:09.865774 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:09 crc kubenswrapper[4731]: I1129 07:07:09.865790 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:09Z","lastTransitionTime":"2025-11-29T07:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:09 crc kubenswrapper[4731]: I1129 07:07:09.969148 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:09 crc kubenswrapper[4731]: I1129 07:07:09.969213 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:09 crc kubenswrapper[4731]: I1129 07:07:09.969224 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:09 crc kubenswrapper[4731]: I1129 07:07:09.969247 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:09 crc kubenswrapper[4731]: I1129 07:07:09.969258 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:09Z","lastTransitionTime":"2025-11-29T07:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:10 crc kubenswrapper[4731]: I1129 07:07:10.072711 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:10 crc kubenswrapper[4731]: I1129 07:07:10.072752 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:10 crc kubenswrapper[4731]: I1129 07:07:10.072763 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:10 crc kubenswrapper[4731]: I1129 07:07:10.072787 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:10 crc kubenswrapper[4731]: I1129 07:07:10.072807 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:10Z","lastTransitionTime":"2025-11-29T07:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:10 crc kubenswrapper[4731]: I1129 07:07:10.175433 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:10 crc kubenswrapper[4731]: I1129 07:07:10.175506 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:10 crc kubenswrapper[4731]: I1129 07:07:10.175521 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:10 crc kubenswrapper[4731]: I1129 07:07:10.175556 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:10 crc kubenswrapper[4731]: I1129 07:07:10.175669 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:10Z","lastTransitionTime":"2025-11-29T07:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:10 crc kubenswrapper[4731]: I1129 07:07:10.278317 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:10 crc kubenswrapper[4731]: I1129 07:07:10.278360 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:10 crc kubenswrapper[4731]: I1129 07:07:10.278370 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:10 crc kubenswrapper[4731]: I1129 07:07:10.278386 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:10 crc kubenswrapper[4731]: I1129 07:07:10.278396 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:10Z","lastTransitionTime":"2025-11-29T07:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:10 crc kubenswrapper[4731]: I1129 07:07:10.382047 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:10 crc kubenswrapper[4731]: I1129 07:07:10.382098 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:10 crc kubenswrapper[4731]: I1129 07:07:10.382109 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:10 crc kubenswrapper[4731]: I1129 07:07:10.382129 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:10 crc kubenswrapper[4731]: I1129 07:07:10.382142 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:10Z","lastTransitionTime":"2025-11-29T07:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:10 crc kubenswrapper[4731]: I1129 07:07:10.484949 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:10 crc kubenswrapper[4731]: I1129 07:07:10.485000 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:10 crc kubenswrapper[4731]: I1129 07:07:10.485011 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:10 crc kubenswrapper[4731]: I1129 07:07:10.485030 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:10 crc kubenswrapper[4731]: I1129 07:07:10.485053 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:10Z","lastTransitionTime":"2025-11-29T07:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:10 crc kubenswrapper[4731]: I1129 07:07:10.588967 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:10 crc kubenswrapper[4731]: I1129 07:07:10.589048 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:10 crc kubenswrapper[4731]: I1129 07:07:10.589065 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:10 crc kubenswrapper[4731]: I1129 07:07:10.589103 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:10 crc kubenswrapper[4731]: I1129 07:07:10.589122 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:10Z","lastTransitionTime":"2025-11-29T07:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:10 crc kubenswrapper[4731]: I1129 07:07:10.692411 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:10 crc kubenswrapper[4731]: I1129 07:07:10.692469 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:10 crc kubenswrapper[4731]: I1129 07:07:10.692482 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:10 crc kubenswrapper[4731]: I1129 07:07:10.692505 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:10 crc kubenswrapper[4731]: I1129 07:07:10.692518 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:10Z","lastTransitionTime":"2025-11-29T07:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:10 crc kubenswrapper[4731]: I1129 07:07:10.795289 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:10 crc kubenswrapper[4731]: I1129 07:07:10.795335 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:10 crc kubenswrapper[4731]: I1129 07:07:10.795349 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:10 crc kubenswrapper[4731]: I1129 07:07:10.795369 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:10 crc kubenswrapper[4731]: I1129 07:07:10.795390 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:10Z","lastTransitionTime":"2025-11-29T07:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:07:10 crc kubenswrapper[4731]: I1129 07:07:10.806698 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2pp9l" Nov 29 07:07:10 crc kubenswrapper[4731]: E1129 07:07:10.806863 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2pp9l" podUID="944440c1-51b2-4c49-b5fd-4c024fc33ace" Nov 29 07:07:10 crc kubenswrapper[4731]: I1129 07:07:10.897979 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:10 crc kubenswrapper[4731]: I1129 07:07:10.898047 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:10 crc kubenswrapper[4731]: I1129 07:07:10.898056 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:10 crc kubenswrapper[4731]: I1129 07:07:10.898076 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:10 crc kubenswrapper[4731]: I1129 07:07:10.898086 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:10Z","lastTransitionTime":"2025-11-29T07:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:11 crc kubenswrapper[4731]: I1129 07:07:11.000984 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:11 crc kubenswrapper[4731]: I1129 07:07:11.001027 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:11 crc kubenswrapper[4731]: I1129 07:07:11.001038 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:11 crc kubenswrapper[4731]: I1129 07:07:11.001059 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:11 crc kubenswrapper[4731]: I1129 07:07:11.001072 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:11Z","lastTransitionTime":"2025-11-29T07:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:11 crc kubenswrapper[4731]: I1129 07:07:11.104762 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:11 crc kubenswrapper[4731]: I1129 07:07:11.104814 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:11 crc kubenswrapper[4731]: I1129 07:07:11.104847 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:11 crc kubenswrapper[4731]: I1129 07:07:11.104865 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:11 crc kubenswrapper[4731]: I1129 07:07:11.104879 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:11Z","lastTransitionTime":"2025-11-29T07:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:11 crc kubenswrapper[4731]: I1129 07:07:11.208179 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:11 crc kubenswrapper[4731]: I1129 07:07:11.208234 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:11 crc kubenswrapper[4731]: I1129 07:07:11.208245 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:11 crc kubenswrapper[4731]: I1129 07:07:11.208267 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:11 crc kubenswrapper[4731]: I1129 07:07:11.208279 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:11Z","lastTransitionTime":"2025-11-29T07:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:11 crc kubenswrapper[4731]: I1129 07:07:11.310934 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:11 crc kubenswrapper[4731]: I1129 07:07:11.310974 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:11 crc kubenswrapper[4731]: I1129 07:07:11.310983 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:11 crc kubenswrapper[4731]: I1129 07:07:11.311001 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:11 crc kubenswrapper[4731]: I1129 07:07:11.311011 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:11Z","lastTransitionTime":"2025-11-29T07:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:11 crc kubenswrapper[4731]: I1129 07:07:11.413654 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:11 crc kubenswrapper[4731]: I1129 07:07:11.413985 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:11 crc kubenswrapper[4731]: I1129 07:07:11.414119 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:11 crc kubenswrapper[4731]: I1129 07:07:11.414197 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:11 crc kubenswrapper[4731]: I1129 07:07:11.414264 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:11Z","lastTransitionTime":"2025-11-29T07:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:11 crc kubenswrapper[4731]: I1129 07:07:11.517412 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:11 crc kubenswrapper[4731]: I1129 07:07:11.517751 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:11 crc kubenswrapper[4731]: I1129 07:07:11.517874 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:11 crc kubenswrapper[4731]: I1129 07:07:11.517992 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:11 crc kubenswrapper[4731]: I1129 07:07:11.518133 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:11Z","lastTransitionTime":"2025-11-29T07:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:11 crc kubenswrapper[4731]: I1129 07:07:11.621791 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:11 crc kubenswrapper[4731]: I1129 07:07:11.621844 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:11 crc kubenswrapper[4731]: I1129 07:07:11.621857 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:11 crc kubenswrapper[4731]: I1129 07:07:11.621876 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:11 crc kubenswrapper[4731]: I1129 07:07:11.621890 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:11Z","lastTransitionTime":"2025-11-29T07:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:11 crc kubenswrapper[4731]: I1129 07:07:11.724998 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:11 crc kubenswrapper[4731]: I1129 07:07:11.725059 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:11 crc kubenswrapper[4731]: I1129 07:07:11.725072 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:11 crc kubenswrapper[4731]: I1129 07:07:11.725096 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:11 crc kubenswrapper[4731]: I1129 07:07:11.725110 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:11Z","lastTransitionTime":"2025-11-29T07:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:07:11 crc kubenswrapper[4731]: I1129 07:07:11.806807 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:07:11 crc kubenswrapper[4731]: I1129 07:07:11.806872 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:07:11 crc kubenswrapper[4731]: I1129 07:07:11.806928 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:07:11 crc kubenswrapper[4731]: E1129 07:07:11.806988 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:07:11 crc kubenswrapper[4731]: E1129 07:07:11.807061 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:07:11 crc kubenswrapper[4731]: E1129 07:07:11.807228 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:07:11 crc kubenswrapper[4731]: I1129 07:07:11.823199 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:11Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:11 crc kubenswrapper[4731]: I1129 07:07:11.828067 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:11 crc kubenswrapper[4731]: I1129 07:07:11.828190 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:11 crc kubenswrapper[4731]: I1129 07:07:11.828254 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:11 crc kubenswrapper[4731]: I1129 07:07:11.828359 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:11 crc kubenswrapper[4731]: I1129 07:07:11.828430 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:11Z","lastTransitionTime":"2025-11-29T07:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:07:11 crc kubenswrapper[4731]: I1129 07:07:11.837885 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:11Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:11 crc kubenswrapper[4731]: I1129 07:07:11.852346 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e77534f53f3d0ba9457f1e42e081f8fa9ddeff238db875686b42dd35fca480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6830f693b439b32e14b4e825fc51d03dddc4834afc7f2fb9c403ebdc706acb7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:11Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:11 crc kubenswrapper[4731]: I1129 07:07:11.868607 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"174eea76-5d67-4e10-9a17-b4efc32676b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a567cf0e6a828fe031138f9f7874d01fe6e034cb814a3c2116c4918ddab71c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6101f1cec3dc38ec587fb109d42b09c353cf5ed6d89c9a2b799eb456b821cf58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5af4e6864674fc088ce1e153b29b66072dc3aae9b7c8c2c5dd3862a505dbac6a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86bdecb59f656fef2662f8fc78d427a4a9816fc211eedca957f20b3103f63a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:11Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:11 crc kubenswrapper[4731]: I1129 07:07:11.887221 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea9d7960c59f71d1f4141843ab22ff9910f442c6a92fd496e9923a11fd19578b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:11Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:11 crc kubenswrapper[4731]: I1129 07:07:11.909077 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n6mtz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dc0aca-1039-4a30-a83e-48bd320d0eae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0b16629d98d0b9d5ede0c57e9befc86af08f3b501ef46c1e17b38a2ba4e366a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2bbjx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n6mtz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:11Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:11 crc kubenswrapper[4731]: I1129 07:07:11.922468 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5rsbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4026306c62b322aa02e08b8ebd6f9d5d1eaa75e50
26c9b1fc4c8d3c1ff77e2a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubern
etes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9z2qj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5rsbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:11Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:11 crc kubenswrapper[4731]: I1129 07:07:11.930627 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:11 crc kubenswrapper[4731]: I1129 07:07:11.930769 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:11 crc kubenswrapper[4731]: I1129 07:07:11.930868 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:11 crc kubenswrapper[4731]: I1129 07:07:11.930933 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:11 crc kubenswrapper[4731]: I1129 07:07:11.930988 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:11Z","lastTransitionTime":"2025-11-29T07:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:11 crc kubenswrapper[4731]: I1129 07:07:11.941270 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sc4p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46a65d85-a3f6-4c1f-8a87-799ccfb861c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c8d9cccf59a8d03f8346fd5c4403c377376ed7e429a1cd7b9ea0507a491342e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e803b8c855802ae1b136acaaf383df0a2f5efccc46c8605813d43a3c7c39a700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e803b8c855802ae1b136acaaf383df0a2f5efccc46c8605813d43a3c7c39a700\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53c810e72af2e060f6378cc0809584254c09ab79d506a8d1b4ff14cc9f005d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://53c810e72af2e060f6378cc0809584254c09ab79d506a8d1b4ff14cc9f005d08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db06df6b97d6ad28e3d1df0b237670573e2dc84ad1e4975431d0d6fa74497617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db06df6b97d6ad28e3d1df0b237670573e2dc84ad1e4975431d0d6fa74497617\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e4e1564bb5cd4109016d9716311a4f93f63f7f4f694d1f021390c91495faab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22e4e1564bb5cd4109016d9716311a4f93f63f7f4f694d1f021390c91495faab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76097df468fcbf7cbc2a4d0a4a847310bcc371812fd7601716bb946d2ba43b1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76097df468fcbf7cbc2a4d0a4a847310bcc371812fd7601716bb946d2ba43b1f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2de6a57f63f8b266688eb7d7efef0c3f0fe74a9cf9911709b8c2f6d6cd2f587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2de6a57f63f8b266688eb7d7efef0c3f0fe74a9cf9911709b8c2f6d6cd2f587\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7sc4p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:11Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:11 crc kubenswrapper[4731]: I1129 07:07:11.953875 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2pp9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"944440c1-51b2-4c49-b5fd-4c024fc33ace\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkwn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkwn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2pp9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:11Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:11 crc 
kubenswrapper[4731]: I1129 07:07:11.965796 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a05e3005-6e8b-4f70-830b-e7313d4bf967\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e65aa9e951747e7bd3ae1dd6212a34576cd4aa03de1753d6d3f193d4c95ecead\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://5a56ab2b9d47cd0dcd632f98defa5f4bcb711032701ea4ff28701daa43a2dca9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a56ab2b9d47cd0dcd632f98defa5f4bcb711032701ea4ff28701daa43a2dca9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:11Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:11 crc kubenswrapper[4731]: I1129 07:07:11.978947 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:11Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:11 crc kubenswrapper[4731]: I1129 07:07:11.992299 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da8cd4c4826d5a431134fff8cd78b168bfededf913cc47058413e38aadd335ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-29T07:07:11Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:12 crc kubenswrapper[4731]: I1129 07:07:12.008443 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5198aca-b43e-4b59-8ec1-15b9d1cd4f4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0b6a87bb787bc87ec1a2b21918e754bda6b0f197bfdae20b010e8464cb9e70d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\"
:\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1938ff140a7af83ef53896ed4482dd3cf0a5429675ca28291de0915b8da27e81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b19f3d5d17a1f3552b659627294f3a47d667e2da2d3ee71e44fc876614f2f5a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea4dcd3b0a9fda14855ec9e54caa59fbbd8953c958ad8d0abc6ad50d8a9143e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiser
ver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f488e951e0b022155c26f1b4b73363e8e5b82ab6f14e03f9812fe279ff21d36\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1129 07:06:25.309042 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1031981952/tls.crt::/tmp/serving-cert-1031981952/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764399966\\\\\\\\\\\\\\\" (2025-11-29 07:06:06 +0000 UTC to 2025-12-29 07:06:07 +0000 UTC (now=2025-11-29 07:06:25.308983691 +0000 UTC))\\\\\\\"\\\\nI1129 07:06:25.309145 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1129 07:06:25.309167 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1129 07:06:25.309190 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764399977\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764399977\\\\\\\\\\\\\\\" (2025-11-29 06:06:17 +0000 UTC to 2026-11-29 06:06:17 +0000 UTC (now=2025-11-29 07:06:25.309162626 +0000 UTC))\\\\\\\"\\\\nI1129 07:06:25.309217 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1129 07:06:25.309274 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1129 07:06:25.309309 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1031981952/tls.crt::/tmp/serving-cert-1031981952/tls.key\\\\\\\"\\\\nI1129 07:06:25.309020 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1129 07:06:25.309457 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1129 07:06:25.309507 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1129 07:06:25.311335 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1277bda1e922e493f1c36b64a4584458ab41cee4b1930d2d6409a585be9e3b38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b7d71882ecd45557e02dfe61f5
72da923732714cd4480848340fbe72dabcd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34b7d71882ecd45557e02dfe61f572da923732714cd4480848340fbe72dabcd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:12Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:12 crc kubenswrapper[4731]: I1129 07:07:12.021653 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f17ec67c-91b4-419f-b031-38a828a552a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a859abc8925062e0b6f06edef1a87524357b5115db3c780653a4d378af6ba04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fe7e74083569ac159e34aecb62fd9a2bc89cb67c25d104efa3ecd93b71742b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e50d4319120f4c6445252762298822db75d04cad45eff91b9ee9e82335e0f6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2e329eab7a80200257ef2856155c442452b453c8c9cfa15f790d4688ca74573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://f2e329eab7a80200257ef2856155c442452b453c8c9cfa15f790d4688ca74573\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:02Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:12Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:12 crc kubenswrapper[4731]: I1129 07:07:12.033924 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:12 crc kubenswrapper[4731]: I1129 07:07:12.034027 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:12 crc kubenswrapper[4731]: I1129 07:07:12.034106 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:12 crc kubenswrapper[4731]: I1129 07:07:12.034177 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:12 crc kubenswrapper[4731]: I1129 07:07:12.034238 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:12Z","lastTransitionTime":"2025-11-29T07:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:12 crc kubenswrapper[4731]: I1129 07:07:12.039440 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2302dbb7-38db-4752-a5d0-2d055da3aec3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b12c2e84a5d5a5e4b2dfdc3a2052706d4b969ccd45719e05be8f1dbec5555cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shf4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2ffc93896b04d748f6dfda932202450e20bc7b298cc0d79c0c6499ead481d8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shf4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rscr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:12Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:12 crc kubenswrapper[4731]: I1129 07:07:12.064862 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d4585c4-ac4a-4268-b25e-47509c17cfe2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c34e10091af35202da4f97804418c6232aaccd75303ec402ed3f52de19eba30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77f2885a107dcb4bca4dd71970f28cd5c82abd26240a9b47a25bab79d7a6cfca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a3bcebacca0fb7288976faa2ee468f1496339cc6deba0ce36b29d0537d493f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e2099f7c7618e48c7ca7d3a76e9e63e4af75068a72efb7b27b3565c270787f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ec6b9ad02e848a0669ac486da3f2ccc8ca363136fcd40618c45f16e64388bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd71ed13728cbf7ab7ade3184cf8b579a598046c6cccdd7796f757aa7fc1c0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://531921ff34861104985a6d3afd284f2df4d69c25a85f1a9fe0e20f15d5f61d31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://531921ff34861104985a6d3afd284f2df4d69c25a85f1a9fe0e20f15d5f61d31\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:07:07Z\\\",\\\"message\\\":\\\"ps:{GoMap:map[10.217.4.169:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {39432221-5995-412b-967b-35e1a9405ec7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1129 07:07:06.723536 6450 admin_network_policy_controller.go:133] 
Setting up event handlers for Admin Network Policy\\\\nI1129 07:07:06.723750 6450 ovnkube.go:599] Stopped ovnkube\\\\nI1129 07:07:06.723872 6450 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1129 07:07:06.719311 6450 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7d5hj in node crc\\\\nF1129 07:07:06.724013 6450 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify cer\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:07:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-x4t5j_openshift-ovn-kubernetes(7d4585c4-ac4a-4268-b25e-47509c17cfe2)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a6b60ccbba0152ec034f29ebd79fc5f35ee869bcbec1983425f40df246c463c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b769a8a935c8eb447fe825cfb1b4c91be5abbe5992798442096dd3c00ce7c39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b769a8a935c8eb447
fe825cfb1b4c91be5abbe5992798442096dd3c00ce7c39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4t5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:12Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:12 crc kubenswrapper[4731]: I1129 07:07:12.077871 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8tvx8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"719caf85-c94c-4dc2-b28f-f5c4ec29e79e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d765b2fbbea1c5491ee3ac49d3d217921d28d094b6204c0c9ca942f5975f7b8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cfkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8tvx8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:12Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:12 crc kubenswrapper[4731]: I1129 07:07:12.090918 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7d5hj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de2552c-90ca-42ab-94c0-365f2c2380d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca701ec73409c337cc55b1606c0f5def9e370c9c47b6d8f34f05e799ebc3ff36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l44ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cf95f33df0c02101f10f47b6794395211997d2a9741a50b62be363fb5b96dd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l44ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7d5hj\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:12Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:12 crc kubenswrapper[4731]: I1129 07:07:12.138033 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:12 crc kubenswrapper[4731]: I1129 07:07:12.138079 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:12 crc kubenswrapper[4731]: I1129 07:07:12.138090 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:12 crc kubenswrapper[4731]: I1129 07:07:12.138109 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:12 crc kubenswrapper[4731]: I1129 07:07:12.138119 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:12Z","lastTransitionTime":"2025-11-29T07:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:12 crc kubenswrapper[4731]: I1129 07:07:12.241149 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:12 crc kubenswrapper[4731]: I1129 07:07:12.241646 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:12 crc kubenswrapper[4731]: I1129 07:07:12.241732 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:12 crc kubenswrapper[4731]: I1129 07:07:12.241833 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:12 crc kubenswrapper[4731]: I1129 07:07:12.241930 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:12Z","lastTransitionTime":"2025-11-29T07:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:12 crc kubenswrapper[4731]: I1129 07:07:12.345141 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:12 crc kubenswrapper[4731]: I1129 07:07:12.345948 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:12 crc kubenswrapper[4731]: I1129 07:07:12.346092 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:12 crc kubenswrapper[4731]: I1129 07:07:12.346209 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:12 crc kubenswrapper[4731]: I1129 07:07:12.346296 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:12Z","lastTransitionTime":"2025-11-29T07:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:12 crc kubenswrapper[4731]: I1129 07:07:12.448975 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:12 crc kubenswrapper[4731]: I1129 07:07:12.449080 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:12 crc kubenswrapper[4731]: I1129 07:07:12.449090 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:12 crc kubenswrapper[4731]: I1129 07:07:12.449107 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:12 crc kubenswrapper[4731]: I1129 07:07:12.449120 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:12Z","lastTransitionTime":"2025-11-29T07:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:12 crc kubenswrapper[4731]: I1129 07:07:12.552235 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:12 crc kubenswrapper[4731]: I1129 07:07:12.552285 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:12 crc kubenswrapper[4731]: I1129 07:07:12.552297 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:12 crc kubenswrapper[4731]: I1129 07:07:12.552315 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:12 crc kubenswrapper[4731]: I1129 07:07:12.552327 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:12Z","lastTransitionTime":"2025-11-29T07:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:12 crc kubenswrapper[4731]: I1129 07:07:12.655549 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:12 crc kubenswrapper[4731]: I1129 07:07:12.655647 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:12 crc kubenswrapper[4731]: I1129 07:07:12.655665 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:12 crc kubenswrapper[4731]: I1129 07:07:12.655690 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:12 crc kubenswrapper[4731]: I1129 07:07:12.655708 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:12Z","lastTransitionTime":"2025-11-29T07:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:12 crc kubenswrapper[4731]: I1129 07:07:12.758969 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:12 crc kubenswrapper[4731]: I1129 07:07:12.759008 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:12 crc kubenswrapper[4731]: I1129 07:07:12.759020 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:12 crc kubenswrapper[4731]: I1129 07:07:12.759035 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:12 crc kubenswrapper[4731]: I1129 07:07:12.759049 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:12Z","lastTransitionTime":"2025-11-29T07:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:07:12 crc kubenswrapper[4731]: I1129 07:07:12.806607 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2pp9l" Nov 29 07:07:12 crc kubenswrapper[4731]: E1129 07:07:12.806830 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2pp9l" podUID="944440c1-51b2-4c49-b5fd-4c024fc33ace" Nov 29 07:07:12 crc kubenswrapper[4731]: I1129 07:07:12.862295 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:12 crc kubenswrapper[4731]: I1129 07:07:12.862350 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:12 crc kubenswrapper[4731]: I1129 07:07:12.862362 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:12 crc kubenswrapper[4731]: I1129 07:07:12.862378 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:12 crc kubenswrapper[4731]: I1129 07:07:12.862388 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:12Z","lastTransitionTime":"2025-11-29T07:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:12 crc kubenswrapper[4731]: I1129 07:07:12.965967 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:12 crc kubenswrapper[4731]: I1129 07:07:12.966015 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:12 crc kubenswrapper[4731]: I1129 07:07:12.966026 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:12 crc kubenswrapper[4731]: I1129 07:07:12.966045 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:12 crc kubenswrapper[4731]: I1129 07:07:12.966057 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:12Z","lastTransitionTime":"2025-11-29T07:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:13 crc kubenswrapper[4731]: I1129 07:07:13.069197 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:13 crc kubenswrapper[4731]: I1129 07:07:13.069238 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:13 crc kubenswrapper[4731]: I1129 07:07:13.069248 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:13 crc kubenswrapper[4731]: I1129 07:07:13.069265 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:13 crc kubenswrapper[4731]: I1129 07:07:13.069275 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:13Z","lastTransitionTime":"2025-11-29T07:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:13 crc kubenswrapper[4731]: I1129 07:07:13.106514 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:13 crc kubenswrapper[4731]: I1129 07:07:13.106613 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:13 crc kubenswrapper[4731]: I1129 07:07:13.106629 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:13 crc kubenswrapper[4731]: I1129 07:07:13.106652 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:13 crc kubenswrapper[4731]: I1129 07:07:13.106664 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:13Z","lastTransitionTime":"2025-11-29T07:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:13 crc kubenswrapper[4731]: E1129 07:07:13.122684 4731 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:07:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:07:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:07:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:07:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5aaf0a18-6c01-4835-aaaa-2edfd1f90942\\\",\\\"systemUUID\\\":\\\"f3d115a6-d015-4b84-85ef-26fa0172b441\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:13Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:13 crc kubenswrapper[4731]: I1129 07:07:13.128039 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:13 crc kubenswrapper[4731]: I1129 07:07:13.128095 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:13 crc kubenswrapper[4731]: I1129 07:07:13.128110 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:13 crc kubenswrapper[4731]: I1129 07:07:13.128133 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:13 crc kubenswrapper[4731]: I1129 07:07:13.128147 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:13Z","lastTransitionTime":"2025-11-29T07:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:13 crc kubenswrapper[4731]: E1129 07:07:13.144564 4731 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:07:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:07:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:07:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:07:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5aaf0a18-6c01-4835-aaaa-2edfd1f90942\\\",\\\"systemUUID\\\":\\\"f3d115a6-d015-4b84-85ef-26fa0172b441\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:13Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:13 crc kubenswrapper[4731]: I1129 07:07:13.150263 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:13 crc kubenswrapper[4731]: I1129 07:07:13.150330 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:13 crc kubenswrapper[4731]: I1129 07:07:13.150344 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:13 crc kubenswrapper[4731]: I1129 07:07:13.150372 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:13 crc kubenswrapper[4731]: I1129 07:07:13.150388 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:13Z","lastTransitionTime":"2025-11-29T07:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:13 crc kubenswrapper[4731]: E1129 07:07:13.167950 4731 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:07:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:07:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:07:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:07:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5aaf0a18-6c01-4835-aaaa-2edfd1f90942\\\",\\\"systemUUID\\\":\\\"f3d115a6-d015-4b84-85ef-26fa0172b441\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:13Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:13 crc kubenswrapper[4731]: I1129 07:07:13.173232 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:13 crc kubenswrapper[4731]: I1129 07:07:13.173302 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:13 crc kubenswrapper[4731]: I1129 07:07:13.173318 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:13 crc kubenswrapper[4731]: I1129 07:07:13.173342 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:13 crc kubenswrapper[4731]: I1129 07:07:13.173366 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:13Z","lastTransitionTime":"2025-11-29T07:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:13 crc kubenswrapper[4731]: E1129 07:07:13.188682 4731 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:07:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:07:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:07:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:07:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5aaf0a18-6c01-4835-aaaa-2edfd1f90942\\\",\\\"systemUUID\\\":\\\"f3d115a6-d015-4b84-85ef-26fa0172b441\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:13Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:13 crc kubenswrapper[4731]: I1129 07:07:13.194258 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:13 crc kubenswrapper[4731]: I1129 07:07:13.194351 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:13 crc kubenswrapper[4731]: I1129 07:07:13.194367 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:13 crc kubenswrapper[4731]: I1129 07:07:13.194387 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:13 crc kubenswrapper[4731]: I1129 07:07:13.194399 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:13Z","lastTransitionTime":"2025-11-29T07:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:13 crc kubenswrapper[4731]: E1129 07:07:13.210399 4731 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:07:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:07:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:07:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:07:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5aaf0a18-6c01-4835-aaaa-2edfd1f90942\\\",\\\"systemUUID\\\":\\\"f3d115a6-d015-4b84-85ef-26fa0172b441\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:13Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:13 crc kubenswrapper[4731]: E1129 07:07:13.210666 4731 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 29 07:07:13 crc kubenswrapper[4731]: I1129 07:07:13.212862 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:13 crc kubenswrapper[4731]: I1129 07:07:13.212906 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:13 crc kubenswrapper[4731]: I1129 07:07:13.212917 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:13 crc kubenswrapper[4731]: I1129 07:07:13.212935 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:13 crc kubenswrapper[4731]: I1129 07:07:13.212951 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:13Z","lastTransitionTime":"2025-11-29T07:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:13 crc kubenswrapper[4731]: I1129 07:07:13.316051 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:13 crc kubenswrapper[4731]: I1129 07:07:13.316120 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:13 crc kubenswrapper[4731]: I1129 07:07:13.316132 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:13 crc kubenswrapper[4731]: I1129 07:07:13.316154 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:13 crc kubenswrapper[4731]: I1129 07:07:13.316169 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:13Z","lastTransitionTime":"2025-11-29T07:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:13 crc kubenswrapper[4731]: I1129 07:07:13.418895 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:13 crc kubenswrapper[4731]: I1129 07:07:13.418938 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:13 crc kubenswrapper[4731]: I1129 07:07:13.418953 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:13 crc kubenswrapper[4731]: I1129 07:07:13.418971 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:13 crc kubenswrapper[4731]: I1129 07:07:13.418984 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:13Z","lastTransitionTime":"2025-11-29T07:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:13 crc kubenswrapper[4731]: I1129 07:07:13.522250 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:13 crc kubenswrapper[4731]: I1129 07:07:13.522313 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:13 crc kubenswrapper[4731]: I1129 07:07:13.522327 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:13 crc kubenswrapper[4731]: I1129 07:07:13.522345 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:13 crc kubenswrapper[4731]: I1129 07:07:13.522358 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:13Z","lastTransitionTime":"2025-11-29T07:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:13 crc kubenswrapper[4731]: I1129 07:07:13.626260 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:13 crc kubenswrapper[4731]: I1129 07:07:13.626328 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:13 crc kubenswrapper[4731]: I1129 07:07:13.626340 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:13 crc kubenswrapper[4731]: I1129 07:07:13.626361 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:13 crc kubenswrapper[4731]: I1129 07:07:13.626375 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:13Z","lastTransitionTime":"2025-11-29T07:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:13 crc kubenswrapper[4731]: I1129 07:07:13.729837 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:13 crc kubenswrapper[4731]: I1129 07:07:13.729924 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:13 crc kubenswrapper[4731]: I1129 07:07:13.729937 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:13 crc kubenswrapper[4731]: I1129 07:07:13.730000 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:13 crc kubenswrapper[4731]: I1129 07:07:13.730013 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:13Z","lastTransitionTime":"2025-11-29T07:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:07:13 crc kubenswrapper[4731]: I1129 07:07:13.806931 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:07:13 crc kubenswrapper[4731]: I1129 07:07:13.806975 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:07:13 crc kubenswrapper[4731]: E1129 07:07:13.807831 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:07:13 crc kubenswrapper[4731]: I1129 07:07:13.807035 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:07:13 crc kubenswrapper[4731]: E1129 07:07:13.808075 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:07:13 crc kubenswrapper[4731]: E1129 07:07:13.807998 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:07:13 crc kubenswrapper[4731]: I1129 07:07:13.833172 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:13 crc kubenswrapper[4731]: I1129 07:07:13.833254 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:13 crc kubenswrapper[4731]: I1129 07:07:13.833267 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:13 crc kubenswrapper[4731]: I1129 07:07:13.833287 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:13 crc kubenswrapper[4731]: I1129 07:07:13.833300 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:13Z","lastTransitionTime":"2025-11-29T07:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:13 crc kubenswrapper[4731]: I1129 07:07:13.936066 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:13 crc kubenswrapper[4731]: I1129 07:07:13.936123 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:13 crc kubenswrapper[4731]: I1129 07:07:13.936136 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:13 crc kubenswrapper[4731]: I1129 07:07:13.936157 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:13 crc kubenswrapper[4731]: I1129 07:07:13.936168 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:13Z","lastTransitionTime":"2025-11-29T07:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:14 crc kubenswrapper[4731]: I1129 07:07:14.039268 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:14 crc kubenswrapper[4731]: I1129 07:07:14.039326 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:14 crc kubenswrapper[4731]: I1129 07:07:14.039336 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:14 crc kubenswrapper[4731]: I1129 07:07:14.039362 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:14 crc kubenswrapper[4731]: I1129 07:07:14.039376 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:14Z","lastTransitionTime":"2025-11-29T07:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:14 crc kubenswrapper[4731]: I1129 07:07:14.143111 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:14 crc kubenswrapper[4731]: I1129 07:07:14.143190 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:14 crc kubenswrapper[4731]: I1129 07:07:14.143208 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:14 crc kubenswrapper[4731]: I1129 07:07:14.143234 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:14 crc kubenswrapper[4731]: I1129 07:07:14.143250 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:14Z","lastTransitionTime":"2025-11-29T07:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:14 crc kubenswrapper[4731]: I1129 07:07:14.246156 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:14 crc kubenswrapper[4731]: I1129 07:07:14.246213 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:14 crc kubenswrapper[4731]: I1129 07:07:14.246227 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:14 crc kubenswrapper[4731]: I1129 07:07:14.246249 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:14 crc kubenswrapper[4731]: I1129 07:07:14.246269 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:14Z","lastTransitionTime":"2025-11-29T07:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:14 crc kubenswrapper[4731]: I1129 07:07:14.349488 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:14 crc kubenswrapper[4731]: I1129 07:07:14.349587 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:14 crc kubenswrapper[4731]: I1129 07:07:14.349601 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:14 crc kubenswrapper[4731]: I1129 07:07:14.349628 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:14 crc kubenswrapper[4731]: I1129 07:07:14.349648 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:14Z","lastTransitionTime":"2025-11-29T07:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:14 crc kubenswrapper[4731]: I1129 07:07:14.452345 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:14 crc kubenswrapper[4731]: I1129 07:07:14.452389 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:14 crc kubenswrapper[4731]: I1129 07:07:14.452399 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:14 crc kubenswrapper[4731]: I1129 07:07:14.452416 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:14 crc kubenswrapper[4731]: I1129 07:07:14.452429 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:14Z","lastTransitionTime":"2025-11-29T07:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:14 crc kubenswrapper[4731]: I1129 07:07:14.555548 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:14 crc kubenswrapper[4731]: I1129 07:07:14.555640 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:14 crc kubenswrapper[4731]: I1129 07:07:14.555651 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:14 crc kubenswrapper[4731]: I1129 07:07:14.555672 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:14 crc kubenswrapper[4731]: I1129 07:07:14.555683 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:14Z","lastTransitionTime":"2025-11-29T07:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:14 crc kubenswrapper[4731]: I1129 07:07:14.658437 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:14 crc kubenswrapper[4731]: I1129 07:07:14.658486 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:14 crc kubenswrapper[4731]: I1129 07:07:14.658559 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:14 crc kubenswrapper[4731]: I1129 07:07:14.658602 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:14 crc kubenswrapper[4731]: I1129 07:07:14.658616 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:14Z","lastTransitionTime":"2025-11-29T07:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:14 crc kubenswrapper[4731]: I1129 07:07:14.761968 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:14 crc kubenswrapper[4731]: I1129 07:07:14.762041 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:14 crc kubenswrapper[4731]: I1129 07:07:14.762064 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:14 crc kubenswrapper[4731]: I1129 07:07:14.762089 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:14 crc kubenswrapper[4731]: I1129 07:07:14.762101 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:14Z","lastTransitionTime":"2025-11-29T07:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:07:14 crc kubenswrapper[4731]: I1129 07:07:14.806230 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2pp9l" Nov 29 07:07:14 crc kubenswrapper[4731]: E1129 07:07:14.806428 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2pp9l" podUID="944440c1-51b2-4c49-b5fd-4c024fc33ace" Nov 29 07:07:14 crc kubenswrapper[4731]: I1129 07:07:14.865737 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:14 crc kubenswrapper[4731]: I1129 07:07:14.865793 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:14 crc kubenswrapper[4731]: I1129 07:07:14.865809 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:14 crc kubenswrapper[4731]: I1129 07:07:14.865827 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:14 crc kubenswrapper[4731]: I1129 07:07:14.865841 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:14Z","lastTransitionTime":"2025-11-29T07:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:14 crc kubenswrapper[4731]: I1129 07:07:14.968915 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:14 crc kubenswrapper[4731]: I1129 07:07:14.968961 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:14 crc kubenswrapper[4731]: I1129 07:07:14.968975 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:14 crc kubenswrapper[4731]: I1129 07:07:14.968994 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:14 crc kubenswrapper[4731]: I1129 07:07:14.969008 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:14Z","lastTransitionTime":"2025-11-29T07:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:15 crc kubenswrapper[4731]: I1129 07:07:15.072025 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:15 crc kubenswrapper[4731]: I1129 07:07:15.072074 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:15 crc kubenswrapper[4731]: I1129 07:07:15.072085 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:15 crc kubenswrapper[4731]: I1129 07:07:15.072103 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:15 crc kubenswrapper[4731]: I1129 07:07:15.072119 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:15Z","lastTransitionTime":"2025-11-29T07:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:15 crc kubenswrapper[4731]: I1129 07:07:15.174353 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:15 crc kubenswrapper[4731]: I1129 07:07:15.174389 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:15 crc kubenswrapper[4731]: I1129 07:07:15.174398 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:15 crc kubenswrapper[4731]: I1129 07:07:15.174413 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:15 crc kubenswrapper[4731]: I1129 07:07:15.174424 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:15Z","lastTransitionTime":"2025-11-29T07:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:15 crc kubenswrapper[4731]: I1129 07:07:15.277160 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:15 crc kubenswrapper[4731]: I1129 07:07:15.277209 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:15 crc kubenswrapper[4731]: I1129 07:07:15.277221 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:15 crc kubenswrapper[4731]: I1129 07:07:15.277244 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:15 crc kubenswrapper[4731]: I1129 07:07:15.277261 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:15Z","lastTransitionTime":"2025-11-29T07:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:15 crc kubenswrapper[4731]: I1129 07:07:15.382151 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:15 crc kubenswrapper[4731]: I1129 07:07:15.382227 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:15 crc kubenswrapper[4731]: I1129 07:07:15.382245 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:15 crc kubenswrapper[4731]: I1129 07:07:15.382266 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:15 crc kubenswrapper[4731]: I1129 07:07:15.382278 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:15Z","lastTransitionTime":"2025-11-29T07:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:15 crc kubenswrapper[4731]: I1129 07:07:15.484888 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:15 crc kubenswrapper[4731]: I1129 07:07:15.484932 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:15 crc kubenswrapper[4731]: I1129 07:07:15.484947 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:15 crc kubenswrapper[4731]: I1129 07:07:15.484966 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:15 crc kubenswrapper[4731]: I1129 07:07:15.484978 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:15Z","lastTransitionTime":"2025-11-29T07:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:15 crc kubenswrapper[4731]: I1129 07:07:15.587563 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:15 crc kubenswrapper[4731]: I1129 07:07:15.587623 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:15 crc kubenswrapper[4731]: I1129 07:07:15.587639 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:15 crc kubenswrapper[4731]: I1129 07:07:15.587660 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:15 crc kubenswrapper[4731]: I1129 07:07:15.587671 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:15Z","lastTransitionTime":"2025-11-29T07:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:15 crc kubenswrapper[4731]: I1129 07:07:15.689957 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:15 crc kubenswrapper[4731]: I1129 07:07:15.690031 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:15 crc kubenswrapper[4731]: I1129 07:07:15.690044 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:15 crc kubenswrapper[4731]: I1129 07:07:15.690065 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:15 crc kubenswrapper[4731]: I1129 07:07:15.690085 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:15Z","lastTransitionTime":"2025-11-29T07:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:15 crc kubenswrapper[4731]: I1129 07:07:15.793187 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:15 crc kubenswrapper[4731]: I1129 07:07:15.793256 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:15 crc kubenswrapper[4731]: I1129 07:07:15.793266 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:15 crc kubenswrapper[4731]: I1129 07:07:15.793285 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:15 crc kubenswrapper[4731]: I1129 07:07:15.793297 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:15Z","lastTransitionTime":"2025-11-29T07:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:07:15 crc kubenswrapper[4731]: I1129 07:07:15.806416 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:07:15 crc kubenswrapper[4731]: I1129 07:07:15.806537 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:07:15 crc kubenswrapper[4731]: E1129 07:07:15.806550 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:07:15 crc kubenswrapper[4731]: I1129 07:07:15.806705 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:07:15 crc kubenswrapper[4731]: E1129 07:07:15.807014 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:07:15 crc kubenswrapper[4731]: E1129 07:07:15.806916 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:07:15 crc kubenswrapper[4731]: I1129 07:07:15.896407 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:15 crc kubenswrapper[4731]: I1129 07:07:15.896463 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:15 crc kubenswrapper[4731]: I1129 07:07:15.896475 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:15 crc kubenswrapper[4731]: I1129 07:07:15.896496 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:15 crc kubenswrapper[4731]: I1129 07:07:15.896508 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:15Z","lastTransitionTime":"2025-11-29T07:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:15 crc kubenswrapper[4731]: I1129 07:07:15.999661 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:15 crc kubenswrapper[4731]: I1129 07:07:15.999713 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:15 crc kubenswrapper[4731]: I1129 07:07:15.999722 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:15 crc kubenswrapper[4731]: I1129 07:07:15.999739 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:16 crc kubenswrapper[4731]: I1129 07:07:15.999750 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:15Z","lastTransitionTime":"2025-11-29T07:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:16 crc kubenswrapper[4731]: I1129 07:07:16.104546 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:16 crc kubenswrapper[4731]: I1129 07:07:16.104627 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:16 crc kubenswrapper[4731]: I1129 07:07:16.104641 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:16 crc kubenswrapper[4731]: I1129 07:07:16.104663 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:16 crc kubenswrapper[4731]: I1129 07:07:16.104680 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:16Z","lastTransitionTime":"2025-11-29T07:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:16 crc kubenswrapper[4731]: I1129 07:07:16.208040 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:16 crc kubenswrapper[4731]: I1129 07:07:16.208084 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:16 crc kubenswrapper[4731]: I1129 07:07:16.208097 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:16 crc kubenswrapper[4731]: I1129 07:07:16.208116 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:16 crc kubenswrapper[4731]: I1129 07:07:16.208128 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:16Z","lastTransitionTime":"2025-11-29T07:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:16 crc kubenswrapper[4731]: I1129 07:07:16.310375 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:16 crc kubenswrapper[4731]: I1129 07:07:16.310417 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:16 crc kubenswrapper[4731]: I1129 07:07:16.310428 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:16 crc kubenswrapper[4731]: I1129 07:07:16.310447 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:16 crc kubenswrapper[4731]: I1129 07:07:16.310466 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:16Z","lastTransitionTime":"2025-11-29T07:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:16 crc kubenswrapper[4731]: I1129 07:07:16.413426 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:16 crc kubenswrapper[4731]: I1129 07:07:16.413502 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:16 crc kubenswrapper[4731]: I1129 07:07:16.413519 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:16 crc kubenswrapper[4731]: I1129 07:07:16.413541 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:16 crc kubenswrapper[4731]: I1129 07:07:16.413555 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:16Z","lastTransitionTime":"2025-11-29T07:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:16 crc kubenswrapper[4731]: I1129 07:07:16.516039 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:16 crc kubenswrapper[4731]: I1129 07:07:16.516081 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:16 crc kubenswrapper[4731]: I1129 07:07:16.516092 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:16 crc kubenswrapper[4731]: I1129 07:07:16.516109 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:16 crc kubenswrapper[4731]: I1129 07:07:16.516121 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:16Z","lastTransitionTime":"2025-11-29T07:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:16 crc kubenswrapper[4731]: I1129 07:07:16.619023 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:16 crc kubenswrapper[4731]: I1129 07:07:16.619091 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:16 crc kubenswrapper[4731]: I1129 07:07:16.619100 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:16 crc kubenswrapper[4731]: I1129 07:07:16.619119 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:16 crc kubenswrapper[4731]: I1129 07:07:16.619134 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:16Z","lastTransitionTime":"2025-11-29T07:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:16 crc kubenswrapper[4731]: I1129 07:07:16.722994 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:16 crc kubenswrapper[4731]: I1129 07:07:16.723059 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:16 crc kubenswrapper[4731]: I1129 07:07:16.723077 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:16 crc kubenswrapper[4731]: I1129 07:07:16.723108 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:16 crc kubenswrapper[4731]: I1129 07:07:16.723128 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:16Z","lastTransitionTime":"2025-11-29T07:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:07:16 crc kubenswrapper[4731]: I1129 07:07:16.806024 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2pp9l" Nov 29 07:07:16 crc kubenswrapper[4731]: E1129 07:07:16.806220 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2pp9l" podUID="944440c1-51b2-4c49-b5fd-4c024fc33ace" Nov 29 07:07:16 crc kubenswrapper[4731]: I1129 07:07:16.826100 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:16 crc kubenswrapper[4731]: I1129 07:07:16.826223 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:16 crc kubenswrapper[4731]: I1129 07:07:16.826262 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:16 crc kubenswrapper[4731]: I1129 07:07:16.826301 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:16 crc kubenswrapper[4731]: I1129 07:07:16.826328 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:16Z","lastTransitionTime":"2025-11-29T07:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:16 crc kubenswrapper[4731]: I1129 07:07:16.929464 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:16 crc kubenswrapper[4731]: I1129 07:07:16.929533 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:16 crc kubenswrapper[4731]: I1129 07:07:16.929545 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:16 crc kubenswrapper[4731]: I1129 07:07:16.929586 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:16 crc kubenswrapper[4731]: I1129 07:07:16.929598 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:16Z","lastTransitionTime":"2025-11-29T07:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:17 crc kubenswrapper[4731]: I1129 07:07:17.032360 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:17 crc kubenswrapper[4731]: I1129 07:07:17.032425 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:17 crc kubenswrapper[4731]: I1129 07:07:17.032436 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:17 crc kubenswrapper[4731]: I1129 07:07:17.032453 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:17 crc kubenswrapper[4731]: I1129 07:07:17.032462 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:17Z","lastTransitionTime":"2025-11-29T07:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:17 crc kubenswrapper[4731]: I1129 07:07:17.135778 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:17 crc kubenswrapper[4731]: I1129 07:07:17.135844 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:17 crc kubenswrapper[4731]: I1129 07:07:17.135857 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:17 crc kubenswrapper[4731]: I1129 07:07:17.135878 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:17 crc kubenswrapper[4731]: I1129 07:07:17.135891 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:17Z","lastTransitionTime":"2025-11-29T07:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:17 crc kubenswrapper[4731]: I1129 07:07:17.239657 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:17 crc kubenswrapper[4731]: I1129 07:07:17.239712 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:17 crc kubenswrapper[4731]: I1129 07:07:17.239728 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:17 crc kubenswrapper[4731]: I1129 07:07:17.239796 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:17 crc kubenswrapper[4731]: I1129 07:07:17.239815 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:17Z","lastTransitionTime":"2025-11-29T07:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:17 crc kubenswrapper[4731]: I1129 07:07:17.342998 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:17 crc kubenswrapper[4731]: I1129 07:07:17.343046 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:17 crc kubenswrapper[4731]: I1129 07:07:17.343058 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:17 crc kubenswrapper[4731]: I1129 07:07:17.343074 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:17 crc kubenswrapper[4731]: I1129 07:07:17.343091 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:17Z","lastTransitionTime":"2025-11-29T07:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:17 crc kubenswrapper[4731]: I1129 07:07:17.445514 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:17 crc kubenswrapper[4731]: I1129 07:07:17.445583 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:17 crc kubenswrapper[4731]: I1129 07:07:17.445597 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:17 crc kubenswrapper[4731]: I1129 07:07:17.445613 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:17 crc kubenswrapper[4731]: I1129 07:07:17.445624 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:17Z","lastTransitionTime":"2025-11-29T07:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:17 crc kubenswrapper[4731]: I1129 07:07:17.548942 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:17 crc kubenswrapper[4731]: I1129 07:07:17.548993 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:17 crc kubenswrapper[4731]: I1129 07:07:17.549002 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:17 crc kubenswrapper[4731]: I1129 07:07:17.549019 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:17 crc kubenswrapper[4731]: I1129 07:07:17.549031 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:17Z","lastTransitionTime":"2025-11-29T07:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:17 crc kubenswrapper[4731]: I1129 07:07:17.652598 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:17 crc kubenswrapper[4731]: I1129 07:07:17.652643 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:17 crc kubenswrapper[4731]: I1129 07:07:17.652653 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:17 crc kubenswrapper[4731]: I1129 07:07:17.652673 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:17 crc kubenswrapper[4731]: I1129 07:07:17.652686 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:17Z","lastTransitionTime":"2025-11-29T07:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:17 crc kubenswrapper[4731]: I1129 07:07:17.755244 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:17 crc kubenswrapper[4731]: I1129 07:07:17.755286 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:17 crc kubenswrapper[4731]: I1129 07:07:17.755296 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:17 crc kubenswrapper[4731]: I1129 07:07:17.755313 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:17 crc kubenswrapper[4731]: I1129 07:07:17.755325 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:17Z","lastTransitionTime":"2025-11-29T07:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:07:17 crc kubenswrapper[4731]: I1129 07:07:17.806239 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:07:17 crc kubenswrapper[4731]: I1129 07:07:17.806301 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:07:17 crc kubenswrapper[4731]: E1129 07:07:17.806416 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:07:17 crc kubenswrapper[4731]: I1129 07:07:17.806507 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:07:17 crc kubenswrapper[4731]: E1129 07:07:17.806666 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:07:17 crc kubenswrapper[4731]: E1129 07:07:17.806871 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:07:17 crc kubenswrapper[4731]: I1129 07:07:17.858329 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:17 crc kubenswrapper[4731]: I1129 07:07:17.858391 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:17 crc kubenswrapper[4731]: I1129 07:07:17.858404 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:17 crc kubenswrapper[4731]: I1129 07:07:17.858428 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:17 crc kubenswrapper[4731]: I1129 07:07:17.858443 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:17Z","lastTransitionTime":"2025-11-29T07:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:17 crc kubenswrapper[4731]: I1129 07:07:17.961162 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:17 crc kubenswrapper[4731]: I1129 07:07:17.961213 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:17 crc kubenswrapper[4731]: I1129 07:07:17.961225 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:17 crc kubenswrapper[4731]: I1129 07:07:17.961245 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:17 crc kubenswrapper[4731]: I1129 07:07:17.961257 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:17Z","lastTransitionTime":"2025-11-29T07:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:18 crc kubenswrapper[4731]: I1129 07:07:18.064667 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:18 crc kubenswrapper[4731]: I1129 07:07:18.064731 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:18 crc kubenswrapper[4731]: I1129 07:07:18.064752 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:18 crc kubenswrapper[4731]: I1129 07:07:18.064777 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:18 crc kubenswrapper[4731]: I1129 07:07:18.064795 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:18Z","lastTransitionTime":"2025-11-29T07:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:18 crc kubenswrapper[4731]: I1129 07:07:18.168331 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:18 crc kubenswrapper[4731]: I1129 07:07:18.168409 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:18 crc kubenswrapper[4731]: I1129 07:07:18.168435 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:18 crc kubenswrapper[4731]: I1129 07:07:18.168460 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:18 crc kubenswrapper[4731]: I1129 07:07:18.168481 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:18Z","lastTransitionTime":"2025-11-29T07:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:18 crc kubenswrapper[4731]: I1129 07:07:18.272016 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:18 crc kubenswrapper[4731]: I1129 07:07:18.272063 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:18 crc kubenswrapper[4731]: I1129 07:07:18.272077 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:18 crc kubenswrapper[4731]: I1129 07:07:18.272097 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:18 crc kubenswrapper[4731]: I1129 07:07:18.272113 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:18Z","lastTransitionTime":"2025-11-29T07:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:18 crc kubenswrapper[4731]: I1129 07:07:18.375494 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:18 crc kubenswrapper[4731]: I1129 07:07:18.375554 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:18 crc kubenswrapper[4731]: I1129 07:07:18.375595 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:18 crc kubenswrapper[4731]: I1129 07:07:18.375617 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:18 crc kubenswrapper[4731]: I1129 07:07:18.375630 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:18Z","lastTransitionTime":"2025-11-29T07:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:18 crc kubenswrapper[4731]: I1129 07:07:18.478147 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:18 crc kubenswrapper[4731]: I1129 07:07:18.478192 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:18 crc kubenswrapper[4731]: I1129 07:07:18.478202 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:18 crc kubenswrapper[4731]: I1129 07:07:18.478223 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:18 crc kubenswrapper[4731]: I1129 07:07:18.478236 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:18Z","lastTransitionTime":"2025-11-29T07:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:18 crc kubenswrapper[4731]: I1129 07:07:18.582054 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:18 crc kubenswrapper[4731]: I1129 07:07:18.582112 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:18 crc kubenswrapper[4731]: I1129 07:07:18.582127 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:18 crc kubenswrapper[4731]: I1129 07:07:18.582151 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:18 crc kubenswrapper[4731]: I1129 07:07:18.582168 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:18Z","lastTransitionTime":"2025-11-29T07:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:18 crc kubenswrapper[4731]: I1129 07:07:18.685527 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:18 crc kubenswrapper[4731]: I1129 07:07:18.685607 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:18 crc kubenswrapper[4731]: I1129 07:07:18.685616 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:18 crc kubenswrapper[4731]: I1129 07:07:18.685631 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:18 crc kubenswrapper[4731]: I1129 07:07:18.685642 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:18Z","lastTransitionTime":"2025-11-29T07:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:18 crc kubenswrapper[4731]: I1129 07:07:18.788369 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:18 crc kubenswrapper[4731]: I1129 07:07:18.788432 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:18 crc kubenswrapper[4731]: I1129 07:07:18.788441 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:18 crc kubenswrapper[4731]: I1129 07:07:18.788460 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:18 crc kubenswrapper[4731]: I1129 07:07:18.788473 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:18Z","lastTransitionTime":"2025-11-29T07:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:07:18 crc kubenswrapper[4731]: I1129 07:07:18.807257 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2pp9l" Nov 29 07:07:18 crc kubenswrapper[4731]: E1129 07:07:18.807425 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2pp9l" podUID="944440c1-51b2-4c49-b5fd-4c024fc33ace" Nov 29 07:07:18 crc kubenswrapper[4731]: I1129 07:07:18.808111 4731 scope.go:117] "RemoveContainer" containerID="531921ff34861104985a6d3afd284f2df4d69c25a85f1a9fe0e20f15d5f61d31" Nov 29 07:07:18 crc kubenswrapper[4731]: E1129 07:07:18.808345 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-x4t5j_openshift-ovn-kubernetes(7d4585c4-ac4a-4268-b25e-47509c17cfe2)\"" pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" podUID="7d4585c4-ac4a-4268-b25e-47509c17cfe2" Nov 29 07:07:18 crc kubenswrapper[4731]: I1129 07:07:18.892018 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:18 crc kubenswrapper[4731]: I1129 07:07:18.892085 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:18 crc kubenswrapper[4731]: I1129 07:07:18.892098 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:18 crc kubenswrapper[4731]: I1129 07:07:18.892120 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:18 crc kubenswrapper[4731]: I1129 07:07:18.892133 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:18Z","lastTransitionTime":"2025-11-29T07:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:18 crc kubenswrapper[4731]: I1129 07:07:18.995296 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:18 crc kubenswrapper[4731]: I1129 07:07:18.995338 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:18 crc kubenswrapper[4731]: I1129 07:07:18.995348 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:18 crc kubenswrapper[4731]: I1129 07:07:18.995362 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:18 crc kubenswrapper[4731]: I1129 07:07:18.995375 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:18Z","lastTransitionTime":"2025-11-29T07:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:19 crc kubenswrapper[4731]: I1129 07:07:19.098430 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:19 crc kubenswrapper[4731]: I1129 07:07:19.098484 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:19 crc kubenswrapper[4731]: I1129 07:07:19.098495 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:19 crc kubenswrapper[4731]: I1129 07:07:19.098513 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:19 crc kubenswrapper[4731]: I1129 07:07:19.098528 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:19Z","lastTransitionTime":"2025-11-29T07:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:19 crc kubenswrapper[4731]: I1129 07:07:19.201900 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:19 crc kubenswrapper[4731]: I1129 07:07:19.201964 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:19 crc kubenswrapper[4731]: I1129 07:07:19.201981 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:19 crc kubenswrapper[4731]: I1129 07:07:19.202002 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:19 crc kubenswrapper[4731]: I1129 07:07:19.202016 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:19Z","lastTransitionTime":"2025-11-29T07:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:19 crc kubenswrapper[4731]: I1129 07:07:19.304954 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:19 crc kubenswrapper[4731]: I1129 07:07:19.304995 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:19 crc kubenswrapper[4731]: I1129 07:07:19.305003 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:19 crc kubenswrapper[4731]: I1129 07:07:19.305022 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:19 crc kubenswrapper[4731]: I1129 07:07:19.305032 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:19Z","lastTransitionTime":"2025-11-29T07:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:19 crc kubenswrapper[4731]: I1129 07:07:19.407084 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:19 crc kubenswrapper[4731]: I1129 07:07:19.407171 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:19 crc kubenswrapper[4731]: I1129 07:07:19.407191 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:19 crc kubenswrapper[4731]: I1129 07:07:19.407221 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:19 crc kubenswrapper[4731]: I1129 07:07:19.407239 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:19Z","lastTransitionTime":"2025-11-29T07:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:19 crc kubenswrapper[4731]: I1129 07:07:19.510157 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:19 crc kubenswrapper[4731]: I1129 07:07:19.510210 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:19 crc kubenswrapper[4731]: I1129 07:07:19.510221 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:19 crc kubenswrapper[4731]: I1129 07:07:19.510239 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:19 crc kubenswrapper[4731]: I1129 07:07:19.510248 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:19Z","lastTransitionTime":"2025-11-29T07:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:19 crc kubenswrapper[4731]: I1129 07:07:19.613751 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:19 crc kubenswrapper[4731]: I1129 07:07:19.613817 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:19 crc kubenswrapper[4731]: I1129 07:07:19.613830 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:19 crc kubenswrapper[4731]: I1129 07:07:19.613858 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:19 crc kubenswrapper[4731]: I1129 07:07:19.613878 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:19Z","lastTransitionTime":"2025-11-29T07:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:19 crc kubenswrapper[4731]: I1129 07:07:19.717270 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:19 crc kubenswrapper[4731]: I1129 07:07:19.717378 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:19 crc kubenswrapper[4731]: I1129 07:07:19.717392 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:19 crc kubenswrapper[4731]: I1129 07:07:19.717414 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:19 crc kubenswrapper[4731]: I1129 07:07:19.717430 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:19Z","lastTransitionTime":"2025-11-29T07:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:07:19 crc kubenswrapper[4731]: I1129 07:07:19.805961 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:07:19 crc kubenswrapper[4731]: I1129 07:07:19.806051 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:07:19 crc kubenswrapper[4731]: E1129 07:07:19.806166 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:07:19 crc kubenswrapper[4731]: I1129 07:07:19.806058 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:07:19 crc kubenswrapper[4731]: E1129 07:07:19.806241 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:07:19 crc kubenswrapper[4731]: E1129 07:07:19.806390 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:07:19 crc kubenswrapper[4731]: I1129 07:07:19.820107 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:19 crc kubenswrapper[4731]: I1129 07:07:19.820171 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:19 crc kubenswrapper[4731]: I1129 07:07:19.820187 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:19 crc kubenswrapper[4731]: I1129 07:07:19.820206 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:19 crc kubenswrapper[4731]: I1129 07:07:19.820219 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:19Z","lastTransitionTime":"2025-11-29T07:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:19 crc kubenswrapper[4731]: I1129 07:07:19.922825 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:19 crc kubenswrapper[4731]: I1129 07:07:19.922885 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:19 crc kubenswrapper[4731]: I1129 07:07:19.922895 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:19 crc kubenswrapper[4731]: I1129 07:07:19.922913 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:19 crc kubenswrapper[4731]: I1129 07:07:19.922927 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:19Z","lastTransitionTime":"2025-11-29T07:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:20 crc kubenswrapper[4731]: I1129 07:07:20.026422 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:20 crc kubenswrapper[4731]: I1129 07:07:20.026477 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:20 crc kubenswrapper[4731]: I1129 07:07:20.026488 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:20 crc kubenswrapper[4731]: I1129 07:07:20.026510 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:20 crc kubenswrapper[4731]: I1129 07:07:20.026524 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:20Z","lastTransitionTime":"2025-11-29T07:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:20 crc kubenswrapper[4731]: I1129 07:07:20.129157 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:20 crc kubenswrapper[4731]: I1129 07:07:20.129198 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:20 crc kubenswrapper[4731]: I1129 07:07:20.129218 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:20 crc kubenswrapper[4731]: I1129 07:07:20.129240 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:20 crc kubenswrapper[4731]: I1129 07:07:20.129249 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:20Z","lastTransitionTime":"2025-11-29T07:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:20 crc kubenswrapper[4731]: I1129 07:07:20.232640 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:20 crc kubenswrapper[4731]: I1129 07:07:20.232705 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:20 crc kubenswrapper[4731]: I1129 07:07:20.232722 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:20 crc kubenswrapper[4731]: I1129 07:07:20.232747 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:20 crc kubenswrapper[4731]: I1129 07:07:20.232765 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:20Z","lastTransitionTime":"2025-11-29T07:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:20 crc kubenswrapper[4731]: I1129 07:07:20.336498 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:20 crc kubenswrapper[4731]: I1129 07:07:20.336557 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:20 crc kubenswrapper[4731]: I1129 07:07:20.336594 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:20 crc kubenswrapper[4731]: I1129 07:07:20.336613 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:20 crc kubenswrapper[4731]: I1129 07:07:20.336625 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:20Z","lastTransitionTime":"2025-11-29T07:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:20 crc kubenswrapper[4731]: I1129 07:07:20.440634 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:20 crc kubenswrapper[4731]: I1129 07:07:20.440688 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:20 crc kubenswrapper[4731]: I1129 07:07:20.440701 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:20 crc kubenswrapper[4731]: I1129 07:07:20.440721 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:20 crc kubenswrapper[4731]: I1129 07:07:20.440735 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:20Z","lastTransitionTime":"2025-11-29T07:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:20 crc kubenswrapper[4731]: I1129 07:07:20.544663 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:20 crc kubenswrapper[4731]: I1129 07:07:20.544724 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:20 crc kubenswrapper[4731]: I1129 07:07:20.544735 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:20 crc kubenswrapper[4731]: I1129 07:07:20.544753 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:20 crc kubenswrapper[4731]: I1129 07:07:20.544765 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:20Z","lastTransitionTime":"2025-11-29T07:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:20 crc kubenswrapper[4731]: I1129 07:07:20.648009 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:20 crc kubenswrapper[4731]: I1129 07:07:20.648370 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:20 crc kubenswrapper[4731]: I1129 07:07:20.648489 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:20 crc kubenswrapper[4731]: I1129 07:07:20.648607 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:20 crc kubenswrapper[4731]: I1129 07:07:20.648703 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:20Z","lastTransitionTime":"2025-11-29T07:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:20 crc kubenswrapper[4731]: I1129 07:07:20.669052 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/944440c1-51b2-4c49-b5fd-4c024fc33ace-metrics-certs\") pod \"network-metrics-daemon-2pp9l\" (UID: \"944440c1-51b2-4c49-b5fd-4c024fc33ace\") " pod="openshift-multus/network-metrics-daemon-2pp9l" Nov 29 07:07:20 crc kubenswrapper[4731]: E1129 07:07:20.669228 4731 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 29 07:07:20 crc kubenswrapper[4731]: E1129 07:07:20.669715 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/944440c1-51b2-4c49-b5fd-4c024fc33ace-metrics-certs podName:944440c1-51b2-4c49-b5fd-4c024fc33ace nodeName:}" failed. No retries permitted until 2025-11-29 07:07:52.669692006 +0000 UTC m=+111.560053109 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/944440c1-51b2-4c49-b5fd-4c024fc33ace-metrics-certs") pod "network-metrics-daemon-2pp9l" (UID: "944440c1-51b2-4c49-b5fd-4c024fc33ace") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 29 07:07:20 crc kubenswrapper[4731]: I1129 07:07:20.752267 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:20 crc kubenswrapper[4731]: I1129 07:07:20.752310 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:20 crc kubenswrapper[4731]: I1129 07:07:20.752318 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:20 crc kubenswrapper[4731]: I1129 07:07:20.752335 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:20 crc kubenswrapper[4731]: I1129 07:07:20.752344 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:20Z","lastTransitionTime":"2025-11-29T07:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:07:20 crc kubenswrapper[4731]: I1129 07:07:20.806302 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2pp9l" Nov 29 07:07:20 crc kubenswrapper[4731]: E1129 07:07:20.806510 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2pp9l" podUID="944440c1-51b2-4c49-b5fd-4c024fc33ace" Nov 29 07:07:20 crc kubenswrapper[4731]: I1129 07:07:20.854917 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:20 crc kubenswrapper[4731]: I1129 07:07:20.855327 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:20 crc kubenswrapper[4731]: I1129 07:07:20.855431 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:20 crc kubenswrapper[4731]: I1129 07:07:20.855519 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:20 crc kubenswrapper[4731]: I1129 07:07:20.855635 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:20Z","lastTransitionTime":"2025-11-29T07:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:20 crc kubenswrapper[4731]: I1129 07:07:20.958420 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:20 crc kubenswrapper[4731]: I1129 07:07:20.958467 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:20 crc kubenswrapper[4731]: I1129 07:07:20.958480 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:20 crc kubenswrapper[4731]: I1129 07:07:20.958497 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:20 crc kubenswrapper[4731]: I1129 07:07:20.958509 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:20Z","lastTransitionTime":"2025-11-29T07:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:21 crc kubenswrapper[4731]: I1129 07:07:21.061167 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:21 crc kubenswrapper[4731]: I1129 07:07:21.061258 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:21 crc kubenswrapper[4731]: I1129 07:07:21.061274 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:21 crc kubenswrapper[4731]: I1129 07:07:21.061293 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:21 crc kubenswrapper[4731]: I1129 07:07:21.061306 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:21Z","lastTransitionTime":"2025-11-29T07:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:21 crc kubenswrapper[4731]: I1129 07:07:21.165018 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:21 crc kubenswrapper[4731]: I1129 07:07:21.165101 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:21 crc kubenswrapper[4731]: I1129 07:07:21.165117 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:21 crc kubenswrapper[4731]: I1129 07:07:21.165143 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:21 crc kubenswrapper[4731]: I1129 07:07:21.165164 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:21Z","lastTransitionTime":"2025-11-29T07:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:21 crc kubenswrapper[4731]: I1129 07:07:21.268454 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:21 crc kubenswrapper[4731]: I1129 07:07:21.268513 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:21 crc kubenswrapper[4731]: I1129 07:07:21.268526 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:21 crc kubenswrapper[4731]: I1129 07:07:21.268545 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:21 crc kubenswrapper[4731]: I1129 07:07:21.268555 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:21Z","lastTransitionTime":"2025-11-29T07:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:21 crc kubenswrapper[4731]: I1129 07:07:21.371381 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:21 crc kubenswrapper[4731]: I1129 07:07:21.371435 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:21 crc kubenswrapper[4731]: I1129 07:07:21.371448 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:21 crc kubenswrapper[4731]: I1129 07:07:21.371470 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:21 crc kubenswrapper[4731]: I1129 07:07:21.371483 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:21Z","lastTransitionTime":"2025-11-29T07:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:21 crc kubenswrapper[4731]: I1129 07:07:21.474742 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:21 crc kubenswrapper[4731]: I1129 07:07:21.474812 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:21 crc kubenswrapper[4731]: I1129 07:07:21.474831 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:21 crc kubenswrapper[4731]: I1129 07:07:21.474864 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:21 crc kubenswrapper[4731]: I1129 07:07:21.474886 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:21Z","lastTransitionTime":"2025-11-29T07:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:21 crc kubenswrapper[4731]: I1129 07:07:21.577466 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:21 crc kubenswrapper[4731]: I1129 07:07:21.577538 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:21 crc kubenswrapper[4731]: I1129 07:07:21.577553 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:21 crc kubenswrapper[4731]: I1129 07:07:21.577590 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:21 crc kubenswrapper[4731]: I1129 07:07:21.577605 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:21Z","lastTransitionTime":"2025-11-29T07:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:21 crc kubenswrapper[4731]: I1129 07:07:21.680452 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:21 crc kubenswrapper[4731]: I1129 07:07:21.680511 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:21 crc kubenswrapper[4731]: I1129 07:07:21.680524 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:21 crc kubenswrapper[4731]: I1129 07:07:21.680541 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:21 crc kubenswrapper[4731]: I1129 07:07:21.680557 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:21Z","lastTransitionTime":"2025-11-29T07:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:21 crc kubenswrapper[4731]: I1129 07:07:21.782813 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:21 crc kubenswrapper[4731]: I1129 07:07:21.782867 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:21 crc kubenswrapper[4731]: I1129 07:07:21.782876 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:21 crc kubenswrapper[4731]: I1129 07:07:21.782892 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:21 crc kubenswrapper[4731]: I1129 07:07:21.782901 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:21Z","lastTransitionTime":"2025-11-29T07:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:07:21 crc kubenswrapper[4731]: I1129 07:07:21.806461 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:07:21 crc kubenswrapper[4731]: I1129 07:07:21.806660 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:07:21 crc kubenswrapper[4731]: E1129 07:07:21.806747 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:07:21 crc kubenswrapper[4731]: E1129 07:07:21.806651 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:07:21 crc kubenswrapper[4731]: I1129 07:07:21.806505 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:07:21 crc kubenswrapper[4731]: E1129 07:07:21.807240 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:07:21 crc kubenswrapper[4731]: I1129 07:07:21.822160 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea9d7960c59f71d1f4141843ab22ff9910f442c6a92fd496e9923a11fd19578b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"r
ecursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:21Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:21 crc kubenswrapper[4731]: I1129 07:07:21.834091 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n6mtz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dc0aca-1039-4a30-a83e-48bd320d0eae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0b16629d98d0b9d5ede0c57e9befc86af08f3b501ef46c1e17b38a2ba4e366a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2bbjx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n6mtz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:21Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:21 crc kubenswrapper[4731]: I1129 07:07:21.850878 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5rsbt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4026306c62b322aa02e08b8ebd6f9d5d1eaa75e5026c9b1fc4c8d3c1ff77e2a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9z2qj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5rsbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:21Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:21 crc kubenswrapper[4731]: I1129 07:07:21.868494 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sc4p" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46a65d85-a3f6-4c1f-8a87-799ccfb861c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c8d9cccf59a8d03f8346fd5c4403c377376ed7e429a1cd7b9ea0507a491342e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e803b8c855802ae1b136acaaf383df0a2f5efccc46c860581
3d43a3c7c39a700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e803b8c855802ae1b136acaaf383df0a2f5efccc46c8605813d43a3c7c39a700\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53c810e72af2e060f6378cc0809584254c09ab79d506a8d1b4ff14cc9f005d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53c810e72af2e060f6378cc0809584254c09ab79d506a8d1b4ff14cc9f005d08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:
06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db06df6b97d6ad28e3d1df0b237670573e2dc84ad1e4975431d0d6fa74497617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db06df6b97d6ad28e3d1df0b237670573e2dc84ad1e4975431d0d6fa74497617\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://22e4e1564bb5cd4109016d9716311a4f93f63f7f4f694d1f021390c91495faab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22e4e1564bb5cd4109016d9716311a4f93f63f7f4f694d1f021390c91495faab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76097df468fcbf7cbc2a4d0a4a847310bcc371812fd7601716bb946d2ba43b1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76097df468fcbf7cbc2a4d0a4a847310bcc371812fd7601716bb946d2ba43b1f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:43Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2de6a57f63f8b266688eb7d7efef0c3f0fe74a9cf9911709b8c2f6d6cd2f587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2de6a57f63f8b266688eb7d7efef0c3f0fe74a9cf9911709b8c2f6d6cd2f587\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7sc4p\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:21Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:21 crc kubenswrapper[4731]: I1129 07:07:21.879837 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2pp9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"944440c1-51b2-4c49-b5fd-4c024fc33ace\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkwn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkwn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2pp9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:21Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:21 crc 
kubenswrapper[4731]: I1129 07:07:21.886105 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:21 crc kubenswrapper[4731]: I1129 07:07:21.886164 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:21 crc kubenswrapper[4731]: I1129 07:07:21.886174 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:21 crc kubenswrapper[4731]: I1129 07:07:21.886192 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:21 crc kubenswrapper[4731]: I1129 07:07:21.886206 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:21Z","lastTransitionTime":"2025-11-29T07:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:21 crc kubenswrapper[4731]: I1129 07:07:21.897200 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"174eea76-5d67-4e10-9a17-b4efc32676b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a567cf0e6a828fe031138f9f7874d01fe6e034cb814a3c2116c4918ddab71c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6101f1cec3d
c38ec587fb109d42b09c353cf5ed6d89c9a2b799eb456b821cf58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5af4e6864674fc088ce1e153b29b66072dc3aae9b7c8c2c5dd3862a505dbac6a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86bdecb59f656fef2662f8fc78d427a4a9816fc211eedca957f20b3103f63a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:21Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:21 crc kubenswrapper[4731]: I1129 07:07:21.912705 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:21Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:21 crc kubenswrapper[4731]: I1129 07:07:21.926519 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da8cd4c4826d5a431134fff8cd78b168bfededf913cc47058413e38aadd335ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-29T07:07:21Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:21 crc kubenswrapper[4731]: I1129 07:07:21.942129 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a05e3005-6e8b-4f70-830b-e7313d4bf967\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e65aa9e951747e7bd3ae1dd6212a34576cd4aa03de1753d6d3f193d4c95ecead\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.1
26.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a56ab2b9d47cd0dcd632f98defa5f4bcb711032701ea4ff28701daa43a2dca9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a56ab2b9d47cd0dcd632f98defa5f4bcb711032701ea4ff28701daa43a2dca9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:21Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:21 crc kubenswrapper[4731]: I1129 07:07:21.958746 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5198aca-b43e-4b59-8ec1-15b9d1cd4f4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0b6a87bb787bc87ec1a2b21918e754bda6b0f197bfdae20b010e8464cb9e70d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1938ff140a7af83ef53896ed4482dd3cf0a5429675ca28291de0915b8da27e81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b19f3d5d17a1f3552b659627294f3a47d667e2da2d3ee71e44fc876614f2f5a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea4dcd3b0a9fda14855ec9e54caa59fbbd8953c958ad8d0abc6ad50d8a9143e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f488e951e0b022155c26f1b4b73363e8e5b82ab6f14e03f9812fe279ff21d36\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:06:25Z\\\"
,\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1129 07:06:25.309042 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1031981952/tls.crt::/tmp/serving-cert-1031981952/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764399966\\\\\\\\\\\\\\\" (2025-11-29 07:06:06 +0000 UTC to 2025-12-29 07:06:07 +0000 UTC (now=2025-11-29 07:06:25.308983691 +0000 UTC))\\\\\\\"\\\\nI1129 07:06:25.309145 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1129 07:06:25.309167 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1129 07:06:25.309190 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764399977\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764399977\\\\\\\\\\\\\\\" (2025-11-29 06:06:17 +0000 UTC to 2026-11-29 06:06:17 +0000 UTC (now=2025-11-29 07:06:25.309162626 +0000 UTC))\\\\\\\"\\\\nI1129 07:06:25.309217 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1129 07:06:25.309274 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1129 07:06:25.309309 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1031981952/tls.crt::/tmp/serving-cert-1031981952/tls.key\\\\\\\"\\\\nI1129 07:06:25.309020 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1129 07:06:25.309457 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1129 07:06:25.309507 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1129 07:06:25.311335 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1277bda1e922e493f1c36b64a4584458ab41cee4b1930d2d6409a585be9e3b38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b7d71882ecd45557e02dfe61f572da923732714cd4480848340fbe72dabcd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34b7d71882ecd45557e02dfe61f572da923732714cd4480848340fbe72dabcd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:21Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:21 crc kubenswrapper[4731]: I1129 07:07:21.973014 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f17ec67c-91b4-419f-b031-38a828a552a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a859abc8925062e0b6f06edef1a87524357b5115db3c780653a4d378af6ba04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fe7e74083569ac159e34aecb62fd9a2bc89cb67c25d104efa3ecd93b71742b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e50d4319120f4c6445252762298822db75d04cad45eff91b9ee9e82335e0f6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2e329eab7a80200257ef2856155c442452b453c8c9cfa15f790d4688ca74573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://f2e329eab7a80200257ef2856155c442452b453c8c9cfa15f790d4688ca74573\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:02Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:21Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:21 crc kubenswrapper[4731]: I1129 07:07:21.986310 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2302dbb7-38db-4752-a5d0-2d055da3aec3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b12c2e84a5d5a5e4b2dfdc3a2052706d4b969ccd45719e05be8f1dbec5555cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shf4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2ffc93896b04d748f6dfda932202450e20bc7b2
98cc0d79c0c6499ead481d8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shf4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rscr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:21Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:21 crc kubenswrapper[4731]: I1129 07:07:21.988496 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:21 crc kubenswrapper[4731]: I1129 07:07:21.988544 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:21 crc kubenswrapper[4731]: I1129 07:07:21.988553 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:21 crc 
kubenswrapper[4731]: I1129 07:07:21.988585 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:21 crc kubenswrapper[4731]: I1129 07:07:21.988596 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:21Z","lastTransitionTime":"2025-11-29T07:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:07:22 crc kubenswrapper[4731]: I1129 07:07:22.011166 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d4585c4-ac4a-4268-b25e-47509c17cfe2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c34e10091af35202da4f97804418c6232aaccd75303ec402ed3f52de19eba30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77f2885a107dcb4bca4dd71970f28cd5c82abd26240a9b47a25bab79d7a6cfca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a3bcebacca0fb7288976faa2ee468f1496339cc6deba0ce36b29d0537d493f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e2099f7c7618e48c7ca7d3a76e9e63e4af75068a72efb7b27b3565c270787f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ec6b9ad02e848a0669ac486da3f2ccc8ca363136fcd40618c45f16e64388bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd71ed13728cbf7ab7ade3184cf8b579a598046c6cccdd7796f757aa7fc1c0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://531921ff34861104985a6d3afd284f2df4d69c25a85f1a9fe0e20f15d5f61d31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://531921ff34861104985a6d3afd284f2df4d69c25a85f1a9fe0e20f15d5f61d31\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:07:07Z\\\",\\\"message\\\":\\\"ps:{GoMap:map[10.217.4.169:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {39432221-5995-412b-967b-35e1a9405ec7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1129 07:07:06.723536 6450 admin_network_policy_controller.go:133] 
Setting up event handlers for Admin Network Policy\\\\nI1129 07:07:06.723750 6450 ovnkube.go:599] Stopped ovnkube\\\\nI1129 07:07:06.723872 6450 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1129 07:07:06.719311 6450 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7d5hj in node crc\\\\nF1129 07:07:06.724013 6450 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify cer\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:07:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-x4t5j_openshift-ovn-kubernetes(7d4585c4-ac4a-4268-b25e-47509c17cfe2)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a6b60ccbba0152ec034f29ebd79fc5f35ee869bcbec1983425f40df246c463c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b769a8a935c8eb447fe825cfb1b4c91be5abbe5992798442096dd3c00ce7c39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b769a8a935c8eb447
fe825cfb1b4c91be5abbe5992798442096dd3c00ce7c39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4t5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:22Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:22 crc kubenswrapper[4731]: I1129 07:07:22.025020 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8tvx8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"719caf85-c94c-4dc2-b28f-f5c4ec29e79e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d765b2fbbea1c5491ee3ac49d3d217921d28d094b6204c0c9ca942f5975f7b8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cfkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8tvx8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:22Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:22 crc kubenswrapper[4731]: I1129 07:07:22.040026 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7d5hj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de2552c-90ca-42ab-94c0-365f2c2380d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca701ec73409c337cc55b1606c0f5def9e370c9c47b6d8f34f05e799ebc3ff36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l44ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cf95f33df0c02101f10f47b6794395211997d2a9741a50b62be363fb5b96dd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l44ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7d5hj\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:22Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:22 crc kubenswrapper[4731]: I1129 07:07:22.054610 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:22Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:22 crc kubenswrapper[4731]: I1129 07:07:22.071203 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e77534f53f3d0ba9457f1e42e081f8fa9ddeff238db875686b42dd35fca480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6830f693b439b32e14b4e825fc51d03dddc4834afc7f2fb9c403ebdc706acb7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:22Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:22 crc kubenswrapper[4731]: I1129 07:07:22.096291 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:22 crc kubenswrapper[4731]: I1129 07:07:22.096687 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:22 crc kubenswrapper[4731]: I1129 07:07:22.096821 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:22 crc kubenswrapper[4731]: I1129 07:07:22.096925 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:22 crc kubenswrapper[4731]: I1129 07:07:22.096948 4731 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:22Z","lastTransitionTime":"2025-11-29T07:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:07:22 crc kubenswrapper[4731]: I1129 07:07:22.099247 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:22Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:22 crc kubenswrapper[4731]: I1129 07:07:22.199770 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:22 crc kubenswrapper[4731]: I1129 07:07:22.200173 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:22 crc kubenswrapper[4731]: I1129 07:07:22.200266 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:22 crc kubenswrapper[4731]: I1129 07:07:22.200385 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:22 crc kubenswrapper[4731]: I1129 07:07:22.200503 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:22Z","lastTransitionTime":"2025-11-29T07:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:07:22 crc kubenswrapper[4731]: I1129 07:07:22.303652 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:22 crc kubenswrapper[4731]: I1129 07:07:22.303702 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:22 crc kubenswrapper[4731]: I1129 07:07:22.303715 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:22 crc kubenswrapper[4731]: I1129 07:07:22.303731 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:22 crc kubenswrapper[4731]: I1129 07:07:22.303744 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:22Z","lastTransitionTime":"2025-11-29T07:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:22 crc kubenswrapper[4731]: I1129 07:07:22.407284 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:22 crc kubenswrapper[4731]: I1129 07:07:22.407645 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:22 crc kubenswrapper[4731]: I1129 07:07:22.407807 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:22 crc kubenswrapper[4731]: I1129 07:07:22.407953 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:22 crc kubenswrapper[4731]: I1129 07:07:22.408049 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:22Z","lastTransitionTime":"2025-11-29T07:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:22 crc kubenswrapper[4731]: I1129 07:07:22.510846 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:22 crc kubenswrapper[4731]: I1129 07:07:22.510897 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:22 crc kubenswrapper[4731]: I1129 07:07:22.510909 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:22 crc kubenswrapper[4731]: I1129 07:07:22.510928 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:22 crc kubenswrapper[4731]: I1129 07:07:22.510944 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:22Z","lastTransitionTime":"2025-11-29T07:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:22 crc kubenswrapper[4731]: I1129 07:07:22.614019 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:22 crc kubenswrapper[4731]: I1129 07:07:22.614076 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:22 crc kubenswrapper[4731]: I1129 07:07:22.614086 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:22 crc kubenswrapper[4731]: I1129 07:07:22.614106 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:22 crc kubenswrapper[4731]: I1129 07:07:22.614117 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:22Z","lastTransitionTime":"2025-11-29T07:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:22 crc kubenswrapper[4731]: I1129 07:07:22.716826 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:22 crc kubenswrapper[4731]: I1129 07:07:22.716878 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:22 crc kubenswrapper[4731]: I1129 07:07:22.716889 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:22 crc kubenswrapper[4731]: I1129 07:07:22.716910 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:22 crc kubenswrapper[4731]: I1129 07:07:22.716921 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:22Z","lastTransitionTime":"2025-11-29T07:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:07:22 crc kubenswrapper[4731]: I1129 07:07:22.806534 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2pp9l" Nov 29 07:07:22 crc kubenswrapper[4731]: E1129 07:07:22.806995 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2pp9l" podUID="944440c1-51b2-4c49-b5fd-4c024fc33ace" Nov 29 07:07:22 crc kubenswrapper[4731]: I1129 07:07:22.819338 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:22 crc kubenswrapper[4731]: I1129 07:07:22.819393 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:22 crc kubenswrapper[4731]: I1129 07:07:22.819407 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:22 crc kubenswrapper[4731]: I1129 07:07:22.819425 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:22 crc kubenswrapper[4731]: I1129 07:07:22.819439 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:22Z","lastTransitionTime":"2025-11-29T07:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:22 crc kubenswrapper[4731]: I1129 07:07:22.921621 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:22 crc kubenswrapper[4731]: I1129 07:07:22.921664 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:22 crc kubenswrapper[4731]: I1129 07:07:22.921674 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:22 crc kubenswrapper[4731]: I1129 07:07:22.921692 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:22 crc kubenswrapper[4731]: I1129 07:07:22.921704 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:22Z","lastTransitionTime":"2025-11-29T07:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:23 crc kubenswrapper[4731]: I1129 07:07:23.023990 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:23 crc kubenswrapper[4731]: I1129 07:07:23.024030 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:23 crc kubenswrapper[4731]: I1129 07:07:23.024040 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:23 crc kubenswrapper[4731]: I1129 07:07:23.024056 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:23 crc kubenswrapper[4731]: I1129 07:07:23.024067 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:23Z","lastTransitionTime":"2025-11-29T07:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:23 crc kubenswrapper[4731]: I1129 07:07:23.127062 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:23 crc kubenswrapper[4731]: I1129 07:07:23.127109 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:23 crc kubenswrapper[4731]: I1129 07:07:23.127133 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:23 crc kubenswrapper[4731]: I1129 07:07:23.127156 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:23 crc kubenswrapper[4731]: I1129 07:07:23.127170 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:23Z","lastTransitionTime":"2025-11-29T07:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:23 crc kubenswrapper[4731]: I1129 07:07:23.229539 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:23 crc kubenswrapper[4731]: I1129 07:07:23.229615 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:23 crc kubenswrapper[4731]: I1129 07:07:23.229629 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:23 crc kubenswrapper[4731]: I1129 07:07:23.229648 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:23 crc kubenswrapper[4731]: I1129 07:07:23.229660 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:23Z","lastTransitionTime":"2025-11-29T07:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:23 crc kubenswrapper[4731]: I1129 07:07:23.332441 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:23 crc kubenswrapper[4731]: I1129 07:07:23.332502 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:23 crc kubenswrapper[4731]: I1129 07:07:23.332514 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:23 crc kubenswrapper[4731]: I1129 07:07:23.332537 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:23 crc kubenswrapper[4731]: I1129 07:07:23.332558 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:23Z","lastTransitionTime":"2025-11-29T07:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:23 crc kubenswrapper[4731]: I1129 07:07:23.435534 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:23 crc kubenswrapper[4731]: I1129 07:07:23.435616 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:23 crc kubenswrapper[4731]: I1129 07:07:23.435630 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:23 crc kubenswrapper[4731]: I1129 07:07:23.435649 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:23 crc kubenswrapper[4731]: I1129 07:07:23.435674 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:23Z","lastTransitionTime":"2025-11-29T07:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:23 crc kubenswrapper[4731]: I1129 07:07:23.538369 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:23 crc kubenswrapper[4731]: I1129 07:07:23.538737 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:23 crc kubenswrapper[4731]: I1129 07:07:23.538844 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:23 crc kubenswrapper[4731]: I1129 07:07:23.538927 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:23 crc kubenswrapper[4731]: I1129 07:07:23.538999 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:23Z","lastTransitionTime":"2025-11-29T07:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:23 crc kubenswrapper[4731]: I1129 07:07:23.579043 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:23 crc kubenswrapper[4731]: I1129 07:07:23.579079 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:23 crc kubenswrapper[4731]: I1129 07:07:23.579088 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:23 crc kubenswrapper[4731]: I1129 07:07:23.579104 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:23 crc kubenswrapper[4731]: I1129 07:07:23.579117 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:23Z","lastTransitionTime":"2025-11-29T07:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:23 crc kubenswrapper[4731]: E1129 07:07:23.590866 4731 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:07:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:07:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:07:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:07:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5aaf0a18-6c01-4835-aaaa-2edfd1f90942\\\",\\\"systemUUID\\\":\\\"f3d115a6-d015-4b84-85ef-26fa0172b441\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:23Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:23 crc kubenswrapper[4731]: I1129 07:07:23.594802 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:23 crc kubenswrapper[4731]: I1129 07:07:23.594841 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:23 crc kubenswrapper[4731]: I1129 07:07:23.594852 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:23 crc kubenswrapper[4731]: I1129 07:07:23.594870 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:23 crc kubenswrapper[4731]: I1129 07:07:23.594882 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:23Z","lastTransitionTime":"2025-11-29T07:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:23 crc kubenswrapper[4731]: E1129 07:07:23.608427 4731 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:07:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:07:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:07:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:07:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5aaf0a18-6c01-4835-aaaa-2edfd1f90942\\\",\\\"systemUUID\\\":\\\"f3d115a6-d015-4b84-85ef-26fa0172b441\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:23Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:23 crc kubenswrapper[4731]: I1129 07:07:23.611948 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:23 crc kubenswrapper[4731]: I1129 07:07:23.611991 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:23 crc kubenswrapper[4731]: I1129 07:07:23.612002 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:23 crc kubenswrapper[4731]: I1129 07:07:23.612019 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:23 crc kubenswrapper[4731]: I1129 07:07:23.612034 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:23Z","lastTransitionTime":"2025-11-29T07:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:23 crc kubenswrapper[4731]: E1129 07:07:23.623804 4731 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:07:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:07:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:07:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:07:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5aaf0a18-6c01-4835-aaaa-2edfd1f90942\\\",\\\"systemUUID\\\":\\\"f3d115a6-d015-4b84-85ef-26fa0172b441\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:23Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:23 crc kubenswrapper[4731]: I1129 07:07:23.627548 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:23 crc kubenswrapper[4731]: I1129 07:07:23.627611 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:23 crc kubenswrapper[4731]: I1129 07:07:23.627625 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:23 crc kubenswrapper[4731]: I1129 07:07:23.627640 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:23 crc kubenswrapper[4731]: I1129 07:07:23.627651 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:23Z","lastTransitionTime":"2025-11-29T07:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:23 crc kubenswrapper[4731]: E1129 07:07:23.641965 4731 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:07:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:07:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:07:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:07:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5aaf0a18-6c01-4835-aaaa-2edfd1f90942\\\",\\\"systemUUID\\\":\\\"f3d115a6-d015-4b84-85ef-26fa0172b441\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:23Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:23 crc kubenswrapper[4731]: I1129 07:07:23.650887 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:23 crc kubenswrapper[4731]: I1129 07:07:23.650972 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:23 crc kubenswrapper[4731]: I1129 07:07:23.651015 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:23 crc kubenswrapper[4731]: I1129 07:07:23.651049 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:23 crc kubenswrapper[4731]: I1129 07:07:23.651067 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:23Z","lastTransitionTime":"2025-11-29T07:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:23 crc kubenswrapper[4731]: E1129 07:07:23.669241 4731 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:07:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:07:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:07:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:07:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5aaf0a18-6c01-4835-aaaa-2edfd1f90942\\\",\\\"systemUUID\\\":\\\"f3d115a6-d015-4b84-85ef-26fa0172b441\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:23Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:23 crc kubenswrapper[4731]: E1129 07:07:23.669799 4731 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 29 07:07:23 crc kubenswrapper[4731]: I1129 07:07:23.671854 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:23 crc kubenswrapper[4731]: I1129 07:07:23.671981 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:23 crc kubenswrapper[4731]: I1129 07:07:23.672061 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:23 crc kubenswrapper[4731]: I1129 07:07:23.672140 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:23 crc kubenswrapper[4731]: I1129 07:07:23.672214 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:23Z","lastTransitionTime":"2025-11-29T07:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:23 crc kubenswrapper[4731]: I1129 07:07:23.775415 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:23 crc kubenswrapper[4731]: I1129 07:07:23.775815 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:23 crc kubenswrapper[4731]: I1129 07:07:23.775971 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:23 crc kubenswrapper[4731]: I1129 07:07:23.776068 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:23 crc kubenswrapper[4731]: I1129 07:07:23.776136 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:23Z","lastTransitionTime":"2025-11-29T07:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:07:23 crc kubenswrapper[4731]: I1129 07:07:23.806490 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:07:23 crc kubenswrapper[4731]: I1129 07:07:23.806588 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:07:23 crc kubenswrapper[4731]: I1129 07:07:23.806808 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:07:23 crc kubenswrapper[4731]: E1129 07:07:23.806912 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:07:23 crc kubenswrapper[4731]: E1129 07:07:23.807052 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:07:23 crc kubenswrapper[4731]: E1129 07:07:23.807242 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:07:23 crc kubenswrapper[4731]: I1129 07:07:23.879196 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:23 crc kubenswrapper[4731]: I1129 07:07:23.879238 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:23 crc kubenswrapper[4731]: I1129 07:07:23.879248 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:23 crc kubenswrapper[4731]: I1129 07:07:23.879264 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:23 crc kubenswrapper[4731]: I1129 07:07:23.879275 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:23Z","lastTransitionTime":"2025-11-29T07:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:23 crc kubenswrapper[4731]: I1129 07:07:23.982181 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:23 crc kubenswrapper[4731]: I1129 07:07:23.982221 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:23 crc kubenswrapper[4731]: I1129 07:07:23.982230 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:23 crc kubenswrapper[4731]: I1129 07:07:23.982245 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:23 crc kubenswrapper[4731]: I1129 07:07:23.982256 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:23Z","lastTransitionTime":"2025-11-29T07:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:24 crc kubenswrapper[4731]: I1129 07:07:24.085447 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:24 crc kubenswrapper[4731]: I1129 07:07:24.085509 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:24 crc kubenswrapper[4731]: I1129 07:07:24.085524 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:24 crc kubenswrapper[4731]: I1129 07:07:24.085543 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:24 crc kubenswrapper[4731]: I1129 07:07:24.085556 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:24Z","lastTransitionTime":"2025-11-29T07:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:24 crc kubenswrapper[4731]: I1129 07:07:24.189531 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:24 crc kubenswrapper[4731]: I1129 07:07:24.189605 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:24 crc kubenswrapper[4731]: I1129 07:07:24.189617 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:24 crc kubenswrapper[4731]: I1129 07:07:24.189636 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:24 crc kubenswrapper[4731]: I1129 07:07:24.189653 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:24Z","lastTransitionTime":"2025-11-29T07:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:24 crc kubenswrapper[4731]: I1129 07:07:24.293366 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:24 crc kubenswrapper[4731]: I1129 07:07:24.293412 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:24 crc kubenswrapper[4731]: I1129 07:07:24.293426 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:24 crc kubenswrapper[4731]: I1129 07:07:24.293448 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:24 crc kubenswrapper[4731]: I1129 07:07:24.293463 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:24Z","lastTransitionTime":"2025-11-29T07:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:24 crc kubenswrapper[4731]: I1129 07:07:24.395946 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:24 crc kubenswrapper[4731]: I1129 07:07:24.396000 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:24 crc kubenswrapper[4731]: I1129 07:07:24.396010 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:24 crc kubenswrapper[4731]: I1129 07:07:24.396027 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:24 crc kubenswrapper[4731]: I1129 07:07:24.396036 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:24Z","lastTransitionTime":"2025-11-29T07:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:24 crc kubenswrapper[4731]: I1129 07:07:24.498589 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:24 crc kubenswrapper[4731]: I1129 07:07:24.498637 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:24 crc kubenswrapper[4731]: I1129 07:07:24.498650 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:24 crc kubenswrapper[4731]: I1129 07:07:24.498668 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:24 crc kubenswrapper[4731]: I1129 07:07:24.498680 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:24Z","lastTransitionTime":"2025-11-29T07:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:24 crc kubenswrapper[4731]: I1129 07:07:24.601031 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:24 crc kubenswrapper[4731]: I1129 07:07:24.601084 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:24 crc kubenswrapper[4731]: I1129 07:07:24.601094 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:24 crc kubenswrapper[4731]: I1129 07:07:24.601111 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:24 crc kubenswrapper[4731]: I1129 07:07:24.601122 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:24Z","lastTransitionTime":"2025-11-29T07:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:24 crc kubenswrapper[4731]: I1129 07:07:24.704375 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:24 crc kubenswrapper[4731]: I1129 07:07:24.704459 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:24 crc kubenswrapper[4731]: I1129 07:07:24.704471 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:24 crc kubenswrapper[4731]: I1129 07:07:24.704488 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:24 crc kubenswrapper[4731]: I1129 07:07:24.704499 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:24Z","lastTransitionTime":"2025-11-29T07:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:07:24 crc kubenswrapper[4731]: I1129 07:07:24.806131 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2pp9l" Nov 29 07:07:24 crc kubenswrapper[4731]: E1129 07:07:24.806346 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2pp9l" podUID="944440c1-51b2-4c49-b5fd-4c024fc33ace" Nov 29 07:07:24 crc kubenswrapper[4731]: I1129 07:07:24.808155 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:24 crc kubenswrapper[4731]: I1129 07:07:24.808205 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:24 crc kubenswrapper[4731]: I1129 07:07:24.808219 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:24 crc kubenswrapper[4731]: I1129 07:07:24.808236 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:24 crc kubenswrapper[4731]: I1129 07:07:24.808248 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:24Z","lastTransitionTime":"2025-11-29T07:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:24 crc kubenswrapper[4731]: I1129 07:07:24.911645 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:24 crc kubenswrapper[4731]: I1129 07:07:24.911696 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:24 crc kubenswrapper[4731]: I1129 07:07:24.911709 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:24 crc kubenswrapper[4731]: I1129 07:07:24.911727 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:24 crc kubenswrapper[4731]: I1129 07:07:24.911738 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:24Z","lastTransitionTime":"2025-11-29T07:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:25 crc kubenswrapper[4731]: I1129 07:07:25.014491 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:25 crc kubenswrapper[4731]: I1129 07:07:25.014535 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:25 crc kubenswrapper[4731]: I1129 07:07:25.014546 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:25 crc kubenswrapper[4731]: I1129 07:07:25.014576 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:25 crc kubenswrapper[4731]: I1129 07:07:25.014589 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:25Z","lastTransitionTime":"2025-11-29T07:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:25 crc kubenswrapper[4731]: I1129 07:07:25.117858 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:25 crc kubenswrapper[4731]: I1129 07:07:25.117919 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:25 crc kubenswrapper[4731]: I1129 07:07:25.117930 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:25 crc kubenswrapper[4731]: I1129 07:07:25.117955 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:25 crc kubenswrapper[4731]: I1129 07:07:25.117967 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:25Z","lastTransitionTime":"2025-11-29T07:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:25 crc kubenswrapper[4731]: I1129 07:07:25.220261 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:25 crc kubenswrapper[4731]: I1129 07:07:25.220351 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:25 crc kubenswrapper[4731]: I1129 07:07:25.220369 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:25 crc kubenswrapper[4731]: I1129 07:07:25.220392 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:25 crc kubenswrapper[4731]: I1129 07:07:25.220410 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:25Z","lastTransitionTime":"2025-11-29T07:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:25 crc kubenswrapper[4731]: I1129 07:07:25.323654 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:25 crc kubenswrapper[4731]: I1129 07:07:25.323708 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:25 crc kubenswrapper[4731]: I1129 07:07:25.323744 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:25 crc kubenswrapper[4731]: I1129 07:07:25.323766 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:25 crc kubenswrapper[4731]: I1129 07:07:25.323781 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:25Z","lastTransitionTime":"2025-11-29T07:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:25 crc kubenswrapper[4731]: I1129 07:07:25.426547 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:25 crc kubenswrapper[4731]: I1129 07:07:25.426604 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:25 crc kubenswrapper[4731]: I1129 07:07:25.426615 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:25 crc kubenswrapper[4731]: I1129 07:07:25.426629 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:25 crc kubenswrapper[4731]: I1129 07:07:25.426638 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:25Z","lastTransitionTime":"2025-11-29T07:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:25 crc kubenswrapper[4731]: I1129 07:07:25.429190 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5rsbt_5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8/kube-multus/0.log" Nov 29 07:07:25 crc kubenswrapper[4731]: I1129 07:07:25.429225 4731 generic.go:334] "Generic (PLEG): container finished" podID="5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8" containerID="4026306c62b322aa02e08b8ebd6f9d5d1eaa75e5026c9b1fc4c8d3c1ff77e2a5" exitCode=1 Nov 29 07:07:25 crc kubenswrapper[4731]: I1129 07:07:25.429254 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-5rsbt" event={"ID":"5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8","Type":"ContainerDied","Data":"4026306c62b322aa02e08b8ebd6f9d5d1eaa75e5026c9b1fc4c8d3c1ff77e2a5"} Nov 29 07:07:25 crc kubenswrapper[4731]: I1129 07:07:25.429634 4731 scope.go:117] "RemoveContainer" containerID="4026306c62b322aa02e08b8ebd6f9d5d1eaa75e5026c9b1fc4c8d3c1ff77e2a5" Nov 29 07:07:25 crc kubenswrapper[4731]: I1129 07:07:25.447095 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5198aca-b43e-4b59-8ec1-15b9d1cd4f4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0b6a87bb787bc87ec1a2b21918e754bda6b0f197bfdae20b010e8464cb9e70d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1938ff140a7af83ef53896ed4482dd3cf0a5429675ca28291de0915b8da27e81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b19f3d5d17a1f3552b659627294f3a47d667e2da2d3ee71e44fc876614f2f5a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea4dcd3b0a9fda14855ec9e54caa59fbbd8953c958ad8d0abc6ad50d8a9143e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f488e951e0b022155c26f1b4b73363e8e5b82ab6f14e03f9812fe279ff21d36\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:06:25Z\\\"
,\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1129 07:06:25.309042 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1031981952/tls.crt::/tmp/serving-cert-1031981952/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764399966\\\\\\\\\\\\\\\" (2025-11-29 07:06:06 +0000 UTC to 2025-12-29 07:06:07 +0000 UTC (now=2025-11-29 07:06:25.308983691 +0000 UTC))\\\\\\\"\\\\nI1129 07:06:25.309145 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1129 07:06:25.309167 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1129 07:06:25.309190 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764399977\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764399977\\\\\\\\\\\\\\\" (2025-11-29 06:06:17 +0000 UTC to 2026-11-29 06:06:17 +0000 UTC (now=2025-11-29 07:06:25.309162626 +0000 UTC))\\\\\\\"\\\\nI1129 07:06:25.309217 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1129 07:06:25.309274 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1129 07:06:25.309309 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1031981952/tls.crt::/tmp/serving-cert-1031981952/tls.key\\\\\\\"\\\\nI1129 07:06:25.309020 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1129 07:06:25.309457 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1129 07:06:25.309507 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1129 07:06:25.311335 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1277bda1e922e493f1c36b64a4584458ab41cee4b1930d2d6409a585be9e3b38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b7d71882ecd45557e02dfe61f572da923732714cd4480848340fbe72dabcd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34b7d71882ecd45557e02dfe61f572da923732714cd4480848340fbe72dabcd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:25Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:25 crc kubenswrapper[4731]: I1129 07:07:25.462172 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f17ec67c-91b4-419f-b031-38a828a552a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a859abc8925062e0b6f06edef1a87524357b5115db3c780653a4d378af6ba04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fe7e74083569ac159e34aecb62fd9a2bc89cb67c25d104efa3ecd93b71742b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e50d4319120f4c6445252762298822db75d04cad45eff91b9ee9e82335e0f6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2e329eab7a80200257ef2856155c442452b453c8c9cfa15f790d4688ca74573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://f2e329eab7a80200257ef2856155c442452b453c8c9cfa15f790d4688ca74573\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:02Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:25Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:25 crc kubenswrapper[4731]: I1129 07:07:25.475060 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2302dbb7-38db-4752-a5d0-2d055da3aec3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b12c2e84a5d5a5e4b2dfdc3a2052706d4b969ccd45719e05be8f1dbec5555cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shf4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2ffc93896b04d748f6dfda932202450e20bc7b2
98cc0d79c0c6499ead481d8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shf4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rscr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:25Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:25 crc kubenswrapper[4731]: I1129 07:07:25.498055 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d4585c4-ac4a-4268-b25e-47509c17cfe2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c34e10091af35202da4f97804418c6232aaccd75303ec402ed3f52de19eba30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77f2885a107dcb4bca4dd71970f28cd5c82abd26240a9b47a25bab79d7a6cfca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a3bcebacca0fb7288976faa2ee468f1496339cc6deba0ce36b29d0537d493f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e2099f7c7618e48c7ca7d3a76e9e63e4af75068a72efb7b27b3565c270787f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ec6b9ad02e848a0669ac486da3f2ccc8ca363136fcd40618c45f16e64388bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd71ed13728cbf7ab7ade3184cf8b579a598046c6cccdd7796f757aa7fc1c0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://531921ff34861104985a6d3afd284f2df4d69c25a85f1a9fe0e20f15d5f61d31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://531921ff34861104985a6d3afd284f2df4d69c25a85f1a9fe0e20f15d5f61d31\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:07:07Z\\\",\\\"message\\\":\\\"ps:{GoMap:map[10.217.4.169:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {39432221-5995-412b-967b-35e1a9405ec7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1129 07:07:06.723536 6450 admin_network_policy_controller.go:133] 
Setting up event handlers for Admin Network Policy\\\\nI1129 07:07:06.723750 6450 ovnkube.go:599] Stopped ovnkube\\\\nI1129 07:07:06.723872 6450 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1129 07:07:06.719311 6450 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7d5hj in node crc\\\\nF1129 07:07:06.724013 6450 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify cer\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:07:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-x4t5j_openshift-ovn-kubernetes(7d4585c4-ac4a-4268-b25e-47509c17cfe2)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a6b60ccbba0152ec034f29ebd79fc5f35ee869bcbec1983425f40df246c463c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b769a8a935c8eb447fe825cfb1b4c91be5abbe5992798442096dd3c00ce7c39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b769a8a935c8eb447
fe825cfb1b4c91be5abbe5992798442096dd3c00ce7c39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4t5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:25Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:25 crc kubenswrapper[4731]: I1129 07:07:25.510984 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8tvx8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"719caf85-c94c-4dc2-b28f-f5c4ec29e79e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d765b2fbbea1c5491ee3ac49d3d217921d28d094b6204c0c9ca942f5975f7b8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cfkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8tvx8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:25Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:25 crc kubenswrapper[4731]: I1129 07:07:25.523592 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7d5hj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de2552c-90ca-42ab-94c0-365f2c2380d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca701ec73409c337cc55b1606c0f5def9e370c9c47b6d8f34f05e799ebc3ff36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l44ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cf95f33df0c02101f10f47b6794395211997d2a9741a50b62be363fb5b96dd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l44ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7d5hj\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:25Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:25 crc kubenswrapper[4731]: I1129 07:07:25.528862 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:25 crc kubenswrapper[4731]: I1129 07:07:25.528891 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:25 crc kubenswrapper[4731]: I1129 07:07:25.528900 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:25 crc kubenswrapper[4731]: I1129 07:07:25.528916 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:25 crc kubenswrapper[4731]: I1129 07:07:25.528926 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:25Z","lastTransitionTime":"2025-11-29T07:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:25 crc kubenswrapper[4731]: I1129 07:07:25.537503 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:25Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:25 crc kubenswrapper[4731]: I1129 07:07:25.552487 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:25Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:25 crc kubenswrapper[4731]: I1129 07:07:25.567114 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e77534f53f3d0ba9457f1e42e081f8fa9ddeff238db875686b42dd35fca480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6830f693b439b32e14b4e825fc51d03dddc4834afc7f2fb9c403ebdc706acb7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:25Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:25 crc kubenswrapper[4731]: I1129 07:07:25.582510 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"174eea76-5d67-4e10-9a17-b4efc32676b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a567cf0e6a828fe031138f9f7874d01fe6e034cb814a3c2116c4918ddab71c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6101f1cec3dc38ec587fb109d42b09c353cf5ed6d89c9a2b799eb456b821cf58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5af4e6864674fc088ce1e153b29b66072dc3aae9b7c8c2c5dd3862a505dbac6a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86bdecb59f656fef2662f8fc78d427a4a9816fc211eedca957f20b3103f63a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:25Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:25 crc kubenswrapper[4731]: I1129 07:07:25.599249 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea9d7960c59f71d1f4141843ab22ff9910f442c6a92fd496e9923a11fd19578b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:25Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:25 crc kubenswrapper[4731]: I1129 07:07:25.610138 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n6mtz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dc0aca-1039-4a30-a83e-48bd320d0eae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0b16629d98d0b9d5ede0c57e9befc86af08f3b501ef46c1e17b38a2ba4e366a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2bbjx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n6mtz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:25Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:25 crc kubenswrapper[4731]: I1129 07:07:25.624614 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5rsbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4026306c62b322aa02e08b8ebd6f9d5d1eaa75e5026c9b1fc4c8d3c1ff77e2a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4026306c62b322aa02e08b8ebd6f9d5d1eaa75e5026c9b1fc4c8d3c1ff77e2a5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:07:24Z\\\",\\\"message\\\":\\\"2025-11-29T07:06:39+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_7fca3a92-1282-434f-ac66-0accb2c57a4a\\\\n2025-11-29T07:06:39+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_7fca3a92-1282-434f-ac66-0accb2c57a4a to /host/opt/cni/bin/\\\\n2025-11-29T07:06:39Z [verbose] multus-daemon started\\\\n2025-11-29T07:06:39Z [verbose] Readiness Indicator file check\\\\n2025-11-29T07:07:24Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9z2qj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5rsbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:25Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:25 crc kubenswrapper[4731]: I1129 07:07:25.631456 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:25 crc kubenswrapper[4731]: I1129 07:07:25.631505 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:25 crc kubenswrapper[4731]: I1129 07:07:25.631516 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:25 crc kubenswrapper[4731]: I1129 07:07:25.631535 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:25 crc kubenswrapper[4731]: I1129 07:07:25.631548 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:25Z","lastTransitionTime":"2025-11-29T07:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:25 crc kubenswrapper[4731]: I1129 07:07:25.638781 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sc4p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46a65d85-a3f6-4c1f-8a87-799ccfb861c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c8d9cccf59a8d03f8346fd5c4403c377376ed7e429a1cd7b9ea0507a491342e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e803b8c855802ae1b136acaaf383df0a2f5efccc46c8605813d43a3c7c39a700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e803b8c855802ae1b136acaaf383df0a2f5efccc46c8605813d43a3c7c39a700\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53c810e72af2e060f6378cc0809584254c09ab79d506a8d1b4ff14cc9f005d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://53c810e72af2e060f6378cc0809584254c09ab79d506a8d1b4ff14cc9f005d08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db06df6b97d6ad28e3d1df0b237670573e2dc84ad1e4975431d0d6fa74497617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db06df6b97d6ad28e3d1df0b237670573e2dc84ad1e4975431d0d6fa74497617\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e4e1564bb5cd4109016d9716311a4f93f63f7f4f694d1f021390c91495faab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22e4e1564bb5cd4109016d9716311a4f93f63f7f4f694d1f021390c91495faab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76097df468fcbf7cbc2a4d0a4a847310bcc371812fd7601716bb946d2ba43b1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76097df468fcbf7cbc2a4d0a4a847310bcc371812fd7601716bb946d2ba43b1f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2de6a57f63f8b266688eb7d7efef0c3f0fe74a9cf9911709b8c2f6d6cd2f587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2de6a57f63f8b266688eb7d7efef0c3f0fe74a9cf9911709b8c2f6d6cd2f587\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7sc4p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:25Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:25 crc kubenswrapper[4731]: I1129 07:07:25.648595 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2pp9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"944440c1-51b2-4c49-b5fd-4c024fc33ace\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkwn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkwn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2pp9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:25Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:25 crc 
kubenswrapper[4731]: I1129 07:07:25.657335 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a05e3005-6e8b-4f70-830b-e7313d4bf967\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e65aa9e951747e7bd3ae1dd6212a34576cd4aa03de1753d6d3f193d4c95ecead\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://5a56ab2b9d47cd0dcd632f98defa5f4bcb711032701ea4ff28701daa43a2dca9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a56ab2b9d47cd0dcd632f98defa5f4bcb711032701ea4ff28701daa43a2dca9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:25Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:25 crc kubenswrapper[4731]: I1129 07:07:25.670388 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:25Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:25 crc kubenswrapper[4731]: I1129 07:07:25.684217 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da8cd4c4826d5a431134fff8cd78b168bfededf913cc47058413e38aadd335ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-29T07:07:25Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:25 crc kubenswrapper[4731]: I1129 07:07:25.734176 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:25 crc kubenswrapper[4731]: I1129 07:07:25.734257 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:25 crc kubenswrapper[4731]: I1129 07:07:25.734287 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:25 crc kubenswrapper[4731]: I1129 07:07:25.734310 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:25 crc kubenswrapper[4731]: I1129 07:07:25.734322 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:25Z","lastTransitionTime":"2025-11-29T07:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:07:25 crc kubenswrapper[4731]: I1129 07:07:25.805915 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:07:25 crc kubenswrapper[4731]: E1129 07:07:25.806119 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:07:25 crc kubenswrapper[4731]: I1129 07:07:25.806398 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:07:25 crc kubenswrapper[4731]: E1129 07:07:25.806470 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:07:25 crc kubenswrapper[4731]: I1129 07:07:25.806883 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:07:25 crc kubenswrapper[4731]: E1129 07:07:25.807061 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:07:25 crc kubenswrapper[4731]: I1129 07:07:25.837623 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:25 crc kubenswrapper[4731]: I1129 07:07:25.837671 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:25 crc kubenswrapper[4731]: I1129 07:07:25.837684 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:25 crc kubenswrapper[4731]: I1129 07:07:25.837702 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:25 crc kubenswrapper[4731]: I1129 07:07:25.837717 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:25Z","lastTransitionTime":"2025-11-29T07:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:25 crc kubenswrapper[4731]: I1129 07:07:25.940758 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:25 crc kubenswrapper[4731]: I1129 07:07:25.940830 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:25 crc kubenswrapper[4731]: I1129 07:07:25.940847 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:25 crc kubenswrapper[4731]: I1129 07:07:25.940869 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:25 crc kubenswrapper[4731]: I1129 07:07:25.940882 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:25Z","lastTransitionTime":"2025-11-29T07:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:26 crc kubenswrapper[4731]: I1129 07:07:26.044121 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:26 crc kubenswrapper[4731]: I1129 07:07:26.044165 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:26 crc kubenswrapper[4731]: I1129 07:07:26.044175 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:26 crc kubenswrapper[4731]: I1129 07:07:26.044193 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:26 crc kubenswrapper[4731]: I1129 07:07:26.044205 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:26Z","lastTransitionTime":"2025-11-29T07:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:26 crc kubenswrapper[4731]: I1129 07:07:26.147285 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:26 crc kubenswrapper[4731]: I1129 07:07:26.147354 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:26 crc kubenswrapper[4731]: I1129 07:07:26.147366 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:26 crc kubenswrapper[4731]: I1129 07:07:26.147391 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:26 crc kubenswrapper[4731]: I1129 07:07:26.147406 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:26Z","lastTransitionTime":"2025-11-29T07:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:26 crc kubenswrapper[4731]: I1129 07:07:26.250886 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:26 crc kubenswrapper[4731]: I1129 07:07:26.250931 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:26 crc kubenswrapper[4731]: I1129 07:07:26.250944 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:26 crc kubenswrapper[4731]: I1129 07:07:26.250964 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:26 crc kubenswrapper[4731]: I1129 07:07:26.250978 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:26Z","lastTransitionTime":"2025-11-29T07:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:26 crc kubenswrapper[4731]: I1129 07:07:26.354877 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:26 crc kubenswrapper[4731]: I1129 07:07:26.354944 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:26 crc kubenswrapper[4731]: I1129 07:07:26.354957 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:26 crc kubenswrapper[4731]: I1129 07:07:26.354977 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:26 crc kubenswrapper[4731]: I1129 07:07:26.354989 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:26Z","lastTransitionTime":"2025-11-29T07:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:26 crc kubenswrapper[4731]: I1129 07:07:26.435219 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5rsbt_5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8/kube-multus/0.log" Nov 29 07:07:26 crc kubenswrapper[4731]: I1129 07:07:26.435284 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-5rsbt" event={"ID":"5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8","Type":"ContainerStarted","Data":"bae9d331b627f3cb340763c8fae4df7b74979611e8643e081beaa89f127f9c86"} Nov 29 07:07:26 crc kubenswrapper[4731]: I1129 07:07:26.451357 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a05e3005-6e8b-4f70-830b-e7313d4bf967\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e65aa9e951747e7bd3ae1dd6212a34576cd4aa03de1753d6d3f193d4c95ecead\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef31
8bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a56ab2b9d47cd0dcd632f98defa5f4bcb711032701ea4ff28701daa43a2dca9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a56ab2b9d47cd0dcd632f98defa5f4bcb711032701ea4ff28701daa43a2dca9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:26Z is after 2025-08-24T17:21:41Z" 
Nov 29 07:07:26 crc kubenswrapper[4731]: I1129 07:07:26.457650 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:26 crc kubenswrapper[4731]: I1129 07:07:26.457715 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:26 crc kubenswrapper[4731]: I1129 07:07:26.457730 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:26 crc kubenswrapper[4731]: I1129 07:07:26.457753 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:26 crc kubenswrapper[4731]: I1129 07:07:26.457768 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:26Z","lastTransitionTime":"2025-11-29T07:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:26 crc kubenswrapper[4731]: I1129 07:07:26.466630 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:26Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:26 crc kubenswrapper[4731]: I1129 07:07:26.481145 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da8cd4c4826d5a431134fff8cd78b168bfededf913cc47058413e38aadd335ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-29T07:07:26Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:26 crc kubenswrapper[4731]: I1129 07:07:26.495959 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7d5hj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de2552c-90ca-42ab-94c0-365f2c2380d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca701ec73409c337cc55b1606c0f5def9e370c9c47b6d8f34f05e799ebc3ff36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l44ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cf95f33df0c02101f10f47b6794395211997d2a9741a50b62be363fb5b96dd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l44ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7d5hj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:26Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:26 crc kubenswrapper[4731]: I1129 07:07:26.512184 4731 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5198aca-b43e-4b59-8ec1-15b9d1cd4f4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0b6a87bb787bc87ec1a2b21918e754bda6b0f197bfdae20b010e8464cb9e70d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1938ff140a7af83ef53896ed4482dd3cf0a5429675ca28291de0915b8da27e81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-clu
ster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b19f3d5d17a1f3552b659627294f3a47d667e2da2d3ee71e44fc876614f2f5a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea4dcd3b0a9fda14855ec9e54caa59fbbd8953c958ad8d0abc6ad50d8a9143e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f488e951e0b022155c26f1b4b73363e8e5b82ab6f14e03f9812fe279ff
21d36\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1129 07:06:25.309042 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1031981952/tls.crt::/tmp/serving-cert-1031981952/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764399966\\\\\\\\\\\\\\\" (2025-11-29 07:06:06 +0000 UTC to 2025-12-29 07:06:07 +0000 UTC (now=2025-11-29 07:06:25.308983691 +0000 UTC))\\\\\\\"\\\\nI1129 07:06:25.309145 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1129 07:06:25.309167 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1129 07:06:25.309190 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764399977\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764399977\\\\\\\\\\\\\\\" (2025-11-29 06:06:17 +0000 UTC to 2026-11-29 06:06:17 +0000 UTC (now=2025-11-29 07:06:25.309162626 +0000 UTC))\\\\\\\"\\\\nI1129 07:06:25.309217 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1129 07:06:25.309274 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1129 07:06:25.309309 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1031981952/tls.crt::/tmp/serving-cert-1031981952/tls.key\\\\\\\"\\\\nI1129 07:06:25.309020 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1129 07:06:25.309457 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1129 07:06:25.309507 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1129 07:06:25.311335 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1277bda1e922e493f1c36b64a4584458ab41cee4b1930d2d6409a585be9e3b38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b7d71882ecd45557e02dfe61f572da923732714cd4480848340fbe72dabcd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34b7d71882ecd45557e02dfe61f572da923732714cd4480848340fbe72dabcd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:26Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:26 crc kubenswrapper[4731]: I1129 07:07:26.530337 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f17ec67c-91b4-419f-b031-38a828a552a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a859abc8925062e0b6f06edef1a87524357b5115db3c780653a4d378af6ba04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fe7e74083569ac159e34aecb62fd9a2bc89cb67c25d104efa3ecd93b71742b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e50d4319120f4c6445252762298822db75d04cad45eff91b9ee9e82335e0f6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2e329eab7a80200257ef2856155c442452b453c8c9cfa15f790d4688ca74573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://f2e329eab7a80200257ef2856155c442452b453c8c9cfa15f790d4688ca74573\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:02Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:26Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:26 crc kubenswrapper[4731]: I1129 07:07:26.548698 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2302dbb7-38db-4752-a5d0-2d055da3aec3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b12c2e84a5d5a5e4b2dfdc3a2052706d4b969ccd45719e05be8f1dbec5555cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shf4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2ffc93896b04d748f6dfda932202450e20bc7b2
98cc0d79c0c6499ead481d8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shf4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rscr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:26Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:26 crc kubenswrapper[4731]: I1129 07:07:26.561157 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:26 crc kubenswrapper[4731]: I1129 07:07:26.561251 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:26 crc kubenswrapper[4731]: I1129 07:07:26.561272 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:26 crc 
kubenswrapper[4731]: I1129 07:07:26.561303 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:26 crc kubenswrapper[4731]: I1129 07:07:26.561328 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:26Z","lastTransitionTime":"2025-11-29T07:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:07:26 crc kubenswrapper[4731]: I1129 07:07:26.580589 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d4585c4-ac4a-4268-b25e-47509c17cfe2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c34e10091af35202da4f97804418c6232aaccd75303ec402ed3f52de19eba30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77f2885a107dcb4bca4dd71970f28cd5c82abd26240a9b47a25bab79d7a6cfca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a3bcebacca0fb7288976faa2ee468f1496339cc6deba0ce36b29d0537d493f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e2099f7c7618e48c7ca7d3a76e9e63e4af75068a72efb7b27b3565c270787f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ec6b9ad02e848a0669ac486da3f2ccc8ca363136fcd40618c45f16e64388bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd71ed13728cbf7ab7ade3184cf8b579a598046c6cccdd7796f757aa7fc1c0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://531921ff34861104985a6d3afd284f2df4d69c25a85f1a9fe0e20f15d5f61d31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://531921ff34861104985a6d3afd284f2df4d69c25a85f1a9fe0e20f15d5f61d31\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:07:07Z\\\",\\\"message\\\":\\\"ps:{GoMap:map[10.217.4.169:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {39432221-5995-412b-967b-35e1a9405ec7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1129 07:07:06.723536 6450 admin_network_policy_controller.go:133] 
Setting up event handlers for Admin Network Policy\\\\nI1129 07:07:06.723750 6450 ovnkube.go:599] Stopped ovnkube\\\\nI1129 07:07:06.723872 6450 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1129 07:07:06.719311 6450 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7d5hj in node crc\\\\nF1129 07:07:06.724013 6450 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify cer\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:07:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-x4t5j_openshift-ovn-kubernetes(7d4585c4-ac4a-4268-b25e-47509c17cfe2)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a6b60ccbba0152ec034f29ebd79fc5f35ee869bcbec1983425f40df246c463c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b769a8a935c8eb447fe825cfb1b4c91be5abbe5992798442096dd3c00ce7c39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b769a8a935c8eb447
fe825cfb1b4c91be5abbe5992798442096dd3c00ce7c39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4t5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:26Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:26 crc kubenswrapper[4731]: I1129 07:07:26.600154 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8tvx8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"719caf85-c94c-4dc2-b28f-f5c4ec29e79e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d765b2fbbea1c5491ee3ac49d3d217921d28d094b6204c0c9ca942f5975f7b8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cfkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8tvx8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:26Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:26 crc kubenswrapper[4731]: I1129 07:07:26.614856 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:26Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:26 crc kubenswrapper[4731]: I1129 07:07:26.628036 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:26Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:26 crc kubenswrapper[4731]: I1129 07:07:26.644395 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e77534f53f3d0ba9457f1e42e081f8fa9ddeff238db875686b42dd35fca480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6830f693b439b32e14b4e825fc51d03dddc4834afc7f2fb9c403ebdc706acb7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:26Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:26 crc kubenswrapper[4731]: I1129 07:07:26.659408 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"174eea76-5d67-4e10-9a17-b4efc32676b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a567cf0e6a828fe031138f9f7874d01fe6e034cb814a3c2116c4918ddab71c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6101f1cec3dc38ec587fb109d42b09c353cf5ed6d89c9a2b799eb456b821cf58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5af4e6864674fc088ce1e153b29b66072dc3aae9b7c8c2c5dd3862a505dbac6a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86bdecb59f656fef2662f8fc78d427a4a9816fc211eedca957f20b3103f63a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:26Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:26 crc kubenswrapper[4731]: I1129 07:07:26.664600 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:26 crc kubenswrapper[4731]: I1129 07:07:26.664649 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:26 crc kubenswrapper[4731]: I1129 07:07:26.664662 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:26 crc kubenswrapper[4731]: I1129 07:07:26.664682 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:26 crc kubenswrapper[4731]: I1129 07:07:26.664694 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:26Z","lastTransitionTime":"2025-11-29T07:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:07:26 crc kubenswrapper[4731]: I1129 07:07:26.676072 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea9d7960c59f71d1f4141843ab22ff9910f442c6a92fd496e9923a11fd19578b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:26Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:26 crc kubenswrapper[4731]: I1129 07:07:26.686485 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n6mtz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dc0aca-1039-4a30-a83e-48bd320d0eae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0b16629d98d0b9d5ede0c57e9befc86af08f3b501ef46c1e17b38a2ba4e366a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2bbjx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n6mtz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:26Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:26 crc kubenswrapper[4731]: I1129 07:07:26.698275 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5rsbt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bae9d331b627f3cb340763c8fae4df7b74979611e8643e081beaa89f127f9c86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4026306c62b322aa02e08b8ebd6f9d5d1eaa75e5026c9b1fc4c8d3c1ff77e2a5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:07:24Z\\\",\\\"message\\\":\\\"2025-11-29T07:06:39+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_7fca3a92-1282-434f-ac66-0accb2c57a4a\\\\n2025-11-29T07:06:39+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_7fca3a92-1282-434f-ac66-0accb2c57a4a to /host/opt/cni/bin/\\\\n2025-11-29T07:06:39Z [verbose] multus-daemon started\\\\n2025-11-29T07:06:39Z [verbose] 
Readiness Indicator file check\\\\n2025-11-29T07:07:24Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9z2qj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5rsbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:26Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:26 crc kubenswrapper[4731]: I1129 07:07:26.715108 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sc4p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46a65d85-a3f6-4c1f-8a87-799ccfb861c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c8
d9cccf59a8d03f8346fd5c4403c377376ed7e429a1cd7b9ea0507a491342e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e803b8c855802ae1b136acaaf383df0a2f5efccc46c8605813d43a3c7c39a700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e803b8c855802ae1b136acaaf383df0a2f5efccc46c8605813d43a3c7c39a700\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53c810e72af2e060f6378cc0809584254c09ab79d506a8d1b4ff14cc9f005d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53c810e72af2e060f6378cc0809584254c09ab79d506a8d1b4ff14cc9f005d08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db06df6b97d6ad28e3d1df0b237670573e2dc84ad1e4975431d0d6fa74497617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203b
b2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db06df6b97d6ad28e3d1df0b237670573e2dc84ad1e4975431d0d6fa74497617\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e4e1564bb5cd4109016d9716311a4f93f63f7f4f694d1f021390c91495faab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22e4e1564bb5cd4109016d9716311a4f93f63f7f4f694d1f021390c91495faab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-rel
ease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76097df468fcbf7cbc2a4d0a4a847310bcc371812fd7601716bb946d2ba43b1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76097df468fcbf7cbc2a4d0a4a847310bcc371812fd7601716bb946d2ba43b1f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2de6a57f63f8b266688eb7d7efef0c3f0fe74a9cf9911709b8c2f6d6cd2f587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2de6a57f63f8b266688eb7d7efef0c3f0fe74a9cf9911709b8c2f6d6cd2f587\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7sc4p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:26Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:26 crc kubenswrapper[4731]: I1129 07:07:26.728509 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2pp9l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"944440c1-51b2-4c49-b5fd-4c024fc33ace\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkwn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkwn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2pp9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:26Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:26 crc 
kubenswrapper[4731]: I1129 07:07:26.768177 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:26 crc kubenswrapper[4731]: I1129 07:07:26.768229 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:26 crc kubenswrapper[4731]: I1129 07:07:26.768240 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:26 crc kubenswrapper[4731]: I1129 07:07:26.768261 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:26 crc kubenswrapper[4731]: I1129 07:07:26.768273 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:26Z","lastTransitionTime":"2025-11-29T07:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:07:26 crc kubenswrapper[4731]: I1129 07:07:26.806533 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2pp9l" Nov 29 07:07:26 crc kubenswrapper[4731]: E1129 07:07:26.806762 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2pp9l" podUID="944440c1-51b2-4c49-b5fd-4c024fc33ace" Nov 29 07:07:26 crc kubenswrapper[4731]: I1129 07:07:26.870500 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:26 crc kubenswrapper[4731]: I1129 07:07:26.870547 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:26 crc kubenswrapper[4731]: I1129 07:07:26.870578 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:26 crc kubenswrapper[4731]: I1129 07:07:26.870595 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:26 crc kubenswrapper[4731]: I1129 07:07:26.870607 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:26Z","lastTransitionTime":"2025-11-29T07:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:26 crc kubenswrapper[4731]: I1129 07:07:26.973963 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:26 crc kubenswrapper[4731]: I1129 07:07:26.974011 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:26 crc kubenswrapper[4731]: I1129 07:07:26.974023 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:26 crc kubenswrapper[4731]: I1129 07:07:26.974042 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:26 crc kubenswrapper[4731]: I1129 07:07:26.974054 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:26Z","lastTransitionTime":"2025-11-29T07:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:27 crc kubenswrapper[4731]: I1129 07:07:27.076174 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:27 crc kubenswrapper[4731]: I1129 07:07:27.076219 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:27 crc kubenswrapper[4731]: I1129 07:07:27.076229 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:27 crc kubenswrapper[4731]: I1129 07:07:27.076244 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:27 crc kubenswrapper[4731]: I1129 07:07:27.076254 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:27Z","lastTransitionTime":"2025-11-29T07:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:27 crc kubenswrapper[4731]: I1129 07:07:27.179978 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:27 crc kubenswrapper[4731]: I1129 07:07:27.180030 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:27 crc kubenswrapper[4731]: I1129 07:07:27.180042 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:27 crc kubenswrapper[4731]: I1129 07:07:27.180061 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:27 crc kubenswrapper[4731]: I1129 07:07:27.180075 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:27Z","lastTransitionTime":"2025-11-29T07:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:27 crc kubenswrapper[4731]: I1129 07:07:27.283056 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:27 crc kubenswrapper[4731]: I1129 07:07:27.283130 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:27 crc kubenswrapper[4731]: I1129 07:07:27.283146 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:27 crc kubenswrapper[4731]: I1129 07:07:27.283168 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:27 crc kubenswrapper[4731]: I1129 07:07:27.283181 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:27Z","lastTransitionTime":"2025-11-29T07:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:27 crc kubenswrapper[4731]: I1129 07:07:27.387068 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:27 crc kubenswrapper[4731]: I1129 07:07:27.387130 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:27 crc kubenswrapper[4731]: I1129 07:07:27.387145 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:27 crc kubenswrapper[4731]: I1129 07:07:27.387168 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:27 crc kubenswrapper[4731]: I1129 07:07:27.387182 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:27Z","lastTransitionTime":"2025-11-29T07:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:27 crc kubenswrapper[4731]: I1129 07:07:27.490602 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:27 crc kubenswrapper[4731]: I1129 07:07:27.490648 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:27 crc kubenswrapper[4731]: I1129 07:07:27.490659 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:27 crc kubenswrapper[4731]: I1129 07:07:27.490679 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:27 crc kubenswrapper[4731]: I1129 07:07:27.490691 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:27Z","lastTransitionTime":"2025-11-29T07:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:27 crc kubenswrapper[4731]: I1129 07:07:27.593341 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:27 crc kubenswrapper[4731]: I1129 07:07:27.593389 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:27 crc kubenswrapper[4731]: I1129 07:07:27.593402 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:27 crc kubenswrapper[4731]: I1129 07:07:27.593423 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:27 crc kubenswrapper[4731]: I1129 07:07:27.593440 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:27Z","lastTransitionTime":"2025-11-29T07:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:27 crc kubenswrapper[4731]: I1129 07:07:27.696761 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:27 crc kubenswrapper[4731]: I1129 07:07:27.696847 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:27 crc kubenswrapper[4731]: I1129 07:07:27.696867 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:27 crc kubenswrapper[4731]: I1129 07:07:27.696896 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:27 crc kubenswrapper[4731]: I1129 07:07:27.696917 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:27Z","lastTransitionTime":"2025-11-29T07:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:27 crc kubenswrapper[4731]: I1129 07:07:27.800215 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:27 crc kubenswrapper[4731]: I1129 07:07:27.800267 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:27 crc kubenswrapper[4731]: I1129 07:07:27.800279 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:27 crc kubenswrapper[4731]: I1129 07:07:27.800297 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:27 crc kubenswrapper[4731]: I1129 07:07:27.800314 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:27Z","lastTransitionTime":"2025-11-29T07:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:07:27 crc kubenswrapper[4731]: I1129 07:07:27.806773 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:07:27 crc kubenswrapper[4731]: I1129 07:07:27.806865 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:07:27 crc kubenswrapper[4731]: I1129 07:07:27.806944 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:07:27 crc kubenswrapper[4731]: E1129 07:07:27.806990 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:07:27 crc kubenswrapper[4731]: E1129 07:07:27.807208 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:07:27 crc kubenswrapper[4731]: E1129 07:07:27.807263 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:07:27 crc kubenswrapper[4731]: I1129 07:07:27.903775 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:27 crc kubenswrapper[4731]: I1129 07:07:27.903863 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:27 crc kubenswrapper[4731]: I1129 07:07:27.903877 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:27 crc kubenswrapper[4731]: I1129 07:07:27.903904 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:27 crc kubenswrapper[4731]: I1129 07:07:27.903923 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:27Z","lastTransitionTime":"2025-11-29T07:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:28 crc kubenswrapper[4731]: I1129 07:07:28.007163 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:28 crc kubenswrapper[4731]: I1129 07:07:28.007220 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:28 crc kubenswrapper[4731]: I1129 07:07:28.007235 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:28 crc kubenswrapper[4731]: I1129 07:07:28.007253 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:28 crc kubenswrapper[4731]: I1129 07:07:28.007267 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:28Z","lastTransitionTime":"2025-11-29T07:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:28 crc kubenswrapper[4731]: I1129 07:07:28.110757 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:28 crc kubenswrapper[4731]: I1129 07:07:28.110808 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:28 crc kubenswrapper[4731]: I1129 07:07:28.110817 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:28 crc kubenswrapper[4731]: I1129 07:07:28.110834 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:28 crc kubenswrapper[4731]: I1129 07:07:28.110875 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:28Z","lastTransitionTime":"2025-11-29T07:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:28 crc kubenswrapper[4731]: I1129 07:07:28.214334 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:28 crc kubenswrapper[4731]: I1129 07:07:28.214383 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:28 crc kubenswrapper[4731]: I1129 07:07:28.214397 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:28 crc kubenswrapper[4731]: I1129 07:07:28.214415 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:28 crc kubenswrapper[4731]: I1129 07:07:28.214429 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:28Z","lastTransitionTime":"2025-11-29T07:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:28 crc kubenswrapper[4731]: I1129 07:07:28.317295 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:28 crc kubenswrapper[4731]: I1129 07:07:28.317353 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:28 crc kubenswrapper[4731]: I1129 07:07:28.317367 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:28 crc kubenswrapper[4731]: I1129 07:07:28.317390 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:28 crc kubenswrapper[4731]: I1129 07:07:28.317406 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:28Z","lastTransitionTime":"2025-11-29T07:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:28 crc kubenswrapper[4731]: I1129 07:07:28.420743 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:28 crc kubenswrapper[4731]: I1129 07:07:28.420850 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:28 crc kubenswrapper[4731]: I1129 07:07:28.420870 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:28 crc kubenswrapper[4731]: I1129 07:07:28.420903 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:28 crc kubenswrapper[4731]: I1129 07:07:28.420915 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:28Z","lastTransitionTime":"2025-11-29T07:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:28 crc kubenswrapper[4731]: I1129 07:07:28.524311 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:28 crc kubenswrapper[4731]: I1129 07:07:28.524368 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:28 crc kubenswrapper[4731]: I1129 07:07:28.524379 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:28 crc kubenswrapper[4731]: I1129 07:07:28.524399 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:28 crc kubenswrapper[4731]: I1129 07:07:28.524412 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:28Z","lastTransitionTime":"2025-11-29T07:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:28 crc kubenswrapper[4731]: I1129 07:07:28.626916 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:28 crc kubenswrapper[4731]: I1129 07:07:28.626988 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:28 crc kubenswrapper[4731]: I1129 07:07:28.627001 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:28 crc kubenswrapper[4731]: I1129 07:07:28.627019 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:28 crc kubenswrapper[4731]: I1129 07:07:28.627033 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:28Z","lastTransitionTime":"2025-11-29T07:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:28 crc kubenswrapper[4731]: I1129 07:07:28.730586 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:28 crc kubenswrapper[4731]: I1129 07:07:28.730641 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:28 crc kubenswrapper[4731]: I1129 07:07:28.730654 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:28 crc kubenswrapper[4731]: I1129 07:07:28.730674 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:28 crc kubenswrapper[4731]: I1129 07:07:28.730688 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:28Z","lastTransitionTime":"2025-11-29T07:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:07:28 crc kubenswrapper[4731]: I1129 07:07:28.806664 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2pp9l" Nov 29 07:07:28 crc kubenswrapper[4731]: E1129 07:07:28.806884 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2pp9l" podUID="944440c1-51b2-4c49-b5fd-4c024fc33ace" Nov 29 07:07:28 crc kubenswrapper[4731]: I1129 07:07:28.834118 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:28 crc kubenswrapper[4731]: I1129 07:07:28.834200 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:28 crc kubenswrapper[4731]: I1129 07:07:28.834219 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:28 crc kubenswrapper[4731]: I1129 07:07:28.834252 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:28 crc kubenswrapper[4731]: I1129 07:07:28.834275 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:28Z","lastTransitionTime":"2025-11-29T07:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:28 crc kubenswrapper[4731]: I1129 07:07:28.937139 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:28 crc kubenswrapper[4731]: I1129 07:07:28.937190 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:28 crc kubenswrapper[4731]: I1129 07:07:28.937203 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:28 crc kubenswrapper[4731]: I1129 07:07:28.937221 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:28 crc kubenswrapper[4731]: I1129 07:07:28.937234 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:28Z","lastTransitionTime":"2025-11-29T07:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:29 crc kubenswrapper[4731]: I1129 07:07:29.040743 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:29 crc kubenswrapper[4731]: I1129 07:07:29.040798 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:29 crc kubenswrapper[4731]: I1129 07:07:29.040808 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:29 crc kubenswrapper[4731]: I1129 07:07:29.040827 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:29 crc kubenswrapper[4731]: I1129 07:07:29.040841 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:29Z","lastTransitionTime":"2025-11-29T07:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:29 crc kubenswrapper[4731]: I1129 07:07:29.143436 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:29 crc kubenswrapper[4731]: I1129 07:07:29.143503 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:29 crc kubenswrapper[4731]: I1129 07:07:29.143518 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:29 crc kubenswrapper[4731]: I1129 07:07:29.143540 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:29 crc kubenswrapper[4731]: I1129 07:07:29.143555 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:29Z","lastTransitionTime":"2025-11-29T07:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:29 crc kubenswrapper[4731]: I1129 07:07:29.245879 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:29 crc kubenswrapper[4731]: I1129 07:07:29.245934 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:29 crc kubenswrapper[4731]: I1129 07:07:29.245946 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:29 crc kubenswrapper[4731]: I1129 07:07:29.245969 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:29 crc kubenswrapper[4731]: I1129 07:07:29.245982 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:29Z","lastTransitionTime":"2025-11-29T07:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:29 crc kubenswrapper[4731]: I1129 07:07:29.348976 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:29 crc kubenswrapper[4731]: I1129 07:07:29.349023 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:29 crc kubenswrapper[4731]: I1129 07:07:29.349033 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:29 crc kubenswrapper[4731]: I1129 07:07:29.349049 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:29 crc kubenswrapper[4731]: I1129 07:07:29.349062 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:29Z","lastTransitionTime":"2025-11-29T07:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:29 crc kubenswrapper[4731]: I1129 07:07:29.450827 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:29 crc kubenswrapper[4731]: I1129 07:07:29.450867 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:29 crc kubenswrapper[4731]: I1129 07:07:29.450876 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:29 crc kubenswrapper[4731]: I1129 07:07:29.450891 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:29 crc kubenswrapper[4731]: I1129 07:07:29.450902 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:29Z","lastTransitionTime":"2025-11-29T07:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:29 crc kubenswrapper[4731]: I1129 07:07:29.553005 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:29 crc kubenswrapper[4731]: I1129 07:07:29.553056 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:29 crc kubenswrapper[4731]: I1129 07:07:29.553065 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:29 crc kubenswrapper[4731]: I1129 07:07:29.553081 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:29 crc kubenswrapper[4731]: I1129 07:07:29.553092 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:29Z","lastTransitionTime":"2025-11-29T07:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:29 crc kubenswrapper[4731]: I1129 07:07:29.656284 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:29 crc kubenswrapper[4731]: I1129 07:07:29.656401 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:29 crc kubenswrapper[4731]: I1129 07:07:29.656427 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:29 crc kubenswrapper[4731]: I1129 07:07:29.656453 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:29 crc kubenswrapper[4731]: I1129 07:07:29.656464 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:29Z","lastTransitionTime":"2025-11-29T07:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:07:29 crc kubenswrapper[4731]: I1129 07:07:29.671909 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:07:29 crc kubenswrapper[4731]: E1129 07:07:29.672196 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-29 07:08:33.67216442 +0000 UTC m=+152.562525523 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:07:29 crc kubenswrapper[4731]: I1129 07:07:29.759009 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:29 crc kubenswrapper[4731]: I1129 07:07:29.759080 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:29 crc kubenswrapper[4731]: I1129 07:07:29.759090 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:29 crc kubenswrapper[4731]: I1129 07:07:29.759110 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:29 crc kubenswrapper[4731]: I1129 07:07:29.759123 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:29Z","lastTransitionTime":"2025-11-29T07:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:07:29 crc kubenswrapper[4731]: I1129 07:07:29.806258 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:07:29 crc kubenswrapper[4731]: I1129 07:07:29.806352 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:07:29 crc kubenswrapper[4731]: I1129 07:07:29.806409 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:07:29 crc kubenswrapper[4731]: E1129 07:07:29.806504 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:07:29 crc kubenswrapper[4731]: E1129 07:07:29.806717 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:07:29 crc kubenswrapper[4731]: E1129 07:07:29.807169 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:07:29 crc kubenswrapper[4731]: I1129 07:07:29.807542 4731 scope.go:117] "RemoveContainer" containerID="531921ff34861104985a6d3afd284f2df4d69c25a85f1a9fe0e20f15d5f61d31" Nov 29 07:07:29 crc kubenswrapper[4731]: I1129 07:07:29.861790 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:29 crc kubenswrapper[4731]: I1129 07:07:29.861839 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:29 crc kubenswrapper[4731]: I1129 07:07:29.861853 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:29 crc kubenswrapper[4731]: I1129 07:07:29.861872 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:29 crc kubenswrapper[4731]: I1129 07:07:29.861885 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:29Z","lastTransitionTime":"2025-11-29T07:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:29 crc kubenswrapper[4731]: I1129 07:07:29.964660 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:29 crc kubenswrapper[4731]: I1129 07:07:29.964713 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:29 crc kubenswrapper[4731]: I1129 07:07:29.964736 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:29 crc kubenswrapper[4731]: I1129 07:07:29.964754 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:29 crc kubenswrapper[4731]: I1129 07:07:29.964765 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:29Z","lastTransitionTime":"2025-11-29T07:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:30 crc kubenswrapper[4731]: I1129 07:07:30.068158 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:30 crc kubenswrapper[4731]: I1129 07:07:30.068207 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:30 crc kubenswrapper[4731]: I1129 07:07:30.068217 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:30 crc kubenswrapper[4731]: I1129 07:07:30.068236 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:30 crc kubenswrapper[4731]: I1129 07:07:30.068247 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:30Z","lastTransitionTime":"2025-11-29T07:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:30 crc kubenswrapper[4731]: I1129 07:07:30.171677 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:30 crc kubenswrapper[4731]: I1129 07:07:30.171737 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:30 crc kubenswrapper[4731]: I1129 07:07:30.171753 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:30 crc kubenswrapper[4731]: I1129 07:07:30.171782 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:30 crc kubenswrapper[4731]: I1129 07:07:30.171798 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:30Z","lastTransitionTime":"2025-11-29T07:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:30 crc kubenswrapper[4731]: I1129 07:07:30.178385 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:07:30 crc kubenswrapper[4731]: I1129 07:07:30.178483 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:07:30 crc kubenswrapper[4731]: E1129 07:07:30.178665 4731 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 29 07:07:30 crc kubenswrapper[4731]: E1129 07:07:30.178697 4731 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 29 07:07:30 crc kubenswrapper[4731]: E1129 07:07:30.178843 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-29 07:08:34.178811477 +0000 UTC m=+153.069172750 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 29 07:07:30 crc kubenswrapper[4731]: E1129 07:07:30.178948 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-29 07:08:34.17891815 +0000 UTC m=+153.069279253 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 29 07:07:30 crc kubenswrapper[4731]: I1129 07:07:30.274687 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:30 crc kubenswrapper[4731]: I1129 07:07:30.275054 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:30 crc kubenswrapper[4731]: I1129 07:07:30.275068 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:30 crc kubenswrapper[4731]: I1129 07:07:30.275088 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:30 crc kubenswrapper[4731]: I1129 07:07:30.275102 4731 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:30Z","lastTransitionTime":"2025-11-29T07:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:07:30 crc kubenswrapper[4731]: I1129 07:07:30.279373 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:07:30 crc kubenswrapper[4731]: I1129 07:07:30.279423 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:07:30 crc kubenswrapper[4731]: E1129 07:07:30.279624 4731 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 29 07:07:30 crc kubenswrapper[4731]: E1129 07:07:30.279620 4731 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 29 07:07:30 crc kubenswrapper[4731]: E1129 07:07:30.279646 4731 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 29 07:07:30 crc kubenswrapper[4731]: E1129 
07:07:30.279664 4731 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 29 07:07:30 crc kubenswrapper[4731]: E1129 07:07:30.279666 4731 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:07:30 crc kubenswrapper[4731]: E1129 07:07:30.279681 4731 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:07:30 crc kubenswrapper[4731]: E1129 07:07:30.279738 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-29 07:08:34.279718891 +0000 UTC m=+153.170079994 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:07:30 crc kubenswrapper[4731]: E1129 07:07:30.279758 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-29 07:08:34.279749362 +0000 UTC m=+153.170110465 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:07:30 crc kubenswrapper[4731]: I1129 07:07:30.377591 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:30 crc kubenswrapper[4731]: I1129 07:07:30.377931 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:30 crc kubenswrapper[4731]: I1129 07:07:30.378015 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:30 crc kubenswrapper[4731]: I1129 07:07:30.378054 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:30 crc kubenswrapper[4731]: I1129 07:07:30.378073 4731 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:30Z","lastTransitionTime":"2025-11-29T07:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:07:30 crc kubenswrapper[4731]: I1129 07:07:30.460724 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x4t5j_7d4585c4-ac4a-4268-b25e-47509c17cfe2/ovnkube-controller/2.log" Nov 29 07:07:30 crc kubenswrapper[4731]: I1129 07:07:30.472651 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" event={"ID":"7d4585c4-ac4a-4268-b25e-47509c17cfe2","Type":"ContainerStarted","Data":"90ceaa53a40826a30748a2057fff56bcb3597cf73ed5e0a18bc45d98e0e6c1b6"} Nov 29 07:07:30 crc kubenswrapper[4731]: I1129 07:07:30.473926 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" Nov 29 07:07:30 crc kubenswrapper[4731]: I1129 07:07:30.481488 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:30 crc kubenswrapper[4731]: I1129 07:07:30.481514 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:30 crc kubenswrapper[4731]: I1129 07:07:30.481523 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:30 crc kubenswrapper[4731]: I1129 07:07:30.481539 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:30 crc kubenswrapper[4731]: I1129 07:07:30.481547 4731 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:30Z","lastTransitionTime":"2025-11-29T07:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:07:30 crc kubenswrapper[4731]: I1129 07:07:30.492617 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"174eea76-5d67-4e10-9a17-b4efc32676b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a567cf0e6a828fe031138f9f7874d01fe6e034cb814a3c2116c4918ddab71c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6101f1cec3dc38ec587fb109d42b09c353cf5ed6d89c9a2b799eb456b821cf58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5af4e6864674fc088ce1e153b29b66072dc3aae9b7c8c2c5dd3862a505dbac6a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://b86bdecb59f656fef2662f8fc78d427a4a9816fc211eedca957f20b3103f63a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:30Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:30 crc kubenswrapper[4731]: I1129 07:07:30.519230 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea9d7960c59f71d1f4141843ab22ff9910f442c6a92fd496e9923a11fd19578b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:30Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:30 crc kubenswrapper[4731]: I1129 07:07:30.531679 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n6mtz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dc0aca-1039-4a30-a83e-48bd320d0eae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0b16629d98d0b9d5ede0c57e9befc86af08f3b501ef46c1e17b38a2ba4e366a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2bbjx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n6mtz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:30Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:30 crc kubenswrapper[4731]: I1129 07:07:30.547905 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5rsbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bae9d331b627f3cb340763c8fae4df7b74979611e
8643e081beaa89f127f9c86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4026306c62b322aa02e08b8ebd6f9d5d1eaa75e5026c9b1fc4c8d3c1ff77e2a5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:07:24Z\\\",\\\"message\\\":\\\"2025-11-29T07:06:39+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_7fca3a92-1282-434f-ac66-0accb2c57a4a\\\\n2025-11-29T07:06:39+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_7fca3a92-1282-434f-ac66-0accb2c57a4a to /host/opt/cni/bin/\\\\n2025-11-29T07:06:39Z [verbose] multus-daemon started\\\\n2025-11-29T07:06:39Z [verbose] Readiness Indicator file check\\\\n2025-11-29T07:07:24Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9z2qj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5rsbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:30Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:30 crc kubenswrapper[4731]: I1129 07:07:30.567245 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sc4p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46a65d85-a3f6-4c1f-8a87-799ccfb861c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c8d9cccf59a8d03f8346fd5c4403c377376ed7e429a1cd7b9ea0507a491342e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"
imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e803b8c855802ae1b136acaaf383df0a2f5efccc46c8605813d43a3c7c39a700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e803b8c855802ae1b136acaaf383df0a2f5efccc46c8605813d43a3c7c39a700\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\
"containerID\\\":\\\"cri-o://53c810e72af2e060f6378cc0809584254c09ab79d506a8d1b4ff14cc9f005d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53c810e72af2e060f6378cc0809584254c09ab79d506a8d1b4ff14cc9f005d08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db06df6b97d6ad28e3d1df0b237670573e2dc84ad1e4975431d0d6fa74497617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\
"containerID\\\":\\\"cri-o://db06df6b97d6ad28e3d1df0b237670573e2dc84ad1e4975431d0d6fa74497617\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e4e1564bb5cd4109016d9716311a4f93f63f7f4f694d1f021390c91495faab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22e4e1564bb5cd4109016d9716311a4f93f63f7f4f694d1f021390c91495faab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":tru
e,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76097df468fcbf7cbc2a4d0a4a847310bcc371812fd7601716bb946d2ba43b1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76097df468fcbf7cbc2a4d0a4a847310bcc371812fd7601716bb946d2ba43b1f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2de6a57f63f8b266688eb7d7efef0c3f0fe74a9cf9911709b8c2f6d6cd2f587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2de6a57f63f8b266688eb7d7efef0c3f0fe74a9cf9911709b8c2f6d6cd2f587\\\",\\\"
exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7sc4p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:30Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:30 crc kubenswrapper[4731]: I1129 07:07:30.582040 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2pp9l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"944440c1-51b2-4c49-b5fd-4c024fc33ace\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkwn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkwn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2pp9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:30Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:30 crc 
kubenswrapper[4731]: I1129 07:07:30.584189 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:30 crc kubenswrapper[4731]: I1129 07:07:30.584219 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:30 crc kubenswrapper[4731]: I1129 07:07:30.584230 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:30 crc kubenswrapper[4731]: I1129 07:07:30.584246 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:30 crc kubenswrapper[4731]: I1129 07:07:30.584258 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:30Z","lastTransitionTime":"2025-11-29T07:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:30 crc kubenswrapper[4731]: I1129 07:07:30.595360 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a05e3005-6e8b-4f70-830b-e7313d4bf967\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e65aa9e951747e7bd3ae1dd6212a34576cd4aa03de1753d6d3f193d4c95ecead\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a56ab2b9d47cd0dcd632f98defa5f4bcb711032701ea4ff28701daa43a2dca9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a56ab2b9d47cd0dcd632f98defa5f4bcb711032701ea4ff28701daa43a2dca9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:30Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:30 crc kubenswrapper[4731]: I1129 07:07:30.608060 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:30Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:30 crc kubenswrapper[4731]: I1129 07:07:30.623600 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da8cd4c4826d5a431134fff8cd78b168bfededf913cc47058413e38aadd335ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-29T07:07:30Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:30 crc kubenswrapper[4731]: I1129 07:07:30.638920 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7d5hj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de2552c-90ca-42ab-94c0-365f2c2380d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca701ec73409c337cc55b1606c0f5def9e370c9c47b6d8f34f05e799ebc3ff36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l44ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cf95f33df0c02101f10f47b6794395211997d2a9741a50b62be363fb5b96dd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l44ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7d5hj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:30Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:30 crc kubenswrapper[4731]: I1129 07:07:30.657506 4731 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5198aca-b43e-4b59-8ec1-15b9d1cd4f4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0b6a87bb787bc87ec1a2b21918e754bda6b0f197bfdae20b010e8464cb9e70d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1938ff140a7af83ef53896ed4482dd3cf0a5429675ca28291de0915b8da27e81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-clu
ster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b19f3d5d17a1f3552b659627294f3a47d667e2da2d3ee71e44fc876614f2f5a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea4dcd3b0a9fda14855ec9e54caa59fbbd8953c958ad8d0abc6ad50d8a9143e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f488e951e0b022155c26f1b4b73363e8e5b82ab6f14e03f9812fe279ff
21d36\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1129 07:06:25.309042 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1031981952/tls.crt::/tmp/serving-cert-1031981952/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764399966\\\\\\\\\\\\\\\" (2025-11-29 07:06:06 +0000 UTC to 2025-12-29 07:06:07 +0000 UTC (now=2025-11-29 07:06:25.308983691 +0000 UTC))\\\\\\\"\\\\nI1129 07:06:25.309145 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1129 07:06:25.309167 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1129 07:06:25.309190 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764399977\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764399977\\\\\\\\\\\\\\\" (2025-11-29 06:06:17 +0000 UTC to 2026-11-29 06:06:17 +0000 UTC (now=2025-11-29 07:06:25.309162626 +0000 UTC))\\\\\\\"\\\\nI1129 07:06:25.309217 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1129 07:06:25.309274 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1129 07:06:25.309309 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1031981952/tls.crt::/tmp/serving-cert-1031981952/tls.key\\\\\\\"\\\\nI1129 07:06:25.309020 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1129 07:06:25.309457 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1129 07:06:25.309507 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1129 07:06:25.311335 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1277bda1e922e493f1c36b64a4584458ab41cee4b1930d2d6409a585be9e3b38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b7d71882ecd45557e02dfe61f572da923732714cd4480848340fbe72dabcd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34b7d71882ecd45557e02dfe61f572da923732714cd4480848340fbe72dabcd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:30Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:30 crc kubenswrapper[4731]: I1129 07:07:30.674182 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f17ec67c-91b4-419f-b031-38a828a552a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a859abc8925062e0b6f06edef1a87524357b5115db3c780653a4d378af6ba04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fe7e74083569ac159e34aecb62fd9a2bc89cb67c25d104efa3ecd93b71742b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e50d4319120f4c6445252762298822db75d04cad45eff91b9ee9e82335e0f6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2e329eab7a80200257ef2856155c442452b453c8c9cfa15f790d4688ca74573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://f2e329eab7a80200257ef2856155c442452b453c8c9cfa15f790d4688ca74573\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:02Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:30Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:30 crc kubenswrapper[4731]: I1129 07:07:30.687243 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:30 crc kubenswrapper[4731]: I1129 07:07:30.687315 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:30 crc kubenswrapper[4731]: I1129 07:07:30.687327 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:30 crc kubenswrapper[4731]: I1129 07:07:30.687348 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:30 crc kubenswrapper[4731]: I1129 07:07:30.687365 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:30Z","lastTransitionTime":"2025-11-29T07:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:30 crc kubenswrapper[4731]: I1129 07:07:30.691426 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2302dbb7-38db-4752-a5d0-2d055da3aec3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b12c2e84a5d5a5e4b2dfdc3a2052706d4b969ccd45719e05be8f1dbec5555cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shf4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2ffc93896b04d748f6dfda932202450e20bc7b298cc0d79c0c6499ead481d8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shf4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rscr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:30Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:30 crc kubenswrapper[4731]: I1129 07:07:30.714385 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d4585c4-ac4a-4268-b25e-47509c17cfe2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c34e10091af35202da4f97804418c6232aaccd75303ec402ed3f52de19eba30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77f2885a107dcb4bca4dd71970f28cd5c82abd26240a9b47a25bab79d7a6cfca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a3bcebacca0fb7288976faa2ee468f1496339cc6deba0ce36b29d0537d493f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e2099f7c7618e48c7ca7d3a76e9e63e4af75068a72efb7b27b3565c270787f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ec6b9ad02e848a0669ac486da3f2ccc8ca363136fcd40618c45f16e64388bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd71ed13728cbf7ab7ade3184cf8b579a598046c6cccdd7796f757aa7fc1c0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90ceaa53a40826a30748a2057fff56bcb3597cf73ed5e0a18bc45d98e0e6c1b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://531921ff34861104985a6d3afd284f2df4d69c25a85f1a9fe0e20f15d5f61d31\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:07:07Z\\\",\\\"message\\\":\\\"ps:{GoMap:map[10.217.4.169:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {39432221-5995-412b-967b-35e1a9405ec7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1129 07:07:06.723536 6450 admin_network_policy_controller.go:133] 
Setting up event handlers for Admin Network Policy\\\\nI1129 07:07:06.723750 6450 ovnkube.go:599] Stopped ovnkube\\\\nI1129 07:07:06.723872 6450 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1129 07:07:06.719311 6450 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7d5hj in node crc\\\\nF1129 07:07:06.724013 6450 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify 
cer\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:07:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\
\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a6b60ccbba0152ec034f29ebd79fc5f35ee869bcbec1983425f40df246c463c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b769a8a935c8eb447fe825cfb1b4c91be5abbe5992798442096dd3c00ce7c39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b769a8a935c8eb447fe825cfb1b4c91be5abbe5992798442096dd3c00ce7c39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4t5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:30Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:30 crc kubenswrapper[4731]: I1129 07:07:30.727590 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8tvx8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"719caf85-c94c-4dc2-b28f-f5c4ec29e79e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d765b2fbbea1c5491ee3ac49d3d217921d28d094b6204c0c9ca942f5975f7b8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cfkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8tvx8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:30Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:30 crc kubenswrapper[4731]: I1129 07:07:30.744584 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:30Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:30 crc kubenswrapper[4731]: I1129 07:07:30.761152 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:30Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:30 crc kubenswrapper[4731]: I1129 07:07:30.776468 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e77534f53f3d0ba9457f1e42e081f8fa9ddeff238db875686b42dd35fca480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6830f693b439b32e14b4e825fc51d03dddc4834afc7f2fb9c403ebdc706acb7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:30Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:30 crc kubenswrapper[4731]: I1129 07:07:30.789519 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:30 crc kubenswrapper[4731]: I1129 07:07:30.789555 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:30 crc kubenswrapper[4731]: I1129 07:07:30.789586 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:30 crc kubenswrapper[4731]: I1129 07:07:30.789606 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:30 crc kubenswrapper[4731]: I1129 07:07:30.789617 4731 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:30Z","lastTransitionTime":"2025-11-29T07:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:07:30 crc kubenswrapper[4731]: I1129 07:07:30.806015 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2pp9l" Nov 29 07:07:30 crc kubenswrapper[4731]: E1129 07:07:30.806152 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2pp9l" podUID="944440c1-51b2-4c49-b5fd-4c024fc33ace" Nov 29 07:07:30 crc kubenswrapper[4731]: I1129 07:07:30.893431 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:30 crc kubenswrapper[4731]: I1129 07:07:30.893488 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:30 crc kubenswrapper[4731]: I1129 07:07:30.893499 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:30 crc kubenswrapper[4731]: I1129 07:07:30.893518 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:30 crc kubenswrapper[4731]: I1129 07:07:30.893529 4731 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:30Z","lastTransitionTime":"2025-11-29T07:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:07:30 crc kubenswrapper[4731]: I1129 07:07:30.996211 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:30 crc kubenswrapper[4731]: I1129 07:07:30.996260 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:30 crc kubenswrapper[4731]: I1129 07:07:30.996271 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:30 crc kubenswrapper[4731]: I1129 07:07:30.996291 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:30 crc kubenswrapper[4731]: I1129 07:07:30.996308 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:30Z","lastTransitionTime":"2025-11-29T07:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:31 crc kubenswrapper[4731]: I1129 07:07:31.099358 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:31 crc kubenswrapper[4731]: I1129 07:07:31.099435 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:31 crc kubenswrapper[4731]: I1129 07:07:31.099447 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:31 crc kubenswrapper[4731]: I1129 07:07:31.099472 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:31 crc kubenswrapper[4731]: I1129 07:07:31.099488 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:31Z","lastTransitionTime":"2025-11-29T07:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:31 crc kubenswrapper[4731]: I1129 07:07:31.202685 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:31 crc kubenswrapper[4731]: I1129 07:07:31.202735 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:31 crc kubenswrapper[4731]: I1129 07:07:31.202747 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:31 crc kubenswrapper[4731]: I1129 07:07:31.202769 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:31 crc kubenswrapper[4731]: I1129 07:07:31.202783 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:31Z","lastTransitionTime":"2025-11-29T07:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:31 crc kubenswrapper[4731]: I1129 07:07:31.305918 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:31 crc kubenswrapper[4731]: I1129 07:07:31.305993 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:31 crc kubenswrapper[4731]: I1129 07:07:31.306005 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:31 crc kubenswrapper[4731]: I1129 07:07:31.306026 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:31 crc kubenswrapper[4731]: I1129 07:07:31.306040 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:31Z","lastTransitionTime":"2025-11-29T07:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:31 crc kubenswrapper[4731]: I1129 07:07:31.409274 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:31 crc kubenswrapper[4731]: I1129 07:07:31.409343 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:31 crc kubenswrapper[4731]: I1129 07:07:31.409357 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:31 crc kubenswrapper[4731]: I1129 07:07:31.409379 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:31 crc kubenswrapper[4731]: I1129 07:07:31.409394 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:31Z","lastTransitionTime":"2025-11-29T07:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:31 crc kubenswrapper[4731]: I1129 07:07:31.479285 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x4t5j_7d4585c4-ac4a-4268-b25e-47509c17cfe2/ovnkube-controller/3.log" Nov 29 07:07:31 crc kubenswrapper[4731]: I1129 07:07:31.480084 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x4t5j_7d4585c4-ac4a-4268-b25e-47509c17cfe2/ovnkube-controller/2.log" Nov 29 07:07:31 crc kubenswrapper[4731]: I1129 07:07:31.483293 4731 generic.go:334] "Generic (PLEG): container finished" podID="7d4585c4-ac4a-4268-b25e-47509c17cfe2" containerID="90ceaa53a40826a30748a2057fff56bcb3597cf73ed5e0a18bc45d98e0e6c1b6" exitCode=1 Nov 29 07:07:31 crc kubenswrapper[4731]: I1129 07:07:31.483365 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" event={"ID":"7d4585c4-ac4a-4268-b25e-47509c17cfe2","Type":"ContainerDied","Data":"90ceaa53a40826a30748a2057fff56bcb3597cf73ed5e0a18bc45d98e0e6c1b6"} Nov 29 07:07:31 crc kubenswrapper[4731]: I1129 07:07:31.483422 4731 scope.go:117] "RemoveContainer" containerID="531921ff34861104985a6d3afd284f2df4d69c25a85f1a9fe0e20f15d5f61d31" Nov 29 07:07:31 crc kubenswrapper[4731]: I1129 07:07:31.484195 4731 scope.go:117] "RemoveContainer" containerID="90ceaa53a40826a30748a2057fff56bcb3597cf73ed5e0a18bc45d98e0e6c1b6" Nov 29 07:07:31 crc kubenswrapper[4731]: E1129 07:07:31.484866 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-x4t5j_openshift-ovn-kubernetes(7d4585c4-ac4a-4268-b25e-47509c17cfe2)\"" pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" podUID="7d4585c4-ac4a-4268-b25e-47509c17cfe2" Nov 29 07:07:31 crc kubenswrapper[4731]: I1129 07:07:31.504269 4731 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:31 crc kubenswrapper[4731]: I1129 07:07:31.512790 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:31 crc kubenswrapper[4731]: I1129 07:07:31.512860 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:31 crc kubenswrapper[4731]: I1129 07:07:31.512874 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:31 crc kubenswrapper[4731]: I1129 07:07:31.512946 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:31 crc kubenswrapper[4731]: I1129 07:07:31.512970 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:31Z","lastTransitionTime":"2025-11-29T07:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:07:31 crc kubenswrapper[4731]: I1129 07:07:31.520221 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:31 crc kubenswrapper[4731]: I1129 07:07:31.534476 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e77534f53f3d0ba9457f1e42e081f8fa9ddeff238db875686b42dd35fca480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6830f693b439b32e14b4e825fc51d03dddc4834afc7f2fb9c403ebdc706acb7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:31 crc kubenswrapper[4731]: I1129 07:07:31.550153 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"174eea76-5d67-4e10-9a17-b4efc32676b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a567cf0e6a828fe031138f9f7874d01fe6e034cb814a3c2116c4918ddab71c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6101f1cec3dc38ec587fb109d42b09c353cf5ed6d89c9a2b799eb456b821cf58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5af4e6864674fc088ce1e153b29b66072dc3aae9b7c8c2c5dd3862a505dbac6a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86bdecb59f656fef2662f8fc78d427a4a9816fc211eedca957f20b3103f63a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:31 crc kubenswrapper[4731]: I1129 07:07:31.565030 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea9d7960c59f71d1f4141843ab22ff9910f442c6a92fd496e9923a11fd19578b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:31 crc kubenswrapper[4731]: I1129 07:07:31.577650 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n6mtz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dc0aca-1039-4a30-a83e-48bd320d0eae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0b16629d98d0b9d5ede0c57e9befc86af08f3b501ef46c1e17b38a2ba4e366a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2bbjx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n6mtz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:31 crc kubenswrapper[4731]: I1129 07:07:31.592331 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5rsbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bae9d331b627f3cb340763c8fae4df7b74979611e
8643e081beaa89f127f9c86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4026306c62b322aa02e08b8ebd6f9d5d1eaa75e5026c9b1fc4c8d3c1ff77e2a5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:07:24Z\\\",\\\"message\\\":\\\"2025-11-29T07:06:39+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_7fca3a92-1282-434f-ac66-0accb2c57a4a\\\\n2025-11-29T07:06:39+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_7fca3a92-1282-434f-ac66-0accb2c57a4a to /host/opt/cni/bin/\\\\n2025-11-29T07:06:39Z [verbose] multus-daemon started\\\\n2025-11-29T07:06:39Z [verbose] Readiness Indicator file check\\\\n2025-11-29T07:07:24Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9z2qj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5rsbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:31 crc kubenswrapper[4731]: I1129 07:07:31.610890 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sc4p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46a65d85-a3f6-4c1f-8a87-799ccfb861c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c8d9cccf59a8d03f8346fd5c4403c377376ed7e429a1cd7b9ea0507a491342e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"
imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e803b8c855802ae1b136acaaf383df0a2f5efccc46c8605813d43a3c7c39a700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e803b8c855802ae1b136acaaf383df0a2f5efccc46c8605813d43a3c7c39a700\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\
"containerID\\\":\\\"cri-o://53c810e72af2e060f6378cc0809584254c09ab79d506a8d1b4ff14cc9f005d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53c810e72af2e060f6378cc0809584254c09ab79d506a8d1b4ff14cc9f005d08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db06df6b97d6ad28e3d1df0b237670573e2dc84ad1e4975431d0d6fa74497617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\
"containerID\\\":\\\"cri-o://db06df6b97d6ad28e3d1df0b237670573e2dc84ad1e4975431d0d6fa74497617\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e4e1564bb5cd4109016d9716311a4f93f63f7f4f694d1f021390c91495faab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22e4e1564bb5cd4109016d9716311a4f93f63f7f4f694d1f021390c91495faab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":tru
e,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76097df468fcbf7cbc2a4d0a4a847310bcc371812fd7601716bb946d2ba43b1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76097df468fcbf7cbc2a4d0a4a847310bcc371812fd7601716bb946d2ba43b1f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2de6a57f63f8b266688eb7d7efef0c3f0fe74a9cf9911709b8c2f6d6cd2f587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2de6a57f63f8b266688eb7d7efef0c3f0fe74a9cf9911709b8c2f6d6cd2f587\\\",\\\"
exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7sc4p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:31 crc kubenswrapper[4731]: I1129 07:07:31.615755 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:31 crc kubenswrapper[4731]: I1129 07:07:31.615791 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:31 crc kubenswrapper[4731]: I1129 07:07:31.615802 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:31 crc kubenswrapper[4731]: I1129 07:07:31.615818 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:31 crc kubenswrapper[4731]: I1129 07:07:31.615832 4731 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:31Z","lastTransitionTime":"2025-11-29T07:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:07:31 crc kubenswrapper[4731]: I1129 07:07:31.626046 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2pp9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"944440c1-51b2-4c49-b5fd-4c024fc33ace\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkwn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkwn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2pp9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:31 crc 
kubenswrapper[4731]: I1129 07:07:31.640446 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a05e3005-6e8b-4f70-830b-e7313d4bf967\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e65aa9e951747e7bd3ae1dd6212a34576cd4aa03de1753d6d3f193d4c95ecead\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://5a56ab2b9d47cd0dcd632f98defa5f4bcb711032701ea4ff28701daa43a2dca9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a56ab2b9d47cd0dcd632f98defa5f4bcb711032701ea4ff28701daa43a2dca9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:31 crc kubenswrapper[4731]: I1129 07:07:31.655803 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:31 crc kubenswrapper[4731]: I1129 07:07:31.670915 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da8cd4c4826d5a431134fff8cd78b168bfededf913cc47058413e38aadd335ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-29T07:07:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:31 crc kubenswrapper[4731]: I1129 07:07:31.686448 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5198aca-b43e-4b59-8ec1-15b9d1cd4f4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0b6a87bb787bc87ec1a2b21918e754bda6b0f197bfdae20b010e8464cb9e70d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\"
:\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1938ff140a7af83ef53896ed4482dd3cf0a5429675ca28291de0915b8da27e81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b19f3d5d17a1f3552b659627294f3a47d667e2da2d3ee71e44fc876614f2f5a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea4dcd3b0a9fda14855ec9e54caa59fbbd8953c958ad8d0abc6ad50d8a9143e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiser
ver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f488e951e0b022155c26f1b4b73363e8e5b82ab6f14e03f9812fe279ff21d36\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1129 07:06:25.309042 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1031981952/tls.crt::/tmp/serving-cert-1031981952/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764399966\\\\\\\\\\\\\\\" (2025-11-29 07:06:06 +0000 UTC to 2025-12-29 07:06:07 +0000 UTC (now=2025-11-29 07:06:25.308983691 +0000 UTC))\\\\\\\"\\\\nI1129 07:06:25.309145 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1129 07:06:25.309167 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1129 07:06:25.309190 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764399977\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764399977\\\\\\\\\\\\\\\" (2025-11-29 06:06:17 +0000 UTC to 2026-11-29 06:06:17 +0000 UTC (now=2025-11-29 07:06:25.309162626 +0000 UTC))\\\\\\\"\\\\nI1129 07:06:25.309217 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1129 07:06:25.309274 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1129 07:06:25.309309 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1031981952/tls.crt::/tmp/serving-cert-1031981952/tls.key\\\\\\\"\\\\nI1129 07:06:25.309020 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1129 07:06:25.309457 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1129 07:06:25.309507 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1129 07:06:25.311335 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1277bda1e922e493f1c36b64a4584458ab41cee4b1930d2d6409a585be9e3b38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b7d71882ecd45557e02dfe61f5
72da923732714cd4480848340fbe72dabcd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34b7d71882ecd45557e02dfe61f572da923732714cd4480848340fbe72dabcd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:31 crc kubenswrapper[4731]: I1129 07:07:31.699524 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f17ec67c-91b4-419f-b031-38a828a552a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a859abc8925062e0b6f06edef1a87524357b5115db3c780653a4d378af6ba04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fe7e74083569ac159e34aecb62fd9a2bc89cb67c25d104efa3ecd93b71742b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e50d4319120f4c6445252762298822db75d04cad45eff91b9ee9e82335e0f6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2e329eab7a80200257ef2856155c442452b453c8c9cfa15f790d4688ca74573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://f2e329eab7a80200257ef2856155c442452b453c8c9cfa15f790d4688ca74573\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:02Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:31 crc kubenswrapper[4731]: I1129 07:07:31.712649 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2302dbb7-38db-4752-a5d0-2d055da3aec3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b12c2e84a5d5a5e4b2dfdc3a2052706d4b969ccd45719e05be8f1dbec5555cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shf4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2ffc93896b04d748f6dfda932202450e20bc7b2
98cc0d79c0c6499ead481d8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shf4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rscr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:31 crc kubenswrapper[4731]: I1129 07:07:31.719302 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:31 crc kubenswrapper[4731]: I1129 07:07:31.719342 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:31 crc kubenswrapper[4731]: I1129 07:07:31.719352 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:31 crc 
kubenswrapper[4731]: I1129 07:07:31.719368 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:31 crc kubenswrapper[4731]: I1129 07:07:31.719381 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:31Z","lastTransitionTime":"2025-11-29T07:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:07:31 crc kubenswrapper[4731]: I1129 07:07:31.740070 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d4585c4-ac4a-4268-b25e-47509c17cfe2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c34e10091af35202da4f97804418c6232aaccd75303ec402ed3f52de19eba30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77f2885a107dcb4bca4dd71970f28cd5c82abd26240a9b47a25bab79d7a6cfca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a3bcebacca0fb7288976faa2ee468f1496339cc6deba0ce36b29d0537d493f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e2099f7c7618e48c7ca7d3a76e9e63e4af75068a72efb7b27b3565c270787f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ec6b9ad02e848a0669ac486da3f2ccc8ca363136fcd40618c45f16e64388bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd71ed13728cbf7ab7ade3184cf8b579a598046c6cccdd7796f757aa7fc1c0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90ceaa53a40826a30748a2057fff56bcb3597cf73ed5e0a18bc45d98e0e6c1b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://531921ff34861104985a6d3afd284f2df4d69c25a85f1a9fe0e20f15d5f61d31\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:07:07Z\\\",\\\"message\\\":\\\"ps:{GoMap:map[10.217.4.169:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {39432221-5995-412b-967b-35e1a9405ec7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1129 07:07:06.723536 6450 admin_network_policy_controller.go:133] 
Setting up event handlers for Admin Network Policy\\\\nI1129 07:07:06.723750 6450 ovnkube.go:599] Stopped ovnkube\\\\nI1129 07:07:06.723872 6450 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1129 07:07:06.719311 6450 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7d5hj in node crc\\\\nF1129 07:07:06.724013 6450 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify cer\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:07:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90ceaa53a40826a30748a2057fff56bcb3597cf73ed5e0a18bc45d98e0e6c1b6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:07:30Z\\\",\\\"message\\\":\\\":07:30.851855 6714 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nI1129 07:07:30.851899 6714 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1129 07:07:30.851906 6714 factory.go:656] Stopping watch factory\\\\nI1129 07:07:30.851908 6714 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1129 07:07:30.851923 6714 ovnkube.go:599] Stopped ovnkube\\\\nI1129 07:07:30.851912 6714 
obj_retry.go:409] Going to retry *v1.Pod resource setup for 1 objects: [openshift-multus/network-metrics-daemon-2pp9l]\\\\nI1129 07:07:30.851939 6714 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1129 07:07:30.851984 6714 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-2pp9l before timer (time: 2025-11-29 07:07:31.673997297 +0000 UTC m=+1.581975825): skip\\\\nI1129 07:07:30.852015 6714 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1129 07:07:30.852024 6714 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 114.243µs)\\\\nI1129 07:07:30.852003 6714 handler.go:208] Removed *v1.Node event handler 2\\\\nF1129 07:07:30.852102 6714 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountP
ath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a6b60ccbba0152ec034f29ebd79fc5f35ee869bcbec1983425f40df246c463c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b769a8a935c8eb447fe825cfb1b4c91be5abbe5992798442096dd3c00ce7c39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b769a8a935c8eb447fe825cfb1b4c91be5abbe5992798442096dd3c00ce7c39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4t5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:31 crc kubenswrapper[4731]: I1129 07:07:31.755965 4731 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-image-registry/node-ca-8tvx8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"719caf85-c94c-4dc2-b28f-f5c4ec29e79e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d765b2fbbea1c5491ee3ac49d3d217921d28d094b6204c0c9ca942f5975f7b8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cfkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8tvx8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:31 crc kubenswrapper[4731]: I1129 07:07:31.769759 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7d5hj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de2552c-90ca-42ab-94c0-365f2c2380d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca701ec73409c337cc55b1606c0f5def9e370c9c47b6d8f34f05e799ebc3ff36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l44ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cf95f33df0c02101f10f47b6794395211997d2a9741a50b62be363fb5b96dd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l44ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:0
6:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7d5hj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:31 crc kubenswrapper[4731]: I1129 07:07:31.806300 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:07:31 crc kubenswrapper[4731]: I1129 07:07:31.806388 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:07:31 crc kubenswrapper[4731]: I1129 07:07:31.806314 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:07:31 crc kubenswrapper[4731]: E1129 07:07:31.806486 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:07:31 crc kubenswrapper[4731]: E1129 07:07:31.806549 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:07:31 crc kubenswrapper[4731]: E1129 07:07:31.806699 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:07:31 crc kubenswrapper[4731]: I1129 07:07:31.822292 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:31 crc kubenswrapper[4731]: I1129 07:07:31.822345 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:31 crc kubenswrapper[4731]: I1129 07:07:31.822356 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:31 crc kubenswrapper[4731]: I1129 07:07:31.822374 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:31 crc kubenswrapper[4731]: I1129 07:07:31.822753 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:31Z","lastTransitionTime":"2025-11-29T07:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:31 crc kubenswrapper[4731]: I1129 07:07:31.825220 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f17ec67c-91b4-419f-b031-38a828a552a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a859abc8925062e0b6f06edef1a87524357b5115db3c780653a4d378af6ba04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fe7e74083569ac159e34aecb62fd9
a2bc89cb67c25d104efa3ecd93b71742b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e50d4319120f4c6445252762298822db75d04cad45eff91b9ee9e82335e0f6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2e329eab7a80200257ef2856155c442452b453c8c9cfa15f790d4688ca74573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2e329eab7a80200257ef2856155c442452b453c8c9cfa15f790d4688ca74573\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:02Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:31 crc kubenswrapper[4731]: I1129 07:07:31.841757 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2302dbb7-38db-4752-a5d0-2d055da3aec3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b12c2e84a5d5a5e4b2dfdc3a2052706d4b969ccd45719e05be8f1dbec5555cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shf4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2ffc93896b04d748f6dfda932202450e20bc7b2
98cc0d79c0c6499ead481d8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shf4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rscr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:31 crc kubenswrapper[4731]: I1129 07:07:31.867889 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d4585c4-ac4a-4268-b25e-47509c17cfe2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c34e10091af35202da4f97804418c6232aaccd75303ec402ed3f52de19eba30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77f2885a107dcb4bca4dd71970f28cd5c82abd26240a9b47a25bab79d7a6cfca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a3bcebacca0fb7288976faa2ee468f1496339cc6deba0ce36b29d0537d493f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e2099f7c7618e48c7ca7d3a76e9e63e4af75068a72efb7b27b3565c270787f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ec6b9ad02e848a0669ac486da3f2ccc8ca363136fcd40618c45f16e64388bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd71ed13728cbf7ab7ade3184cf8b579a598046c6cccdd7796f757aa7fc1c0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90ceaa53a40826a30748a2057fff56bcb3597cf73ed5e0a18bc45d98e0e6c1b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://531921ff34861104985a6d3afd284f2df4d69c25a85f1a9fe0e20f15d5f61d31\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:07:07Z\\\",\\\"message\\\":\\\"ps:{GoMap:map[10.217.4.169:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {39432221-5995-412b-967b-35e1a9405ec7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1129 07:07:06.723536 6450 admin_network_policy_controller.go:133] 
Setting up event handlers for Admin Network Policy\\\\nI1129 07:07:06.723750 6450 ovnkube.go:599] Stopped ovnkube\\\\nI1129 07:07:06.723872 6450 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1129 07:07:06.719311 6450 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7d5hj in node crc\\\\nF1129 07:07:06.724013 6450 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify cer\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:07:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90ceaa53a40826a30748a2057fff56bcb3597cf73ed5e0a18bc45d98e0e6c1b6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:07:30Z\\\",\\\"message\\\":\\\":07:30.851855 6714 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nI1129 07:07:30.851899 6714 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1129 07:07:30.851906 6714 factory.go:656] Stopping watch factory\\\\nI1129 07:07:30.851908 6714 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1129 07:07:30.851923 6714 ovnkube.go:599] Stopped ovnkube\\\\nI1129 07:07:30.851912 6714 
obj_retry.go:409] Going to retry *v1.Pod resource setup for 1 objects: [openshift-multus/network-metrics-daemon-2pp9l]\\\\nI1129 07:07:30.851939 6714 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1129 07:07:30.851984 6714 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-2pp9l before timer (time: 2025-11-29 07:07:31.673997297 +0000 UTC m=+1.581975825): skip\\\\nI1129 07:07:30.852015 6714 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1129 07:07:30.852024 6714 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 114.243µs)\\\\nI1129 07:07:30.852003 6714 handler.go:208] Removed *v1.Node event handler 2\\\\nF1129 07:07:30.852102 6714 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountP
ath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a6b60ccbba0152ec034f29ebd79fc5f35ee869bcbec1983425f40df246c463c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b769a8a935c8eb447fe825cfb1b4c91be5abbe5992798442096dd3c00ce7c39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b769a8a935c8eb447fe825cfb1b4c91be5abbe5992798442096dd3c00ce7c39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4t5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:31 crc kubenswrapper[4731]: I1129 07:07:31.883845 4731 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-image-registry/node-ca-8tvx8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"719caf85-c94c-4dc2-b28f-f5c4ec29e79e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d765b2fbbea1c5491ee3ac49d3d217921d28d094b6204c0c9ca942f5975f7b8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cfkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8tvx8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:31 crc kubenswrapper[4731]: I1129 07:07:31.900909 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7d5hj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de2552c-90ca-42ab-94c0-365f2c2380d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca701ec73409c337cc55b1606c0f5def9e370c9c47b6d8f34f05e799ebc3ff36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l44ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cf95f33df0c02101f10f47b6794395211997d2a9741a50b62be363fb5b96dd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l44ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:0
6:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7d5hj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:31 crc kubenswrapper[4731]: I1129 07:07:31.919692 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5198aca-b43e-4b59-8ec1-15b9d1cd4f4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0b6a87bb787bc87ec1a2b21918e754bda6b0f197bfdae20b010e8464cb9e70d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\"
:true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1938ff140a7af83ef53896ed4482dd3cf0a5429675ca28291de0915b8da27e81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b19f3d5d17a1f3552b659627294f3a47d667e2da2d3ee71e44fc876614f2f5a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"con
tainerID\\\":\\\"cri-o://ea4dcd3b0a9fda14855ec9e54caa59fbbd8953c958ad8d0abc6ad50d8a9143e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f488e951e0b022155c26f1b4b73363e8e5b82ab6f14e03f9812fe279ff21d36\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1129 07:06:25.309042 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1031981952/tls.crt::/tmp/serving-cert-1031981952/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764399966\\\\\\\\\\\\\\\" (2025-11-29 07:06:06 +0000 UTC to 2025-12-29 07:06:07 +0000 UTC (now=2025-11-29 07:06:25.308983691 +0000 UTC))\\\\\\\"\\\\nI1129 07:06:25.309145 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1129 07:06:25.309167 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1129 07:06:25.309190 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764399977\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764399977\\\\\\\\\\\\\\\" (2025-11-29 06:06:17 +0000 UTC to 2026-11-29 06:06:17 +0000 UTC (now=2025-11-29 07:06:25.309162626 
+0000 UTC))\\\\\\\"\\\\nI1129 07:06:25.309217 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1129 07:06:25.309274 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1129 07:06:25.309309 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1031981952/tls.crt::/tmp/serving-cert-1031981952/tls.key\\\\\\\"\\\\nI1129 07:06:25.309020 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1129 07:06:25.309457 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1129 07:06:25.309507 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1129 07:06:25.311335 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1277bda1e922e493f1c36b64a4584458ab41cee4b1930d2d6409a585be9e3b38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,
\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b7d71882ecd45557e02dfe61f572da923732714cd4480848340fbe72dabcd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34b7d71882ecd45557e02dfe61f572da923732714cd4480848340fbe72dabcd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:31 crc kubenswrapper[4731]: I1129 07:07:31.925756 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:31 crc kubenswrapper[4731]: I1129 07:07:31.925806 4731 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:31 crc kubenswrapper[4731]: I1129 07:07:31.925820 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:31 crc kubenswrapper[4731]: I1129 07:07:31.925841 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:31 crc kubenswrapper[4731]: I1129 07:07:31.925854 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:31Z","lastTransitionTime":"2025-11-29T07:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:07:31 crc kubenswrapper[4731]: I1129 07:07:31.937142 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e77534f53f3d0ba9457f1e42e081f8fa9ddeff238db875686b42dd35fca480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6830f693b439b32e14b4e825fc51d03dddc4834afc7f2fb9c403ebdc706acb7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:31 crc kubenswrapper[4731]: I1129 07:07:31.956107 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:31 crc kubenswrapper[4731]: I1129 07:07:31.972085 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:31 crc kubenswrapper[4731]: I1129 07:07:31.986533 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n6mtz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dc0aca-1039-4a30-a83e-48bd320d0eae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0b16629d98d0b9d5ede0c57e9befc86af08f3b501ef46c1e17b38a2ba4e366a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2bbjx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n6mtz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:32 crc kubenswrapper[4731]: I1129 07:07:32.004651 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5rsbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bae9d331b627f3cb340763c8fae4df7b74979611e8643e081beaa89f127f9c86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4026306c62b322aa02e08b8ebd6f9d5d1eaa75e5026c9b1fc4c8d3c1ff77e2a5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:07:24Z\\\",\\\"message\\\":\\\"2025-11-29T07:06:39+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_7fca3a92-1282-434f-ac66-0accb2c57a4a\\\\n2025-11-29T07:06:39+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_7fca3a92-1282-434f-ac66-0accb2c57a4a to /host/opt/cni/bin/\\\\n2025-11-29T07:06:39Z [verbose] multus-daemon started\\\\n2025-11-29T07:06:39Z [verbose] Readiness Indicator file check\\\\n2025-11-29T07:07:24Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\
\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9z2qj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5rsbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:32 crc kubenswrapper[4731]: I1129 07:07:32.023237 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sc4p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"46a65d85-a3f6-4c1f-8a87-799ccfb861c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c8d9cccf59a8d03f8346fd5c4403c377376ed7e429a1cd7b9ea0507a491342e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e803b8c855802ae1b136acaaf383df0a2f5efccc46c8605813d43a3c7c39a700\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e803b8c855802ae1b136acaaf383df0a2f5efccc46c8605813d43a3c7c39a700\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53c810e72af2e060f6378cc0809584254c09ab79d506a8d1b4ff14cc9f005d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53c810e72af2e060f6378cc0809584254c09ab79d506a8d1b4ff14cc9f005d08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:36Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db06df6b97d6ad28e3d1df0b237670573e2dc84ad1e4975431d0d6fa74497617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db06df6b97d6ad28e3d1df0b237670573e2dc84ad1e4975431d0d6fa74497617\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e4e
1564bb5cd4109016d9716311a4f93f63f7f4f694d1f021390c91495faab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22e4e1564bb5cd4109016d9716311a4f93f63f7f4f694d1f021390c91495faab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76097df468fcbf7cbc2a4d0a4a847310bcc371812fd7601716bb946d2ba43b1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76097df468fcbf7cbc2a4d0a4a847310bcc371812fd7601716bb946d2ba43b1f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:43Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2de6a57f63f8b266688eb7d7efef0c3f0fe74a9cf9911709b8c2f6d6cd2f587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2de6a57f63f8b266688eb7d7efef0c3f0fe74a9cf9911709b8c2f6d6cd2f587\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7sc4p\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:32 crc kubenswrapper[4731]: I1129 07:07:32.028456 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:32 crc kubenswrapper[4731]: I1129 07:07:32.028700 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:32 crc kubenswrapper[4731]: I1129 07:07:32.028828 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:32 crc kubenswrapper[4731]: I1129 07:07:32.028932 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:32 crc kubenswrapper[4731]: I1129 07:07:32.029007 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:32Z","lastTransitionTime":"2025-11-29T07:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:32 crc kubenswrapper[4731]: I1129 07:07:32.039261 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2pp9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"944440c1-51b2-4c49-b5fd-4c024fc33ace\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkwn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkwn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2pp9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:32 crc 
kubenswrapper[4731]: I1129 07:07:32.055398 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"174eea76-5d67-4e10-9a17-b4efc32676b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a567cf0e6a828fe031138f9f7874d01fe6e034cb814a3c2116c4918ddab71c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6101f1cec3dc38ec587fb109d42b09c353cf5ed6d89c9a2b799eb456b821cf58\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5af4e6864674fc088ce1e153b29b66072dc3aae9b7c8c2c5dd3862a505dbac6a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86bdecb59f656fef2662f8fc78d427a4a9816fc211eedca957f20b3103f63a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:32 crc kubenswrapper[4731]: I1129 07:07:32.073728 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea9d7960c59f71d1f4141843ab22ff9910f442c6a92fd496e9923a11fd19578b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:32 crc kubenswrapper[4731]: I1129 07:07:32.088417 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da8cd4c4826d5a431134fff8cd78b168bfededf913cc47058413e38aadd335ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:32 crc kubenswrapper[4731]: I1129 07:07:32.103239 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a05e3005-6e8b-4f70-830b-e7313d4bf967\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e65aa9e951747e7bd3ae1dd6212a34576cd4aa03de1753d6d3f193d4c95ecead\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a56ab2b9d47cd0dcd632f98defa5f4bcb711032701ea4ff28701daa43a2dca9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a56ab2b9d47cd0dcd632f98defa5f4bcb711032701ea4ff28701daa43a2dca9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:32 crc kubenswrapper[4731]: I1129 07:07:32.121842 4731 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:32 crc kubenswrapper[4731]: I1129 07:07:32.136109 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:32 crc kubenswrapper[4731]: I1129 07:07:32.136153 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:32 crc kubenswrapper[4731]: I1129 07:07:32.136169 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:32 crc kubenswrapper[4731]: I1129 07:07:32.136186 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:32 crc kubenswrapper[4731]: I1129 07:07:32.136201 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:32Z","lastTransitionTime":"2025-11-29T07:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:07:32 crc kubenswrapper[4731]: I1129 07:07:32.239229 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:32 crc kubenswrapper[4731]: I1129 07:07:32.239298 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:32 crc kubenswrapper[4731]: I1129 07:07:32.239316 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:32 crc kubenswrapper[4731]: I1129 07:07:32.239341 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:32 crc kubenswrapper[4731]: I1129 07:07:32.239360 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:32Z","lastTransitionTime":"2025-11-29T07:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:32 crc kubenswrapper[4731]: I1129 07:07:32.342903 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:32 crc kubenswrapper[4731]: I1129 07:07:32.342962 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:32 crc kubenswrapper[4731]: I1129 07:07:32.342976 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:32 crc kubenswrapper[4731]: I1129 07:07:32.342998 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:32 crc kubenswrapper[4731]: I1129 07:07:32.343014 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:32Z","lastTransitionTime":"2025-11-29T07:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:32 crc kubenswrapper[4731]: I1129 07:07:32.446180 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:32 crc kubenswrapper[4731]: I1129 07:07:32.446239 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:32 crc kubenswrapper[4731]: I1129 07:07:32.446252 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:32 crc kubenswrapper[4731]: I1129 07:07:32.446278 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:32 crc kubenswrapper[4731]: I1129 07:07:32.446289 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:32Z","lastTransitionTime":"2025-11-29T07:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:32 crc kubenswrapper[4731]: I1129 07:07:32.491090 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x4t5j_7d4585c4-ac4a-4268-b25e-47509c17cfe2/ovnkube-controller/3.log" Nov 29 07:07:32 crc kubenswrapper[4731]: I1129 07:07:32.497355 4731 scope.go:117] "RemoveContainer" containerID="90ceaa53a40826a30748a2057fff56bcb3597cf73ed5e0a18bc45d98e0e6c1b6" Nov 29 07:07:32 crc kubenswrapper[4731]: E1129 07:07:32.497609 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-x4t5j_openshift-ovn-kubernetes(7d4585c4-ac4a-4268-b25e-47509c17cfe2)\"" pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" podUID="7d4585c4-ac4a-4268-b25e-47509c17cfe2" Nov 29 07:07:32 crc kubenswrapper[4731]: I1129 07:07:32.515683 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e77534f53f3d0ba9457f1e42e081f8fa9ddeff238db875686b42dd35fca480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6830f693b439b32e14b4e825fc51d03dddc4834afc7f2fb9c403ebdc706acb7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:32 crc kubenswrapper[4731]: I1129 07:07:32.533898 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:32 crc kubenswrapper[4731]: I1129 07:07:32.550250 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:32 crc kubenswrapper[4731]: I1129 07:07:32.550293 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:32 crc kubenswrapper[4731]: I1129 07:07:32.550347 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:32 crc kubenswrapper[4731]: I1129 07:07:32.550370 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:32 crc kubenswrapper[4731]: I1129 07:07:32.550389 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:32 crc kubenswrapper[4731]: I1129 07:07:32.550403 4731 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:32Z","lastTransitionTime":"2025-11-29T07:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:07:32 crc kubenswrapper[4731]: I1129 07:07:32.564016 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n6mtz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dc0aca-1039-4a30-a83e-48bd320d0eae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0b16629d98d0b9d5ede0c57e9befc86af08f3b501ef46c1e17b38a2ba4e366a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2bbjx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n6mtz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:32 crc kubenswrapper[4731]: I1129 07:07:32.581830 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5rsbt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bae9d331b627f3cb340763c8fae4df7b74979611e8643e081beaa89f127f9c86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4026306c62b322aa02e08b8ebd6f9d5d1eaa75e5026c9b1fc4c8d3c1ff77e2a5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:07:24Z\\\",\\\"message\\\":\\\"2025-11-29T07:06:39+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_7fca3a92-1282-434f-ac66-0accb2c57a4a\\\\n2025-11-29T07:06:39+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_7fca3a92-1282-434f-ac66-0accb2c57a4a to /host/opt/cni/bin/\\\\n2025-11-29T07:06:39Z [verbose] multus-daemon started\\\\n2025-11-29T07:06:39Z [verbose] 
Readiness Indicator file check\\\\n2025-11-29T07:07:24Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9z2qj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5rsbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:32 crc kubenswrapper[4731]: I1129 07:07:32.598649 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sc4p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46a65d85-a3f6-4c1f-8a87-799ccfb861c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c8
d9cccf59a8d03f8346fd5c4403c377376ed7e429a1cd7b9ea0507a491342e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e803b8c855802ae1b136acaaf383df0a2f5efccc46c8605813d43a3c7c39a700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e803b8c855802ae1b136acaaf383df0a2f5efccc46c8605813d43a3c7c39a700\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53c810e72af2e060f6378cc0809584254c09ab79d506a8d1b4ff14cc9f005d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53c810e72af2e060f6378cc0809584254c09ab79d506a8d1b4ff14cc9f005d08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db06df6b97d6ad28e3d1df0b237670573e2dc84ad1e4975431d0d6fa74497617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203b
b2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db06df6b97d6ad28e3d1df0b237670573e2dc84ad1e4975431d0d6fa74497617\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e4e1564bb5cd4109016d9716311a4f93f63f7f4f694d1f021390c91495faab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22e4e1564bb5cd4109016d9716311a4f93f63f7f4f694d1f021390c91495faab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-rel
ease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76097df468fcbf7cbc2a4d0a4a847310bcc371812fd7601716bb946d2ba43b1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76097df468fcbf7cbc2a4d0a4a847310bcc371812fd7601716bb946d2ba43b1f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2de6a57f63f8b266688eb7d7efef0c3f0fe74a9cf9911709b8c2f6d6cd2f587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2de6a57f63f8b266688eb7d7efef0c3f0fe74a9cf9911709b8c2f6d6cd2f587\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7sc4p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:32 crc kubenswrapper[4731]: I1129 07:07:32.612393 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2pp9l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"944440c1-51b2-4c49-b5fd-4c024fc33ace\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkwn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkwn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2pp9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:32 crc 
kubenswrapper[4731]: I1129 07:07:32.627666 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"174eea76-5d67-4e10-9a17-b4efc32676b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a567cf0e6a828fe031138f9f7874d01fe6e034cb814a3c2116c4918ddab71c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6101f1cec3dc38ec587fb109d42b09c353cf5ed6d89c9a2b799eb456b821cf58\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5af4e6864674fc088ce1e153b29b66072dc3aae9b7c8c2c5dd3862a505dbac6a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86bdecb59f656fef2662f8fc78d427a4a9816fc211eedca957f20b3103f63a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:32 crc kubenswrapper[4731]: I1129 07:07:32.642718 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea9d7960c59f71d1f4141843ab22ff9910f442c6a92fd496e9923a11fd19578b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:32 crc kubenswrapper[4731]: I1129 07:07:32.654028 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:32 crc kubenswrapper[4731]: I1129 07:07:32.654081 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:32 crc kubenswrapper[4731]: I1129 07:07:32.654095 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:32 crc kubenswrapper[4731]: I1129 07:07:32.654115 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:32 crc kubenswrapper[4731]: I1129 07:07:32.654129 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:32Z","lastTransitionTime":"2025-11-29T07:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:32 crc kubenswrapper[4731]: I1129 07:07:32.658120 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da8cd4c4826d5a431134fff8cd78b168bfededf913cc47058413e38aadd335ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:32 crc kubenswrapper[4731]: I1129 07:07:32.670448 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a05e3005-6e8b-4f70-830b-e7313d4bf967\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e65aa9e951747e7bd3ae1dd6212a34576cd4aa03de1753d6d3f193d4c95ecead\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state
\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a56ab2b9d47cd0dcd632f98defa5f4bcb711032701ea4ff28701daa43a2dca9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a56ab2b9d47cd0dcd632f98defa5f4bcb711032701ea4ff28701daa43a2dca9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:32 crc kubenswrapper[4731]: I1129 07:07:32.684991 4731 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:32 crc kubenswrapper[4731]: I1129 07:07:32.700840 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f17ec67c-91b4-419f-b031-38a828a552a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a859abc8925062e0b6f06edef1a87524357b5115db3c780653a4d378af6ba04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fe7e74083569ac159e34aecb62fd9a2bc89cb67c25d104efa3ecd93b71742b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e50d4319120f4c6445252762298822db75d04cad45eff91b9ee9e82335e0f6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2e329eab7a80200257ef2856155c442452b453c8c9cfa15f790d4688ca74573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://f2e329eab7a80200257ef2856155c442452b453c8c9cfa15f790d4688ca74573\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:02Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:32 crc kubenswrapper[4731]: I1129 07:07:32.714674 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2302dbb7-38db-4752-a5d0-2d055da3aec3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b12c2e84a5d5a5e4b2dfdc3a2052706d4b969ccd45719e05be8f1dbec5555cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shf4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2ffc93896b04d748f6dfda932202450e20bc7b2
98cc0d79c0c6499ead481d8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shf4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rscr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:32 crc kubenswrapper[4731]: I1129 07:07:32.734139 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d4585c4-ac4a-4268-b25e-47509c17cfe2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c34e10091af35202da4f97804418c6232aaccd75303ec402ed3f52de19eba30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77f2885a107dcb4bca4dd71970f28cd5c82abd26240a9b47a25bab79d7a6cfca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a3bcebacca0fb7288976faa2ee468f1496339cc6deba0ce36b29d0537d493f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e2099f7c7618e48c7ca7d3a76e9e63e4af75068a72efb7b27b3565c270787f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ec6b9ad02e848a0669ac486da3f2ccc8ca363136fcd40618c45f16e64388bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd71ed13728cbf7ab7ade3184cf8b579a598046c6cccdd7796f757aa7fc1c0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90ceaa53a40826a30748a2057fff56bcb3597cf73ed5e0a18bc45d98e0e6c1b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90ceaa53a40826a30748a2057fff56bcb3597cf73ed5e0a18bc45d98e0e6c1b6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:07:30Z\\\",\\\"message\\\":\\\":07:30.851855 6714 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nI1129 07:07:30.851899 6714 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1129 07:07:30.851906 6714 factory.go:656] Stopping watch factory\\\\nI1129 07:07:30.851908 6714 handler.go:208] Removed 
*v1.Pod event handler 3\\\\nI1129 07:07:30.851923 6714 ovnkube.go:599] Stopped ovnkube\\\\nI1129 07:07:30.851912 6714 obj_retry.go:409] Going to retry *v1.Pod resource setup for 1 objects: [openshift-multus/network-metrics-daemon-2pp9l]\\\\nI1129 07:07:30.851939 6714 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1129 07:07:30.851984 6714 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-2pp9l before timer (time: 2025-11-29 07:07:31.673997297 +0000 UTC m=+1.581975825): skip\\\\nI1129 07:07:30.852015 6714 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1129 07:07:30.852024 6714 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 114.243µs)\\\\nI1129 07:07:30.852003 6714 handler.go:208] Removed *v1.Node event handler 2\\\\nF1129 07:07:30.852102 6714 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:07:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-x4t5j_openshift-ovn-kubernetes(7d4585c4-ac4a-4268-b25e-47509c17cfe2)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a6b60ccbba0152ec034f29ebd79fc5f35ee869bcbec1983425f40df246c463c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b769a8a935c8eb447fe825cfb1b4c91be5abbe5992798442096dd3c00ce7c39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b769a8a935c8eb447
fe825cfb1b4c91be5abbe5992798442096dd3c00ce7c39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4t5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:32 crc kubenswrapper[4731]: I1129 07:07:32.750711 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8tvx8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"719caf85-c94c-4dc2-b28f-f5c4ec29e79e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d765b2fbbea1c5491ee3ac49d3d217921d28d094b6204c0c9ca942f5975f7b8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cfkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8tvx8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:32 crc kubenswrapper[4731]: I1129 07:07:32.756524 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:32 crc kubenswrapper[4731]: I1129 07:07:32.756592 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:32 crc kubenswrapper[4731]: I1129 07:07:32.756605 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:32 crc kubenswrapper[4731]: I1129 07:07:32.756624 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:32 crc kubenswrapper[4731]: I1129 07:07:32.756639 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:32Z","lastTransitionTime":"2025-11-29T07:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:32 crc kubenswrapper[4731]: I1129 07:07:32.764859 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7d5hj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de2552c-90ca-42ab-94c0-365f2c2380d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca701ec73409c337cc55b1606c0f5def9e370c9c47b6d8f34f05e799ebc3ff36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l44ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cf95f33df0c02101f10f47b6794395211997d2a9741a50b62be363fb5b96dd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l44ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7d5hj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:32 crc kubenswrapper[4731]: I1129 07:07:32.782194 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5198aca-b43e-4b59-8ec1-15b9d1cd4f4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0b6a87bb787bc87ec1a2b21918e754bda6b0f197bfdae20b010e8464cb9e70d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1938ff140a7af83ef53896ed4482dd3cf0a5429675ca28291de0915b8da27e81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f781
4a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b19f3d5d17a1f3552b659627294f3a47d667e2da2d3ee71e44fc876614f2f5a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea4dcd3b0a9fda14855ec9e54caa59fbbd8953c958ad8d0abc6ad50d8a9143e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f488e951e0b022155c26f1b4b73363e8e5b82ab6f14e03f9812fe279ff21d36\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T
07:06:25Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1129 07:06:25.309042 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1031981952/tls.crt::/tmp/serving-cert-1031981952/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764399966\\\\\\\\\\\\\\\" (2025-11-29 07:06:06 +0000 UTC to 2025-12-29 07:06:07 +0000 UTC (now=2025-11-29 07:06:25.308983691 +0000 UTC))\\\\\\\"\\\\nI1129 07:06:25.309145 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1129 07:06:25.309167 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1129 07:06:25.309190 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764399977\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764399977\\\\\\\\\\\\\\\" (2025-11-29 06:06:17 +0000 UTC to 2026-11-29 06:06:17 +0000 UTC (now=2025-11-29 07:06:25.309162626 +0000 UTC))\\\\\\\"\\\\nI1129 07:06:25.309217 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1129 07:06:25.309274 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1129 07:06:25.309309 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1031981952/tls.crt::/tmp/serving-cert-1031981952/tls.key\\\\\\\"\\\\nI1129 07:06:25.309020 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1129 07:06:25.309457 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1129 07:06:25.309507 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1129 07:06:25.311335 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1277bda1e922e493f1c36b64a4584458ab41cee4b1930d2d6409a585be9e3b38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b7d71882ecd45557e02dfe61f572da923732714cd4480848340fbe72dabcd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34b7d71882ecd45557e02dfe61f572da923732714cd4480848340fbe72dabcd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:32 crc kubenswrapper[4731]: I1129 07:07:32.806152 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2pp9l" Nov 29 07:07:32 crc kubenswrapper[4731]: E1129 07:07:32.806385 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2pp9l" podUID="944440c1-51b2-4c49-b5fd-4c024fc33ace" Nov 29 07:07:32 crc kubenswrapper[4731]: I1129 07:07:32.859892 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:32 crc kubenswrapper[4731]: I1129 07:07:32.859944 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:32 crc kubenswrapper[4731]: I1129 07:07:32.859956 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:32 crc kubenswrapper[4731]: I1129 07:07:32.859975 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:32 crc kubenswrapper[4731]: I1129 07:07:32.859988 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:32Z","lastTransitionTime":"2025-11-29T07:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:32 crc kubenswrapper[4731]: I1129 07:07:32.963229 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:32 crc kubenswrapper[4731]: I1129 07:07:32.963283 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:32 crc kubenswrapper[4731]: I1129 07:07:32.963297 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:32 crc kubenswrapper[4731]: I1129 07:07:32.963314 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:32 crc kubenswrapper[4731]: I1129 07:07:32.963326 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:32Z","lastTransitionTime":"2025-11-29T07:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:33 crc kubenswrapper[4731]: I1129 07:07:33.066165 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:33 crc kubenswrapper[4731]: I1129 07:07:33.066503 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:33 crc kubenswrapper[4731]: I1129 07:07:33.066701 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:33 crc kubenswrapper[4731]: I1129 07:07:33.066825 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:33 crc kubenswrapper[4731]: I1129 07:07:33.066892 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:33Z","lastTransitionTime":"2025-11-29T07:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:33 crc kubenswrapper[4731]: I1129 07:07:33.169905 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:33 crc kubenswrapper[4731]: I1129 07:07:33.169964 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:33 crc kubenswrapper[4731]: I1129 07:07:33.169977 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:33 crc kubenswrapper[4731]: I1129 07:07:33.170001 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:33 crc kubenswrapper[4731]: I1129 07:07:33.170016 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:33Z","lastTransitionTime":"2025-11-29T07:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:33 crc kubenswrapper[4731]: I1129 07:07:33.273366 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:33 crc kubenswrapper[4731]: I1129 07:07:33.273443 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:33 crc kubenswrapper[4731]: I1129 07:07:33.273459 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:33 crc kubenswrapper[4731]: I1129 07:07:33.273495 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:33 crc kubenswrapper[4731]: I1129 07:07:33.273513 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:33Z","lastTransitionTime":"2025-11-29T07:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:33 crc kubenswrapper[4731]: I1129 07:07:33.377507 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:33 crc kubenswrapper[4731]: I1129 07:07:33.377584 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:33 crc kubenswrapper[4731]: I1129 07:07:33.377597 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:33 crc kubenswrapper[4731]: I1129 07:07:33.377618 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:33 crc kubenswrapper[4731]: I1129 07:07:33.377636 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:33Z","lastTransitionTime":"2025-11-29T07:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:33 crc kubenswrapper[4731]: I1129 07:07:33.480634 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:33 crc kubenswrapper[4731]: I1129 07:07:33.480690 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:33 crc kubenswrapper[4731]: I1129 07:07:33.480706 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:33 crc kubenswrapper[4731]: I1129 07:07:33.480733 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:33 crc kubenswrapper[4731]: I1129 07:07:33.480750 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:33Z","lastTransitionTime":"2025-11-29T07:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:33 crc kubenswrapper[4731]: I1129 07:07:33.583668 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:33 crc kubenswrapper[4731]: I1129 07:07:33.583722 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:33 crc kubenswrapper[4731]: I1129 07:07:33.583732 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:33 crc kubenswrapper[4731]: I1129 07:07:33.583749 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:33 crc kubenswrapper[4731]: I1129 07:07:33.583759 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:33Z","lastTransitionTime":"2025-11-29T07:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:33 crc kubenswrapper[4731]: I1129 07:07:33.686266 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:33 crc kubenswrapper[4731]: I1129 07:07:33.686318 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:33 crc kubenswrapper[4731]: I1129 07:07:33.686331 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:33 crc kubenswrapper[4731]: I1129 07:07:33.686348 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:33 crc kubenswrapper[4731]: I1129 07:07:33.686358 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:33Z","lastTransitionTime":"2025-11-29T07:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:33 crc kubenswrapper[4731]: I1129 07:07:33.788995 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:33 crc kubenswrapper[4731]: I1129 07:07:33.789052 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:33 crc kubenswrapper[4731]: I1129 07:07:33.789061 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:33 crc kubenswrapper[4731]: I1129 07:07:33.789080 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:33 crc kubenswrapper[4731]: I1129 07:07:33.789091 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:33Z","lastTransitionTime":"2025-11-29T07:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:07:33 crc kubenswrapper[4731]: I1129 07:07:33.807129 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:07:33 crc kubenswrapper[4731]: I1129 07:07:33.807154 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:07:33 crc kubenswrapper[4731]: E1129 07:07:33.807305 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:07:33 crc kubenswrapper[4731]: I1129 07:07:33.807199 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:07:33 crc kubenswrapper[4731]: E1129 07:07:33.807387 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:07:33 crc kubenswrapper[4731]: E1129 07:07:33.807676 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:07:33 crc kubenswrapper[4731]: I1129 07:07:33.823917 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Nov 29 07:07:33 crc kubenswrapper[4731]: I1129 07:07:33.877169 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:33 crc kubenswrapper[4731]: I1129 07:07:33.877234 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:33 crc kubenswrapper[4731]: I1129 07:07:33.877252 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:33 crc kubenswrapper[4731]: I1129 07:07:33.877276 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:33 crc kubenswrapper[4731]: I1129 07:07:33.877289 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:33Z","lastTransitionTime":"2025-11-29T07:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:33 crc kubenswrapper[4731]: E1129 07:07:33.891323 4731 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:07:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:07:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:07:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:07:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5aaf0a18-6c01-4835-aaaa-2edfd1f90942\\\",\\\"systemUUID\\\":\\\"f3d115a6-d015-4b84-85ef-26fa0172b441\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:33Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:33 crc kubenswrapper[4731]: I1129 07:07:33.895478 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:33 crc kubenswrapper[4731]: I1129 07:07:33.895521 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:33 crc kubenswrapper[4731]: I1129 07:07:33.895535 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:33 crc kubenswrapper[4731]: I1129 07:07:33.895556 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:33 crc kubenswrapper[4731]: I1129 07:07:33.895583 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:33Z","lastTransitionTime":"2025-11-29T07:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:33 crc kubenswrapper[4731]: E1129 07:07:33.908480 4731 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:07:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:07:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:07:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:07:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5aaf0a18-6c01-4835-aaaa-2edfd1f90942\\\",\\\"systemUUID\\\":\\\"f3d115a6-d015-4b84-85ef-26fa0172b441\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:33Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:33 crc kubenswrapper[4731]: I1129 07:07:33.912582 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:33 crc kubenswrapper[4731]: I1129 07:07:33.912635 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:33 crc kubenswrapper[4731]: I1129 07:07:33.912645 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:33 crc kubenswrapper[4731]: I1129 07:07:33.912663 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:33 crc kubenswrapper[4731]: I1129 07:07:33.912674 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:33Z","lastTransitionTime":"2025-11-29T07:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:33 crc kubenswrapper[4731]: E1129 07:07:33.924449 4731 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:07:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:07:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:07:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:07:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5aaf0a18-6c01-4835-aaaa-2edfd1f90942\\\",\\\"systemUUID\\\":\\\"f3d115a6-d015-4b84-85ef-26fa0172b441\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:33Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:33 crc kubenswrapper[4731]: I1129 07:07:33.927786 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:33 crc kubenswrapper[4731]: I1129 07:07:33.927819 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:33 crc kubenswrapper[4731]: I1129 07:07:33.927830 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:33 crc kubenswrapper[4731]: I1129 07:07:33.927846 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:33 crc kubenswrapper[4731]: I1129 07:07:33.927857 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:33Z","lastTransitionTime":"2025-11-29T07:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:33 crc kubenswrapper[4731]: E1129 07:07:33.940300 4731 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:07:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:07:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:07:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:07:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5aaf0a18-6c01-4835-aaaa-2edfd1f90942\\\",\\\"systemUUID\\\":\\\"f3d115a6-d015-4b84-85ef-26fa0172b441\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:33Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:33 crc kubenswrapper[4731]: I1129 07:07:33.944170 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:33 crc kubenswrapper[4731]: I1129 07:07:33.944212 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:33 crc kubenswrapper[4731]: I1129 07:07:33.944220 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:33 crc kubenswrapper[4731]: I1129 07:07:33.944236 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:33 crc kubenswrapper[4731]: I1129 07:07:33.944247 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:33Z","lastTransitionTime":"2025-11-29T07:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:33 crc kubenswrapper[4731]: E1129 07:07:33.957996 4731 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:07:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:07:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:07:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:07:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5aaf0a18-6c01-4835-aaaa-2edfd1f90942\\\",\\\"systemUUID\\\":\\\"f3d115a6-d015-4b84-85ef-26fa0172b441\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:33Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:33 crc kubenswrapper[4731]: E1129 07:07:33.958132 4731 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 29 07:07:33 crc kubenswrapper[4731]: I1129 07:07:33.959834 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:33 crc kubenswrapper[4731]: I1129 07:07:33.959869 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:33 crc kubenswrapper[4731]: I1129 07:07:33.959883 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:33 crc kubenswrapper[4731]: I1129 07:07:33.959901 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:33 crc kubenswrapper[4731]: I1129 07:07:33.959912 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:33Z","lastTransitionTime":"2025-11-29T07:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:34 crc kubenswrapper[4731]: I1129 07:07:34.062956 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:34 crc kubenswrapper[4731]: I1129 07:07:34.063010 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:34 crc kubenswrapper[4731]: I1129 07:07:34.063022 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:34 crc kubenswrapper[4731]: I1129 07:07:34.063044 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:34 crc kubenswrapper[4731]: I1129 07:07:34.063057 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:34Z","lastTransitionTime":"2025-11-29T07:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:34 crc kubenswrapper[4731]: I1129 07:07:34.166237 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:34 crc kubenswrapper[4731]: I1129 07:07:34.166295 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:34 crc kubenswrapper[4731]: I1129 07:07:34.166305 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:34 crc kubenswrapper[4731]: I1129 07:07:34.166323 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:34 crc kubenswrapper[4731]: I1129 07:07:34.166334 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:34Z","lastTransitionTime":"2025-11-29T07:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:34 crc kubenswrapper[4731]: I1129 07:07:34.270181 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:34 crc kubenswrapper[4731]: I1129 07:07:34.270246 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:34 crc kubenswrapper[4731]: I1129 07:07:34.270260 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:34 crc kubenswrapper[4731]: I1129 07:07:34.270283 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:34 crc kubenswrapper[4731]: I1129 07:07:34.270298 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:34Z","lastTransitionTime":"2025-11-29T07:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:34 crc kubenswrapper[4731]: I1129 07:07:34.373415 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:34 crc kubenswrapper[4731]: I1129 07:07:34.373488 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:34 crc kubenswrapper[4731]: I1129 07:07:34.373504 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:34 crc kubenswrapper[4731]: I1129 07:07:34.373531 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:34 crc kubenswrapper[4731]: I1129 07:07:34.373546 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:34Z","lastTransitionTime":"2025-11-29T07:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:34 crc kubenswrapper[4731]: I1129 07:07:34.477003 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:34 crc kubenswrapper[4731]: I1129 07:07:34.477073 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:34 crc kubenswrapper[4731]: I1129 07:07:34.477085 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:34 crc kubenswrapper[4731]: I1129 07:07:34.477107 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:34 crc kubenswrapper[4731]: I1129 07:07:34.477122 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:34Z","lastTransitionTime":"2025-11-29T07:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:34 crc kubenswrapper[4731]: I1129 07:07:34.580182 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:34 crc kubenswrapper[4731]: I1129 07:07:34.580238 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:34 crc kubenswrapper[4731]: I1129 07:07:34.580258 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:34 crc kubenswrapper[4731]: I1129 07:07:34.580278 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:34 crc kubenswrapper[4731]: I1129 07:07:34.580289 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:34Z","lastTransitionTime":"2025-11-29T07:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:34 crc kubenswrapper[4731]: I1129 07:07:34.683588 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:34 crc kubenswrapper[4731]: I1129 07:07:34.683637 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:34 crc kubenswrapper[4731]: I1129 07:07:34.683649 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:34 crc kubenswrapper[4731]: I1129 07:07:34.683667 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:34 crc kubenswrapper[4731]: I1129 07:07:34.683679 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:34Z","lastTransitionTime":"2025-11-29T07:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:34 crc kubenswrapper[4731]: I1129 07:07:34.787037 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:34 crc kubenswrapper[4731]: I1129 07:07:34.787097 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:34 crc kubenswrapper[4731]: I1129 07:07:34.787110 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:34 crc kubenswrapper[4731]: I1129 07:07:34.787131 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:34 crc kubenswrapper[4731]: I1129 07:07:34.787149 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:34Z","lastTransitionTime":"2025-11-29T07:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:07:34 crc kubenswrapper[4731]: I1129 07:07:34.806716 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2pp9l" Nov 29 07:07:34 crc kubenswrapper[4731]: E1129 07:07:34.806894 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2pp9l" podUID="944440c1-51b2-4c49-b5fd-4c024fc33ace" Nov 29 07:07:34 crc kubenswrapper[4731]: I1129 07:07:34.890237 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:34 crc kubenswrapper[4731]: I1129 07:07:34.890288 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:34 crc kubenswrapper[4731]: I1129 07:07:34.890299 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:34 crc kubenswrapper[4731]: I1129 07:07:34.890318 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:34 crc kubenswrapper[4731]: I1129 07:07:34.890329 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:34Z","lastTransitionTime":"2025-11-29T07:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:34 crc kubenswrapper[4731]: I1129 07:07:34.993145 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:34 crc kubenswrapper[4731]: I1129 07:07:34.993195 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:34 crc kubenswrapper[4731]: I1129 07:07:34.993209 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:34 crc kubenswrapper[4731]: I1129 07:07:34.993229 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:34 crc kubenswrapper[4731]: I1129 07:07:34.993241 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:34Z","lastTransitionTime":"2025-11-29T07:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:35 crc kubenswrapper[4731]: I1129 07:07:35.095977 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:35 crc kubenswrapper[4731]: I1129 07:07:35.096025 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:35 crc kubenswrapper[4731]: I1129 07:07:35.096037 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:35 crc kubenswrapper[4731]: I1129 07:07:35.096057 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:35 crc kubenswrapper[4731]: I1129 07:07:35.096069 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:35Z","lastTransitionTime":"2025-11-29T07:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:35 crc kubenswrapper[4731]: I1129 07:07:35.199315 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:35 crc kubenswrapper[4731]: I1129 07:07:35.199385 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:35 crc kubenswrapper[4731]: I1129 07:07:35.199405 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:35 crc kubenswrapper[4731]: I1129 07:07:35.199429 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:35 crc kubenswrapper[4731]: I1129 07:07:35.199443 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:35Z","lastTransitionTime":"2025-11-29T07:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:35 crc kubenswrapper[4731]: I1129 07:07:35.302264 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:35 crc kubenswrapper[4731]: I1129 07:07:35.302329 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:35 crc kubenswrapper[4731]: I1129 07:07:35.302342 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:35 crc kubenswrapper[4731]: I1129 07:07:35.302365 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:35 crc kubenswrapper[4731]: I1129 07:07:35.302385 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:35Z","lastTransitionTime":"2025-11-29T07:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:35 crc kubenswrapper[4731]: I1129 07:07:35.406146 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:35 crc kubenswrapper[4731]: I1129 07:07:35.406205 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:35 crc kubenswrapper[4731]: I1129 07:07:35.406221 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:35 crc kubenswrapper[4731]: I1129 07:07:35.406246 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:35 crc kubenswrapper[4731]: I1129 07:07:35.406261 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:35Z","lastTransitionTime":"2025-11-29T07:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:35 crc kubenswrapper[4731]: I1129 07:07:35.508642 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:35 crc kubenswrapper[4731]: I1129 07:07:35.508698 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:35 crc kubenswrapper[4731]: I1129 07:07:35.508708 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:35 crc kubenswrapper[4731]: I1129 07:07:35.508725 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:35 crc kubenswrapper[4731]: I1129 07:07:35.508739 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:35Z","lastTransitionTime":"2025-11-29T07:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:35 crc kubenswrapper[4731]: I1129 07:07:35.611667 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:35 crc kubenswrapper[4731]: I1129 07:07:35.611708 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:35 crc kubenswrapper[4731]: I1129 07:07:35.611716 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:35 crc kubenswrapper[4731]: I1129 07:07:35.611731 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:35 crc kubenswrapper[4731]: I1129 07:07:35.611740 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:35Z","lastTransitionTime":"2025-11-29T07:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:35 crc kubenswrapper[4731]: I1129 07:07:35.716660 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:35 crc kubenswrapper[4731]: I1129 07:07:35.716718 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:35 crc kubenswrapper[4731]: I1129 07:07:35.716733 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:35 crc kubenswrapper[4731]: I1129 07:07:35.716753 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:35 crc kubenswrapper[4731]: I1129 07:07:35.716767 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:35Z","lastTransitionTime":"2025-11-29T07:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:07:35 crc kubenswrapper[4731]: I1129 07:07:35.806307 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:07:35 crc kubenswrapper[4731]: I1129 07:07:35.806355 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:07:35 crc kubenswrapper[4731]: E1129 07:07:35.806479 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:07:35 crc kubenswrapper[4731]: I1129 07:07:35.806525 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:07:35 crc kubenswrapper[4731]: E1129 07:07:35.806680 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:07:35 crc kubenswrapper[4731]: E1129 07:07:35.806740 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:07:35 crc kubenswrapper[4731]: I1129 07:07:35.819411 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:35 crc kubenswrapper[4731]: I1129 07:07:35.819456 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:35 crc kubenswrapper[4731]: I1129 07:07:35.819469 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:35 crc kubenswrapper[4731]: I1129 07:07:35.819488 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:35 crc kubenswrapper[4731]: I1129 07:07:35.819501 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:35Z","lastTransitionTime":"2025-11-29T07:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:35 crc kubenswrapper[4731]: I1129 07:07:35.923303 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:35 crc kubenswrapper[4731]: I1129 07:07:35.923396 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:35 crc kubenswrapper[4731]: I1129 07:07:35.923411 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:35 crc kubenswrapper[4731]: I1129 07:07:35.923437 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:35 crc kubenswrapper[4731]: I1129 07:07:35.923456 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:35Z","lastTransitionTime":"2025-11-29T07:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:36 crc kubenswrapper[4731]: I1129 07:07:36.026407 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:36 crc kubenswrapper[4731]: I1129 07:07:36.026470 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:36 crc kubenswrapper[4731]: I1129 07:07:36.026479 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:36 crc kubenswrapper[4731]: I1129 07:07:36.026498 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:36 crc kubenswrapper[4731]: I1129 07:07:36.026510 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:36Z","lastTransitionTime":"2025-11-29T07:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:36 crc kubenswrapper[4731]: I1129 07:07:36.129696 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:36 crc kubenswrapper[4731]: I1129 07:07:36.129787 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:36 crc kubenswrapper[4731]: I1129 07:07:36.129802 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:36 crc kubenswrapper[4731]: I1129 07:07:36.129828 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:36 crc kubenswrapper[4731]: I1129 07:07:36.129841 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:36Z","lastTransitionTime":"2025-11-29T07:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:36 crc kubenswrapper[4731]: I1129 07:07:36.232967 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:36 crc kubenswrapper[4731]: I1129 07:07:36.233021 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:36 crc kubenswrapper[4731]: I1129 07:07:36.233034 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:36 crc kubenswrapper[4731]: I1129 07:07:36.233053 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:36 crc kubenswrapper[4731]: I1129 07:07:36.233067 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:36Z","lastTransitionTime":"2025-11-29T07:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:36 crc kubenswrapper[4731]: I1129 07:07:36.336230 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:36 crc kubenswrapper[4731]: I1129 07:07:36.336275 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:36 crc kubenswrapper[4731]: I1129 07:07:36.336292 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:36 crc kubenswrapper[4731]: I1129 07:07:36.336311 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:36 crc kubenswrapper[4731]: I1129 07:07:36.336330 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:36Z","lastTransitionTime":"2025-11-29T07:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:36 crc kubenswrapper[4731]: I1129 07:07:36.439788 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:36 crc kubenswrapper[4731]: I1129 07:07:36.439841 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:36 crc kubenswrapper[4731]: I1129 07:07:36.439851 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:36 crc kubenswrapper[4731]: I1129 07:07:36.439868 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:36 crc kubenswrapper[4731]: I1129 07:07:36.439879 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:36Z","lastTransitionTime":"2025-11-29T07:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:36 crc kubenswrapper[4731]: I1129 07:07:36.542277 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:36 crc kubenswrapper[4731]: I1129 07:07:36.542350 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:36 crc kubenswrapper[4731]: I1129 07:07:36.542360 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:36 crc kubenswrapper[4731]: I1129 07:07:36.542414 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:36 crc kubenswrapper[4731]: I1129 07:07:36.542434 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:36Z","lastTransitionTime":"2025-11-29T07:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:36 crc kubenswrapper[4731]: I1129 07:07:36.645239 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:36 crc kubenswrapper[4731]: I1129 07:07:36.645298 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:36 crc kubenswrapper[4731]: I1129 07:07:36.645309 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:36 crc kubenswrapper[4731]: I1129 07:07:36.645332 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:36 crc kubenswrapper[4731]: I1129 07:07:36.645348 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:36Z","lastTransitionTime":"2025-11-29T07:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:36 crc kubenswrapper[4731]: I1129 07:07:36.747986 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:36 crc kubenswrapper[4731]: I1129 07:07:36.748037 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:36 crc kubenswrapper[4731]: I1129 07:07:36.748051 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:36 crc kubenswrapper[4731]: I1129 07:07:36.748069 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:36 crc kubenswrapper[4731]: I1129 07:07:36.748085 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:36Z","lastTransitionTime":"2025-11-29T07:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:07:36 crc kubenswrapper[4731]: I1129 07:07:36.806184 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2pp9l" Nov 29 07:07:36 crc kubenswrapper[4731]: E1129 07:07:36.806403 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2pp9l" podUID="944440c1-51b2-4c49-b5fd-4c024fc33ace" Nov 29 07:07:36 crc kubenswrapper[4731]: I1129 07:07:36.851239 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:36 crc kubenswrapper[4731]: I1129 07:07:36.851303 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:36 crc kubenswrapper[4731]: I1129 07:07:36.851318 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:36 crc kubenswrapper[4731]: I1129 07:07:36.851338 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:36 crc kubenswrapper[4731]: I1129 07:07:36.851353 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:36Z","lastTransitionTime":"2025-11-29T07:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:36 crc kubenswrapper[4731]: I1129 07:07:36.954476 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:36 crc kubenswrapper[4731]: I1129 07:07:36.954520 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:36 crc kubenswrapper[4731]: I1129 07:07:36.954531 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:36 crc kubenswrapper[4731]: I1129 07:07:36.954548 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:36 crc kubenswrapper[4731]: I1129 07:07:36.954577 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:36Z","lastTransitionTime":"2025-11-29T07:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:37 crc kubenswrapper[4731]: I1129 07:07:37.057766 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:37 crc kubenswrapper[4731]: I1129 07:07:37.058054 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:37 crc kubenswrapper[4731]: I1129 07:07:37.058136 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:37 crc kubenswrapper[4731]: I1129 07:07:37.058207 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:37 crc kubenswrapper[4731]: I1129 07:07:37.058269 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:37Z","lastTransitionTime":"2025-11-29T07:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:37 crc kubenswrapper[4731]: I1129 07:07:37.160934 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:37 crc kubenswrapper[4731]: I1129 07:07:37.160986 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:37 crc kubenswrapper[4731]: I1129 07:07:37.161000 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:37 crc kubenswrapper[4731]: I1129 07:07:37.161018 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:37 crc kubenswrapper[4731]: I1129 07:07:37.161030 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:37Z","lastTransitionTime":"2025-11-29T07:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:37 crc kubenswrapper[4731]: I1129 07:07:37.264499 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:37 crc kubenswrapper[4731]: I1129 07:07:37.264544 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:37 crc kubenswrapper[4731]: I1129 07:07:37.264558 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:37 crc kubenswrapper[4731]: I1129 07:07:37.264597 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:37 crc kubenswrapper[4731]: I1129 07:07:37.264615 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:37Z","lastTransitionTime":"2025-11-29T07:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:37 crc kubenswrapper[4731]: I1129 07:07:37.368388 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:37 crc kubenswrapper[4731]: I1129 07:07:37.368466 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:37 crc kubenswrapper[4731]: I1129 07:07:37.368480 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:37 crc kubenswrapper[4731]: I1129 07:07:37.368517 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:37 crc kubenswrapper[4731]: I1129 07:07:37.368534 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:37Z","lastTransitionTime":"2025-11-29T07:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:37 crc kubenswrapper[4731]: I1129 07:07:37.471941 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:37 crc kubenswrapper[4731]: I1129 07:07:37.472017 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:37 crc kubenswrapper[4731]: I1129 07:07:37.472036 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:37 crc kubenswrapper[4731]: I1129 07:07:37.472063 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:37 crc kubenswrapper[4731]: I1129 07:07:37.472102 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:37Z","lastTransitionTime":"2025-11-29T07:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:37 crc kubenswrapper[4731]: I1129 07:07:37.577197 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:37 crc kubenswrapper[4731]: I1129 07:07:37.577756 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:37 crc kubenswrapper[4731]: I1129 07:07:37.577928 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:37 crc kubenswrapper[4731]: I1129 07:07:37.578107 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:37 crc kubenswrapper[4731]: I1129 07:07:37.578292 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:37Z","lastTransitionTime":"2025-11-29T07:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:37 crc kubenswrapper[4731]: I1129 07:07:37.681765 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:37 crc kubenswrapper[4731]: I1129 07:07:37.682545 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:37 crc kubenswrapper[4731]: I1129 07:07:37.682806 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:37 crc kubenswrapper[4731]: I1129 07:07:37.682961 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:37 crc kubenswrapper[4731]: I1129 07:07:37.683135 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:37Z","lastTransitionTime":"2025-11-29T07:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:37 crc kubenswrapper[4731]: I1129 07:07:37.785973 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:37 crc kubenswrapper[4731]: I1129 07:07:37.786287 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:37 crc kubenswrapper[4731]: I1129 07:07:37.786367 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:37 crc kubenswrapper[4731]: I1129 07:07:37.786442 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:37 crc kubenswrapper[4731]: I1129 07:07:37.786501 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:37Z","lastTransitionTime":"2025-11-29T07:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:07:37 crc kubenswrapper[4731]: I1129 07:07:37.805986 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:07:37 crc kubenswrapper[4731]: I1129 07:07:37.806004 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:07:37 crc kubenswrapper[4731]: I1129 07:07:37.805986 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:07:37 crc kubenswrapper[4731]: E1129 07:07:37.806690 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:07:37 crc kubenswrapper[4731]: E1129 07:07:37.806799 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:07:37 crc kubenswrapper[4731]: E1129 07:07:37.806904 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:07:37 crc kubenswrapper[4731]: I1129 07:07:37.901873 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:37 crc kubenswrapper[4731]: I1129 07:07:37.901920 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:37 crc kubenswrapper[4731]: I1129 07:07:37.901932 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:37 crc kubenswrapper[4731]: I1129 07:07:37.901951 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:37 crc kubenswrapper[4731]: I1129 07:07:37.901963 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:37Z","lastTransitionTime":"2025-11-29T07:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:38 crc kubenswrapper[4731]: I1129 07:07:38.005615 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:38 crc kubenswrapper[4731]: I1129 07:07:38.005668 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:38 crc kubenswrapper[4731]: I1129 07:07:38.005683 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:38 crc kubenswrapper[4731]: I1129 07:07:38.005702 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:38 crc kubenswrapper[4731]: I1129 07:07:38.006017 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:38Z","lastTransitionTime":"2025-11-29T07:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:38 crc kubenswrapper[4731]: I1129 07:07:38.109646 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:38 crc kubenswrapper[4731]: I1129 07:07:38.109969 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:38 crc kubenswrapper[4731]: I1129 07:07:38.110055 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:38 crc kubenswrapper[4731]: I1129 07:07:38.110153 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:38 crc kubenswrapper[4731]: I1129 07:07:38.110230 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:38Z","lastTransitionTime":"2025-11-29T07:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:38 crc kubenswrapper[4731]: I1129 07:07:38.212751 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:38 crc kubenswrapper[4731]: I1129 07:07:38.212802 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:38 crc kubenswrapper[4731]: I1129 07:07:38.212815 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:38 crc kubenswrapper[4731]: I1129 07:07:38.212832 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:38 crc kubenswrapper[4731]: I1129 07:07:38.212844 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:38Z","lastTransitionTime":"2025-11-29T07:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:38 crc kubenswrapper[4731]: I1129 07:07:38.317059 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:38 crc kubenswrapper[4731]: I1129 07:07:38.317379 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:38 crc kubenswrapper[4731]: I1129 07:07:38.317540 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:38 crc kubenswrapper[4731]: I1129 07:07:38.317907 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:38 crc kubenswrapper[4731]: I1129 07:07:38.318102 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:38Z","lastTransitionTime":"2025-11-29T07:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:38 crc kubenswrapper[4731]: I1129 07:07:38.421236 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:38 crc kubenswrapper[4731]: I1129 07:07:38.421284 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:38 crc kubenswrapper[4731]: I1129 07:07:38.421295 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:38 crc kubenswrapper[4731]: I1129 07:07:38.421318 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:38 crc kubenswrapper[4731]: I1129 07:07:38.421334 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:38Z","lastTransitionTime":"2025-11-29T07:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:38 crc kubenswrapper[4731]: I1129 07:07:38.524165 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:38 crc kubenswrapper[4731]: I1129 07:07:38.524224 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:38 crc kubenswrapper[4731]: I1129 07:07:38.524235 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:38 crc kubenswrapper[4731]: I1129 07:07:38.524252 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:38 crc kubenswrapper[4731]: I1129 07:07:38.524264 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:38Z","lastTransitionTime":"2025-11-29T07:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:38 crc kubenswrapper[4731]: I1129 07:07:38.627590 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:38 crc kubenswrapper[4731]: I1129 07:07:38.627634 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:38 crc kubenswrapper[4731]: I1129 07:07:38.627646 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:38 crc kubenswrapper[4731]: I1129 07:07:38.627662 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:38 crc kubenswrapper[4731]: I1129 07:07:38.627674 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:38Z","lastTransitionTime":"2025-11-29T07:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:38 crc kubenswrapper[4731]: I1129 07:07:38.730757 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:38 crc kubenswrapper[4731]: I1129 07:07:38.730824 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:38 crc kubenswrapper[4731]: I1129 07:07:38.730841 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:38 crc kubenswrapper[4731]: I1129 07:07:38.730869 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:38 crc kubenswrapper[4731]: I1129 07:07:38.730886 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:38Z","lastTransitionTime":"2025-11-29T07:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:07:38 crc kubenswrapper[4731]: I1129 07:07:38.806663 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2pp9l" Nov 29 07:07:38 crc kubenswrapper[4731]: E1129 07:07:38.806803 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2pp9l" podUID="944440c1-51b2-4c49-b5fd-4c024fc33ace" Nov 29 07:07:38 crc kubenswrapper[4731]: I1129 07:07:38.834599 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:38 crc kubenswrapper[4731]: I1129 07:07:38.834673 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:38 crc kubenswrapper[4731]: I1129 07:07:38.834688 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:38 crc kubenswrapper[4731]: I1129 07:07:38.834710 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:38 crc kubenswrapper[4731]: I1129 07:07:38.834723 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:38Z","lastTransitionTime":"2025-11-29T07:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:38 crc kubenswrapper[4731]: I1129 07:07:38.938075 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:38 crc kubenswrapper[4731]: I1129 07:07:38.938171 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:38 crc kubenswrapper[4731]: I1129 07:07:38.938185 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:38 crc kubenswrapper[4731]: I1129 07:07:38.938203 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:38 crc kubenswrapper[4731]: I1129 07:07:38.938216 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:38Z","lastTransitionTime":"2025-11-29T07:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:39 crc kubenswrapper[4731]: I1129 07:07:39.041644 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:39 crc kubenswrapper[4731]: I1129 07:07:39.041765 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:39 crc kubenswrapper[4731]: I1129 07:07:39.042124 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:39 crc kubenswrapper[4731]: I1129 07:07:39.042213 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:39 crc kubenswrapper[4731]: I1129 07:07:39.042245 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:39Z","lastTransitionTime":"2025-11-29T07:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:39 crc kubenswrapper[4731]: I1129 07:07:39.144786 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:39 crc kubenswrapper[4731]: I1129 07:07:39.145077 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:39 crc kubenswrapper[4731]: I1129 07:07:39.145214 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:39 crc kubenswrapper[4731]: I1129 07:07:39.145317 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:39 crc kubenswrapper[4731]: I1129 07:07:39.145389 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:39Z","lastTransitionTime":"2025-11-29T07:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:39 crc kubenswrapper[4731]: I1129 07:07:39.249544 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:39 crc kubenswrapper[4731]: I1129 07:07:39.249630 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:39 crc kubenswrapper[4731]: I1129 07:07:39.249645 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:39 crc kubenswrapper[4731]: I1129 07:07:39.249670 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:39 crc kubenswrapper[4731]: I1129 07:07:39.249685 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:39Z","lastTransitionTime":"2025-11-29T07:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:39 crc kubenswrapper[4731]: I1129 07:07:39.352860 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:39 crc kubenswrapper[4731]: I1129 07:07:39.352913 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:39 crc kubenswrapper[4731]: I1129 07:07:39.352925 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:39 crc kubenswrapper[4731]: I1129 07:07:39.352947 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:39 crc kubenswrapper[4731]: I1129 07:07:39.352962 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:39Z","lastTransitionTime":"2025-11-29T07:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:39 crc kubenswrapper[4731]: I1129 07:07:39.456153 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:39 crc kubenswrapper[4731]: I1129 07:07:39.456457 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:39 crc kubenswrapper[4731]: I1129 07:07:39.456590 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:39 crc kubenswrapper[4731]: I1129 07:07:39.456708 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:39 crc kubenswrapper[4731]: I1129 07:07:39.456814 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:39Z","lastTransitionTime":"2025-11-29T07:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:39 crc kubenswrapper[4731]: I1129 07:07:39.560249 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:39 crc kubenswrapper[4731]: I1129 07:07:39.560297 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:39 crc kubenswrapper[4731]: I1129 07:07:39.560311 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:39 crc kubenswrapper[4731]: I1129 07:07:39.560332 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:39 crc kubenswrapper[4731]: I1129 07:07:39.560347 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:39Z","lastTransitionTime":"2025-11-29T07:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:39 crc kubenswrapper[4731]: I1129 07:07:39.664111 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:39 crc kubenswrapper[4731]: I1129 07:07:39.664422 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:39 crc kubenswrapper[4731]: I1129 07:07:39.664491 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:39 crc kubenswrapper[4731]: I1129 07:07:39.664581 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:39 crc kubenswrapper[4731]: I1129 07:07:39.664652 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:39Z","lastTransitionTime":"2025-11-29T07:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:39 crc kubenswrapper[4731]: I1129 07:07:39.768870 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:39 crc kubenswrapper[4731]: I1129 07:07:39.768932 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:39 crc kubenswrapper[4731]: I1129 07:07:39.768948 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:39 crc kubenswrapper[4731]: I1129 07:07:39.768975 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:39 crc kubenswrapper[4731]: I1129 07:07:39.768987 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:39Z","lastTransitionTime":"2025-11-29T07:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:07:39 crc kubenswrapper[4731]: I1129 07:07:39.806483 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:07:39 crc kubenswrapper[4731]: E1129 07:07:39.806692 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:07:39 crc kubenswrapper[4731]: I1129 07:07:39.806483 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:07:39 crc kubenswrapper[4731]: E1129 07:07:39.806924 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:07:39 crc kubenswrapper[4731]: I1129 07:07:39.806692 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:07:39 crc kubenswrapper[4731]: E1129 07:07:39.807077 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:07:39 crc kubenswrapper[4731]: I1129 07:07:39.872442 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:39 crc kubenswrapper[4731]: I1129 07:07:39.872500 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:39 crc kubenswrapper[4731]: I1129 07:07:39.872509 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:39 crc kubenswrapper[4731]: I1129 07:07:39.872529 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:39 crc kubenswrapper[4731]: I1129 07:07:39.872544 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:39Z","lastTransitionTime":"2025-11-29T07:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:39 crc kubenswrapper[4731]: I1129 07:07:39.974912 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:39 crc kubenswrapper[4731]: I1129 07:07:39.974965 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:39 crc kubenswrapper[4731]: I1129 07:07:39.974976 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:39 crc kubenswrapper[4731]: I1129 07:07:39.974996 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:39 crc kubenswrapper[4731]: I1129 07:07:39.975012 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:39Z","lastTransitionTime":"2025-11-29T07:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:40 crc kubenswrapper[4731]: I1129 07:07:40.078321 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:40 crc kubenswrapper[4731]: I1129 07:07:40.078396 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:40 crc kubenswrapper[4731]: I1129 07:07:40.078407 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:40 crc kubenswrapper[4731]: I1129 07:07:40.078428 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:40 crc kubenswrapper[4731]: I1129 07:07:40.078450 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:40Z","lastTransitionTime":"2025-11-29T07:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:40 crc kubenswrapper[4731]: I1129 07:07:40.182124 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:40 crc kubenswrapper[4731]: I1129 07:07:40.182178 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:40 crc kubenswrapper[4731]: I1129 07:07:40.182189 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:40 crc kubenswrapper[4731]: I1129 07:07:40.182209 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:40 crc kubenswrapper[4731]: I1129 07:07:40.182226 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:40Z","lastTransitionTime":"2025-11-29T07:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:40 crc kubenswrapper[4731]: I1129 07:07:40.285998 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:40 crc kubenswrapper[4731]: I1129 07:07:40.286052 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:40 crc kubenswrapper[4731]: I1129 07:07:40.286062 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:40 crc kubenswrapper[4731]: I1129 07:07:40.286081 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:40 crc kubenswrapper[4731]: I1129 07:07:40.286095 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:40Z","lastTransitionTime":"2025-11-29T07:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:40 crc kubenswrapper[4731]: I1129 07:07:40.389473 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:40 crc kubenswrapper[4731]: I1129 07:07:40.389555 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:40 crc kubenswrapper[4731]: I1129 07:07:40.389596 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:40 crc kubenswrapper[4731]: I1129 07:07:40.389616 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:40 crc kubenswrapper[4731]: I1129 07:07:40.389629 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:40Z","lastTransitionTime":"2025-11-29T07:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:40 crc kubenswrapper[4731]: I1129 07:07:40.493624 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:40 crc kubenswrapper[4731]: I1129 07:07:40.493694 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:40 crc kubenswrapper[4731]: I1129 07:07:40.493708 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:40 crc kubenswrapper[4731]: I1129 07:07:40.493732 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:40 crc kubenswrapper[4731]: I1129 07:07:40.493747 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:40Z","lastTransitionTime":"2025-11-29T07:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:40 crc kubenswrapper[4731]: I1129 07:07:40.597833 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:40 crc kubenswrapper[4731]: I1129 07:07:40.597893 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:40 crc kubenswrapper[4731]: I1129 07:07:40.597905 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:40 crc kubenswrapper[4731]: I1129 07:07:40.597928 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:40 crc kubenswrapper[4731]: I1129 07:07:40.597941 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:40Z","lastTransitionTime":"2025-11-29T07:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:40 crc kubenswrapper[4731]: I1129 07:07:40.701632 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:40 crc kubenswrapper[4731]: I1129 07:07:40.701688 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:40 crc kubenswrapper[4731]: I1129 07:07:40.701698 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:40 crc kubenswrapper[4731]: I1129 07:07:40.701718 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:40 crc kubenswrapper[4731]: I1129 07:07:40.701728 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:40Z","lastTransitionTime":"2025-11-29T07:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:07:40 crc kubenswrapper[4731]: I1129 07:07:40.805983 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:40 crc kubenswrapper[4731]: I1129 07:07:40.806021 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2pp9l" Nov 29 07:07:40 crc kubenswrapper[4731]: I1129 07:07:40.806067 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:40 crc kubenswrapper[4731]: I1129 07:07:40.806084 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:40 crc kubenswrapper[4731]: I1129 07:07:40.806106 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:40 crc kubenswrapper[4731]: I1129 07:07:40.806122 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:40Z","lastTransitionTime":"2025-11-29T07:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:07:40 crc kubenswrapper[4731]: E1129 07:07:40.806206 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2pp9l" podUID="944440c1-51b2-4c49-b5fd-4c024fc33ace" Nov 29 07:07:40 crc kubenswrapper[4731]: I1129 07:07:40.909356 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:40 crc kubenswrapper[4731]: I1129 07:07:40.909438 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:40 crc kubenswrapper[4731]: I1129 07:07:40.909456 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:40 crc kubenswrapper[4731]: I1129 07:07:40.909478 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:40 crc kubenswrapper[4731]: I1129 07:07:40.909494 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:40Z","lastTransitionTime":"2025-11-29T07:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:41 crc kubenswrapper[4731]: I1129 07:07:41.012970 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:41 crc kubenswrapper[4731]: I1129 07:07:41.013030 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:41 crc kubenswrapper[4731]: I1129 07:07:41.013044 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:41 crc kubenswrapper[4731]: I1129 07:07:41.013068 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:41 crc kubenswrapper[4731]: I1129 07:07:41.013083 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:41Z","lastTransitionTime":"2025-11-29T07:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:41 crc kubenswrapper[4731]: I1129 07:07:41.116051 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:41 crc kubenswrapper[4731]: I1129 07:07:41.116111 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:41 crc kubenswrapper[4731]: I1129 07:07:41.116120 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:41 crc kubenswrapper[4731]: I1129 07:07:41.116136 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:41 crc kubenswrapper[4731]: I1129 07:07:41.116145 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:41Z","lastTransitionTime":"2025-11-29T07:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:41 crc kubenswrapper[4731]: I1129 07:07:41.219197 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:41 crc kubenswrapper[4731]: I1129 07:07:41.219816 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:41 crc kubenswrapper[4731]: I1129 07:07:41.220117 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:41 crc kubenswrapper[4731]: I1129 07:07:41.220151 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:41 crc kubenswrapper[4731]: I1129 07:07:41.220166 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:41Z","lastTransitionTime":"2025-11-29T07:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:41 crc kubenswrapper[4731]: I1129 07:07:41.323618 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:41 crc kubenswrapper[4731]: I1129 07:07:41.323662 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:41 crc kubenswrapper[4731]: I1129 07:07:41.323672 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:41 crc kubenswrapper[4731]: I1129 07:07:41.323696 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:41 crc kubenswrapper[4731]: I1129 07:07:41.323714 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:41Z","lastTransitionTime":"2025-11-29T07:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:41 crc kubenswrapper[4731]: I1129 07:07:41.426715 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:41 crc kubenswrapper[4731]: I1129 07:07:41.426774 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:41 crc kubenswrapper[4731]: I1129 07:07:41.426786 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:41 crc kubenswrapper[4731]: I1129 07:07:41.426804 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:41 crc kubenswrapper[4731]: I1129 07:07:41.426816 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:41Z","lastTransitionTime":"2025-11-29T07:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:41 crc kubenswrapper[4731]: I1129 07:07:41.529135 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:41 crc kubenswrapper[4731]: I1129 07:07:41.529205 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:41 crc kubenswrapper[4731]: I1129 07:07:41.529215 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:41 crc kubenswrapper[4731]: I1129 07:07:41.529236 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:41 crc kubenswrapper[4731]: I1129 07:07:41.529248 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:41Z","lastTransitionTime":"2025-11-29T07:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:41 crc kubenswrapper[4731]: I1129 07:07:41.631614 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:41 crc kubenswrapper[4731]: I1129 07:07:41.631691 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:41 crc kubenswrapper[4731]: I1129 07:07:41.631701 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:41 crc kubenswrapper[4731]: I1129 07:07:41.631720 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:41 crc kubenswrapper[4731]: I1129 07:07:41.631731 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:41Z","lastTransitionTime":"2025-11-29T07:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:41 crc kubenswrapper[4731]: I1129 07:07:41.734722 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:41 crc kubenswrapper[4731]: I1129 07:07:41.734767 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:41 crc kubenswrapper[4731]: I1129 07:07:41.734776 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:41 crc kubenswrapper[4731]: I1129 07:07:41.734792 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:41 crc kubenswrapper[4731]: I1129 07:07:41.734802 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:41Z","lastTransitionTime":"2025-11-29T07:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:07:41 crc kubenswrapper[4731]: I1129 07:07:41.806342 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:07:41 crc kubenswrapper[4731]: E1129 07:07:41.806643 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:07:41 crc kubenswrapper[4731]: I1129 07:07:41.806684 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:07:41 crc kubenswrapper[4731]: E1129 07:07:41.806848 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:07:41 crc kubenswrapper[4731]: I1129 07:07:41.807082 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:07:41 crc kubenswrapper[4731]: E1129 07:07:41.807311 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:07:41 crc kubenswrapper[4731]: I1129 07:07:41.824529 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:41Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:41 crc kubenswrapper[4731]: I1129 07:07:41.837550 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:41 crc kubenswrapper[4731]: I1129 07:07:41.837631 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:41 crc kubenswrapper[4731]: I1129 07:07:41.837644 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:41 crc kubenswrapper[4731]: I1129 07:07:41.837668 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:41 crc kubenswrapper[4731]: I1129 07:07:41.837680 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:41Z","lastTransitionTime":"2025-11-29T07:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:07:41 crc kubenswrapper[4731]: I1129 07:07:41.841685 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:41Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:41 crc kubenswrapper[4731]: I1129 07:07:41.858849 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e77534f53f3d0ba9457f1e42e081f8fa9ddeff238db875686b42dd35fca480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6830f693b439b32e14b4e825fc51d03dddc4834afc7f2fb9c403ebdc706acb7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:41Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:41 crc kubenswrapper[4731]: I1129 07:07:41.876680 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sc4p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"46a65d85-a3f6-4c1f-8a87-799ccfb861c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c8d9cccf59a8d03f8346fd5c4403c377376ed7e429a1cd7b9ea0507a491342e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e803b8c855802ae1b136acaaf383df0a2f5efccc46c8605813d43a3c7c39a700\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e803b8c855802ae1b136acaaf383df0a2f5efccc46c8605813d43a3c7c39a700\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53c810e72af2e060f6378cc0809584254c09ab79d506a8d1b4ff14cc9f005d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53c810e72af2e060f6378cc0809584254c09ab79d506a8d1b4ff14cc9f005d08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:36Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db06df6b97d6ad28e3d1df0b237670573e2dc84ad1e4975431d0d6fa74497617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db06df6b97d6ad28e3d1df0b237670573e2dc84ad1e4975431d0d6fa74497617\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e4e
1564bb5cd4109016d9716311a4f93f63f7f4f694d1f021390c91495faab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22e4e1564bb5cd4109016d9716311a4f93f63f7f4f694d1f021390c91495faab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76097df468fcbf7cbc2a4d0a4a847310bcc371812fd7601716bb946d2ba43b1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76097df468fcbf7cbc2a4d0a4a847310bcc371812fd7601716bb946d2ba43b1f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:43Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2de6a57f63f8b266688eb7d7efef0c3f0fe74a9cf9911709b8c2f6d6cd2f587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2de6a57f63f8b266688eb7d7efef0c3f0fe74a9cf9911709b8c2f6d6cd2f587\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgbzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7sc4p\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:41Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:41 crc kubenswrapper[4731]: I1129 07:07:41.891214 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2pp9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"944440c1-51b2-4c49-b5fd-4c024fc33ace\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkwn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkwn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2pp9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:41Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:41 crc 
kubenswrapper[4731]: I1129 07:07:41.908140 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"174eea76-5d67-4e10-9a17-b4efc32676b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a567cf0e6a828fe031138f9f7874d01fe6e034cb814a3c2116c4918ddab71c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6101f1cec3dc38ec587fb109d42b09c353cf5ed6d89c9a2b799eb456b821cf58\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5af4e6864674fc088ce1e153b29b66072dc3aae9b7c8c2c5dd3862a505dbac6a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86bdecb59f656fef2662f8fc78d427a4a9816fc211eedca957f20b3103f63a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:41Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:41 crc kubenswrapper[4731]: I1129 07:07:41.925254 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea9d7960c59f71d1f4141843ab22ff9910f442c6a92fd496e9923a11fd19578b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:41Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:41 crc kubenswrapper[4731]: I1129 07:07:41.942465 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:41 crc kubenswrapper[4731]: I1129 07:07:41.942508 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:41 crc kubenswrapper[4731]: I1129 07:07:41.942518 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:41 crc kubenswrapper[4731]: I1129 07:07:41.942535 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:41 crc kubenswrapper[4731]: I1129 07:07:41.942547 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:41Z","lastTransitionTime":"2025-11-29T07:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:41 crc kubenswrapper[4731]: I1129 07:07:41.943967 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n6mtz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dc0aca-1039-4a30-a83e-48bd320d0eae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0b16629d98d0b9d5ede0c57e9befc86af08f3b501ef46c1e17b38a2ba4e366a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2bbjx\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n6mtz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:41Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:41 crc kubenswrapper[4731]: I1129 07:07:41.962340 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5rsbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bae9d331b627f3cb340763c8fae4df7b74979611e8643e081beaa89f127f9c86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4026306c62b322aa02e08b8ebd6f9d5d1eaa75e5026c9b1fc4c8d3c1ff77e2a5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:07:24Z\\\",\\\"message\\\":\\\"2025-11-29T07:06:39+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_7fca3a92-1282-434f-ac66-0accb2c57a4a\\\\n2025-11-29T07:06:39+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_7fca3a92-1282-434f-ac66-0accb2c57a4a to /host/opt/cni/bin/\\\\n2025-11-29T07:06:39Z [verbose] multus-daemon started\\\\n2025-11-29T07:06:39Z [verbose] Readiness Indicator file check\\\\n2025-11-29T07:07:24Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9z2qj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5rsbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:41Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:41 crc kubenswrapper[4731]: I1129 07:07:41.977381 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a05e3005-6e8b-4f70-830b-e7313d4bf967\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e65aa9e951747e7bd3ae1dd6212a34576cd4aa03de1753d6d3f193d4c95ecead\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242
b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a56ab2b9d47cd0dcd632f98defa5f4bcb711032701ea4ff28701daa43a2dca9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a56ab2b9d47cd0dcd632f98defa5f4bcb711032701ea4ff28701daa43a2dca9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:41Z is after 
2025-08-24T17:21:41Z" Nov 29 07:07:41 crc kubenswrapper[4731]: I1129 07:07:41.991819 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:41Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:42 crc kubenswrapper[4731]: I1129 07:07:42.004737 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da8cd4c4826d5a431134fff8cd78b168bfededf913cc47058413e38aadd335ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-29T07:07:42Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:42 crc kubenswrapper[4731]: I1129 07:07:42.023841 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d4585c4-ac4a-4268-b25e-47509c17cfe2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c34e10091af35202da4f97804418c6232aaccd75303ec402ed3f52de19eba30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77f2885a107dcb4bca4dd71970f28cd5c82abd26240a9b47a25bab79d7a6cfca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a3bcebacca0fb7288976faa2ee468f1496339cc6deba0ce36b29d0537d493f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e2099f7c7618e48c7ca7d3a76e9e63e4af75068a72efb7b27b3565c270787f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ec6b9ad02e848a0669ac486da3f2ccc8ca363136fcd40618c45f16e64388bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd71ed13728cbf7ab7ade3184cf8b579a598046c6cccdd7796f757aa7fc1c0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90ceaa53a40826a30748a2057fff56bcb3597cf73ed5e0a18bc45d98e0e6c1b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90ceaa53a40826a30748a2057fff56bcb3597cf73ed5e0a18bc45d98e0e6c1b6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:07:30Z\\\",\\\"message\\\":\\\":07:30.851855 6714 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nI1129 07:07:30.851899 6714 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1129 07:07:30.851906 6714 factory.go:656] Stopping watch factory\\\\nI1129 07:07:30.851908 6714 handler.go:208] Removed 
*v1.Pod event handler 3\\\\nI1129 07:07:30.851923 6714 ovnkube.go:599] Stopped ovnkube\\\\nI1129 07:07:30.851912 6714 obj_retry.go:409] Going to retry *v1.Pod resource setup for 1 objects: [openshift-multus/network-metrics-daemon-2pp9l]\\\\nI1129 07:07:30.851939 6714 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1129 07:07:30.851984 6714 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-2pp9l before timer (time: 2025-11-29 07:07:31.673997297 +0000 UTC m=+1.581975825): skip\\\\nI1129 07:07:30.852015 6714 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1129 07:07:30.852024 6714 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 114.243µs)\\\\nI1129 07:07:30.852003 6714 handler.go:208] Removed *v1.Node event handler 2\\\\nF1129 07:07:30.852102 6714 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:07:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-x4t5j_openshift-ovn-kubernetes(7d4585c4-ac4a-4268-b25e-47509c17cfe2)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a6b60ccbba0152ec034f29ebd79fc5f35ee869bcbec1983425f40df246c463c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b769a8a935c8eb447fe825cfb1b4c91be5abbe5992798442096dd3c00ce7c39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b769a8a935c8eb447
fe825cfb1b4c91be5abbe5992798442096dd3c00ce7c39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnvzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4t5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:42Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:42 crc kubenswrapper[4731]: I1129 07:07:42.037330 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8tvx8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"719caf85-c94c-4dc2-b28f-f5c4ec29e79e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d765b2fbbea1c5491ee3ac49d3d217921d28d094b6204c0c9ca942f5975f7b8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cfkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8tvx8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:42Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:42 crc kubenswrapper[4731]: I1129 07:07:42.045269 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:42 crc kubenswrapper[4731]: I1129 07:07:42.045367 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:42 crc kubenswrapper[4731]: I1129 07:07:42.045379 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:42 crc kubenswrapper[4731]: I1129 07:07:42.045403 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:42 crc kubenswrapper[4731]: I1129 07:07:42.045416 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:42Z","lastTransitionTime":"2025-11-29T07:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:42 crc kubenswrapper[4731]: I1129 07:07:42.050867 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7d5hj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de2552c-90ca-42ab-94c0-365f2c2380d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca701ec73409c337cc55b1606c0f5def9e370c9c47b6d8f34f05e799ebc3ff36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l44ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cf95f33df0c02101f10f47b6794395211997d2a9741a50b62be363fb5b96dd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l44ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7d5hj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:42Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:42 crc kubenswrapper[4731]: I1129 07:07:42.074815 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2bf79c20-5349-4e16-ba97-b4e5b4d662c8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b45105b8ffd0958e82e91f4a252fd55648e7df7c3e0adaabfef3fac21b40d89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://930e25b3b5d987a7655d880335384b628a769849dd65efe24d54c9478e62ff59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f69b0a37a46439ddeb91cbf2b55a55f7a357cee5a791f0313caa80273f7d974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a42a8bbcb40e859357977aedd6999a512573c8d8e76789eed2d7a9e25603b292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d4a87ec8979104dcc066b1336256b9940cbccd2aeb5eaba7e9046110386b43c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec7888f2055543d9210eb88271cf858bb1c6c9daafcf8d5eebe5ae66b140be3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cec7888f2055543d9210eb88271cf858bb1c6c9daafcf8d5eebe5ae66b140be3\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-29T07:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6323b45d0e45e84e5c419dded429001a3c2a3bfb950d1069c12a840b07c5d581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6323b45d0e45e84e5c419dded429001a3c2a3bfb950d1069c12a840b07c5d581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://084da6a5ab5f96212c90ee40656376d78e2d377cfd9a7f0b874f085fead3cbfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://084da6a5ab5f96212c90ee40656376d78e2d377cfd9a7f0b874f085fead3cbfc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:42Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:42 crc kubenswrapper[4731]: I1129 07:07:42.094942 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5198aca-b43e-4b59-8ec1-15b9d1cd4f4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0b6a87bb787bc87ec1a2b21918e754bda6b0f197bfdae20b010e8464cb9e70d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1938ff140a7af83ef53896ed4482dd3cf0a5429675ca28291de0915b8da27e81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b19f3d5d17a1f3552b659627294f3a47d667e2da2d3ee71e44fc876614f2f5a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea4dcd3b0a9fda14855ec9e54caa59fbbd8953c958ad8d0abc6ad50d8a9143e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f488e951e0b022155c26f1b4b73363e8e5b82ab6f14e03f9812fe279ff21d36\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:06:25Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1129 07:06:25.309042 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1031981952/tls.crt::/tmp/serving-cert-1031981952/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764399966\\\\\\\\\\\\\\\" (2025-11-29 07:06:06 +0000 UTC to 2025-12-29 07:06:07 +0000 UTC (now=2025-11-29 07:06:25.308983691 +0000 UTC))\\\\\\\"\\\\nI1129 07:06:25.309145 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1129 07:06:25.309167 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1129 07:06:25.309190 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764399977\\\\\\\\\\\\\\\" [serving] 
validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764399977\\\\\\\\\\\\\\\" (2025-11-29 06:06:17 +0000 UTC to 2026-11-29 06:06:17 +0000 UTC (now=2025-11-29 07:06:25.309162626 +0000 UTC))\\\\\\\"\\\\nI1129 07:06:25.309217 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1129 07:06:25.309274 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1129 07:06:25.309309 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1031981952/tls.crt::/tmp/serving-cert-1031981952/tls.key\\\\\\\"\\\\nI1129 07:06:25.309020 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1129 07:06:25.309457 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1129 07:06:25.309507 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1129 07:06:25.311335 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1277bda1e922e493f1c36b64a4584458ab41cee4b1930d2d6409a585be9e3b38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b7d71882ecd45557e02dfe61f572da923732714cd4480848340fbe72dabcd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34b7d71882ecd45557e02dfe61f572da923732714cd4480848340fbe72dabcd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:42Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:42 crc kubenswrapper[4731]: I1129 07:07:42.110946 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f17ec67c-91b4-419f-b031-38a828a552a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a859abc8925062e0b6f06edef1a87524357b5115db3c780653a4d378af6ba04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b
89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fe7e74083569ac159e34aecb62fd9a2bc89cb67c25d104efa3ecd93b71742b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e50d4319120f4c6445252762298822db75d04cad45eff91b9ee9e82335e0f6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-2
9T07:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2e329eab7a80200257ef2856155c442452b453c8c9cfa15f790d4688ca74573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2e329eab7a80200257ef2856155c442452b453c8c9cfa15f790d4688ca74573\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:06:03Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:02Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:42Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:42 crc kubenswrapper[4731]: I1129 07:07:42.127165 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2302dbb7-38db-4752-a5d0-2d055da3aec3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b12c2e84a5d5a5e4b2dfdc3a2052706d4b969ccd45719e05be8f1dbec5555cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shf4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2ffc93896b04d748f6dfda932202450e20bc7b2
98cc0d79c0c6499ead481d8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shf4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:06:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rscr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:42Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:42 crc kubenswrapper[4731]: I1129 07:07:42.148816 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:42 crc kubenswrapper[4731]: I1129 07:07:42.148886 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:42 crc kubenswrapper[4731]: I1129 07:07:42.148907 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:42 crc 
kubenswrapper[4731]: I1129 07:07:42.148931 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:42 crc kubenswrapper[4731]: I1129 07:07:42.148949 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:42Z","lastTransitionTime":"2025-11-29T07:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:07:43 crc kubenswrapper[4731]: I1129 07:07:43.431001 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2pp9l" Nov 29 07:07:43 crc kubenswrapper[4731]: E1129 07:07:43.431161 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2pp9l" podUID="944440c1-51b2-4c49-b5fd-4c024fc33ace" Nov 29 07:07:43 crc kubenswrapper[4731]: I1129 07:07:43.434646 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:43 crc kubenswrapper[4731]: I1129 07:07:43.434707 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:43 crc kubenswrapper[4731]: I1129 07:07:43.434722 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:43 crc kubenswrapper[4731]: I1129 07:07:43.434746 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:43 crc kubenswrapper[4731]: I1129 07:07:43.434765 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:43Z","lastTransitionTime":"2025-11-29T07:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:07:43 crc kubenswrapper[4731]: I1129 07:07:43.436493 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:07:43 crc kubenswrapper[4731]: I1129 07:07:43.436600 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:07:43 crc kubenswrapper[4731]: E1129 07:07:43.436675 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:07:43 crc kubenswrapper[4731]: I1129 07:07:43.436598 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:07:43 crc kubenswrapper[4731]: E1129 07:07:43.436763 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:07:43 crc kubenswrapper[4731]: E1129 07:07:43.436884 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:07:43 crc kubenswrapper[4731]: I1129 07:07:43.537962 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:43 crc kubenswrapper[4731]: I1129 07:07:43.538026 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:43 crc kubenswrapper[4731]: I1129 07:07:43.538042 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:43 crc kubenswrapper[4731]: I1129 07:07:43.538063 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:43 crc kubenswrapper[4731]: I1129 07:07:43.538443 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:43Z","lastTransitionTime":"2025-11-29T07:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:43 crc kubenswrapper[4731]: I1129 07:07:43.642272 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:43 crc kubenswrapper[4731]: I1129 07:07:43.642332 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:43 crc kubenswrapper[4731]: I1129 07:07:43.642345 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:43 crc kubenswrapper[4731]: I1129 07:07:43.642365 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:43 crc kubenswrapper[4731]: I1129 07:07:43.642376 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:43Z","lastTransitionTime":"2025-11-29T07:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:43 crc kubenswrapper[4731]: I1129 07:07:43.744761 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:43 crc kubenswrapper[4731]: I1129 07:07:43.744837 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:43 crc kubenswrapper[4731]: I1129 07:07:43.744850 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:43 crc kubenswrapper[4731]: I1129 07:07:43.744868 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:43 crc kubenswrapper[4731]: I1129 07:07:43.744879 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:43Z","lastTransitionTime":"2025-11-29T07:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:43 crc kubenswrapper[4731]: I1129 07:07:43.848396 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:43 crc kubenswrapper[4731]: I1129 07:07:43.848519 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:43 crc kubenswrapper[4731]: I1129 07:07:43.848535 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:43 crc kubenswrapper[4731]: I1129 07:07:43.848589 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:43 crc kubenswrapper[4731]: I1129 07:07:43.848606 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:43Z","lastTransitionTime":"2025-11-29T07:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:43 crc kubenswrapper[4731]: I1129 07:07:43.951345 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:43 crc kubenswrapper[4731]: I1129 07:07:43.951397 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:43 crc kubenswrapper[4731]: I1129 07:07:43.951409 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:43 crc kubenswrapper[4731]: I1129 07:07:43.951427 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:43 crc kubenswrapper[4731]: I1129 07:07:43.951440 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:43Z","lastTransitionTime":"2025-11-29T07:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:44 crc kubenswrapper[4731]: I1129 07:07:44.054100 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:44 crc kubenswrapper[4731]: I1129 07:07:44.054177 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:44 crc kubenswrapper[4731]: I1129 07:07:44.054198 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:44 crc kubenswrapper[4731]: I1129 07:07:44.054223 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:44 crc kubenswrapper[4731]: I1129 07:07:44.054237 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:44Z","lastTransitionTime":"2025-11-29T07:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:44 crc kubenswrapper[4731]: I1129 07:07:44.157475 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:44 crc kubenswrapper[4731]: I1129 07:07:44.157523 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:44 crc kubenswrapper[4731]: I1129 07:07:44.157540 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:44 crc kubenswrapper[4731]: I1129 07:07:44.157576 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:44 crc kubenswrapper[4731]: I1129 07:07:44.157588 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:44Z","lastTransitionTime":"2025-11-29T07:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:44 crc kubenswrapper[4731]: I1129 07:07:44.260415 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:44 crc kubenswrapper[4731]: I1129 07:07:44.260461 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:44 crc kubenswrapper[4731]: I1129 07:07:44.260471 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:44 crc kubenswrapper[4731]: I1129 07:07:44.260490 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:44 crc kubenswrapper[4731]: I1129 07:07:44.260502 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:44Z","lastTransitionTime":"2025-11-29T07:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:44 crc kubenswrapper[4731]: I1129 07:07:44.302865 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:44 crc kubenswrapper[4731]: I1129 07:07:44.302932 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:44 crc kubenswrapper[4731]: I1129 07:07:44.302943 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:44 crc kubenswrapper[4731]: I1129 07:07:44.302963 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:44 crc kubenswrapper[4731]: I1129 07:07:44.302978 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:44Z","lastTransitionTime":"2025-11-29T07:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:44 crc kubenswrapper[4731]: E1129 07:07:44.316232 4731 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:07:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:07:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:07:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:07:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5aaf0a18-6c01-4835-aaaa-2edfd1f90942\\\",\\\"systemUUID\\\":\\\"f3d115a6-d015-4b84-85ef-26fa0172b441\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:44Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:44 crc kubenswrapper[4731]: I1129 07:07:44.320714 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:44 crc kubenswrapper[4731]: I1129 07:07:44.320756 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:44 crc kubenswrapper[4731]: I1129 07:07:44.320767 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:44 crc kubenswrapper[4731]: I1129 07:07:44.320784 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:44 crc kubenswrapper[4731]: I1129 07:07:44.320798 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:44Z","lastTransitionTime":"2025-11-29T07:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:44 crc kubenswrapper[4731]: E1129 07:07:44.333586 4731 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:07:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:07:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:07:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:07:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5aaf0a18-6c01-4835-aaaa-2edfd1f90942\\\",\\\"systemUUID\\\":\\\"f3d115a6-d015-4b84-85ef-26fa0172b441\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:44Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:44 crc kubenswrapper[4731]: I1129 07:07:44.338978 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:44 crc kubenswrapper[4731]: I1129 07:07:44.339043 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:44 crc kubenswrapper[4731]: I1129 07:07:44.339055 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:44 crc kubenswrapper[4731]: I1129 07:07:44.339072 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:44 crc kubenswrapper[4731]: I1129 07:07:44.339101 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:44Z","lastTransitionTime":"2025-11-29T07:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:44 crc kubenswrapper[4731]: E1129 07:07:44.352064 4731 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:07:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:07:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:07:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:07:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5aaf0a18-6c01-4835-aaaa-2edfd1f90942\\\",\\\"systemUUID\\\":\\\"f3d115a6-d015-4b84-85ef-26fa0172b441\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:44Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:44 crc kubenswrapper[4731]: I1129 07:07:44.356539 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:44 crc kubenswrapper[4731]: I1129 07:07:44.356615 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:44 crc kubenswrapper[4731]: I1129 07:07:44.356626 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:44 crc kubenswrapper[4731]: I1129 07:07:44.356643 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:44 crc kubenswrapper[4731]: I1129 07:07:44.356653 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:44Z","lastTransitionTime":"2025-11-29T07:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:44 crc kubenswrapper[4731]: E1129 07:07:44.376282 4731 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:07:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:07:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:07:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:07:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5aaf0a18-6c01-4835-aaaa-2edfd1f90942\\\",\\\"systemUUID\\\":\\\"f3d115a6-d015-4b84-85ef-26fa0172b441\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:44Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:44 crc kubenswrapper[4731]: I1129 07:07:44.382554 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:44 crc kubenswrapper[4731]: I1129 07:07:44.382633 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:44 crc kubenswrapper[4731]: I1129 07:07:44.382651 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:44 crc kubenswrapper[4731]: I1129 07:07:44.382676 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:44 crc kubenswrapper[4731]: I1129 07:07:44.382693 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:44Z","lastTransitionTime":"2025-11-29T07:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:44 crc kubenswrapper[4731]: E1129 07:07:44.399363 4731 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:07:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:07:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:07:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:07:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:07:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5aaf0a18-6c01-4835-aaaa-2edfd1f90942\\\",\\\"systemUUID\\\":\\\"f3d115a6-d015-4b84-85ef-26fa0172b441\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:07:44Z is after 2025-08-24T17:21:41Z" Nov 29 07:07:44 crc kubenswrapper[4731]: E1129 07:07:44.399608 4731 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 29 07:07:44 crc kubenswrapper[4731]: I1129 07:07:44.402234 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:44 crc kubenswrapper[4731]: I1129 07:07:44.402295 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:44 crc kubenswrapper[4731]: I1129 07:07:44.402304 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:44 crc kubenswrapper[4731]: I1129 07:07:44.402337 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:44 crc kubenswrapper[4731]: I1129 07:07:44.402347 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:44Z","lastTransitionTime":"2025-11-29T07:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:44 crc kubenswrapper[4731]: I1129 07:07:44.505591 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:44 crc kubenswrapper[4731]: I1129 07:07:44.505653 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:44 crc kubenswrapper[4731]: I1129 07:07:44.505670 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:44 crc kubenswrapper[4731]: I1129 07:07:44.505690 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:44 crc kubenswrapper[4731]: I1129 07:07:44.505704 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:44Z","lastTransitionTime":"2025-11-29T07:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:44 crc kubenswrapper[4731]: I1129 07:07:44.608147 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:44 crc kubenswrapper[4731]: I1129 07:07:44.608198 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:44 crc kubenswrapper[4731]: I1129 07:07:44.608209 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:44 crc kubenswrapper[4731]: I1129 07:07:44.608226 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:44 crc kubenswrapper[4731]: I1129 07:07:44.608238 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:44Z","lastTransitionTime":"2025-11-29T07:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:44 crc kubenswrapper[4731]: I1129 07:07:44.711205 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:44 crc kubenswrapper[4731]: I1129 07:07:44.711259 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:44 crc kubenswrapper[4731]: I1129 07:07:44.711269 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:44 crc kubenswrapper[4731]: I1129 07:07:44.711294 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:44 crc kubenswrapper[4731]: I1129 07:07:44.711316 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:44Z","lastTransitionTime":"2025-11-29T07:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:07:44 crc kubenswrapper[4731]: I1129 07:07:44.806931 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:07:44 crc kubenswrapper[4731]: I1129 07:07:44.806962 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:07:44 crc kubenswrapper[4731]: E1129 07:07:44.807105 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:07:44 crc kubenswrapper[4731]: I1129 07:07:44.807234 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2pp9l" Nov 29 07:07:44 crc kubenswrapper[4731]: E1129 07:07:44.807652 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:07:44 crc kubenswrapper[4731]: E1129 07:07:44.807758 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2pp9l" podUID="944440c1-51b2-4c49-b5fd-4c024fc33ace" Nov 29 07:07:44 crc kubenswrapper[4731]: I1129 07:07:44.808057 4731 scope.go:117] "RemoveContainer" containerID="90ceaa53a40826a30748a2057fff56bcb3597cf73ed5e0a18bc45d98e0e6c1b6" Nov 29 07:07:44 crc kubenswrapper[4731]: E1129 07:07:44.808253 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-x4t5j_openshift-ovn-kubernetes(7d4585c4-ac4a-4268-b25e-47509c17cfe2)\"" pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" podUID="7d4585c4-ac4a-4268-b25e-47509c17cfe2" Nov 29 07:07:44 crc kubenswrapper[4731]: I1129 07:07:44.815393 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:44 crc kubenswrapper[4731]: I1129 07:07:44.815464 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:44 crc kubenswrapper[4731]: I1129 07:07:44.815485 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:44 crc kubenswrapper[4731]: I1129 07:07:44.815528 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:44 crc kubenswrapper[4731]: I1129 07:07:44.815541 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:44Z","lastTransitionTime":"2025-11-29T07:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:44 crc kubenswrapper[4731]: I1129 07:07:44.919433 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:44 crc kubenswrapper[4731]: I1129 07:07:44.919514 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:44 crc kubenswrapper[4731]: I1129 07:07:44.919538 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:44 crc kubenswrapper[4731]: I1129 07:07:44.919612 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:44 crc kubenswrapper[4731]: I1129 07:07:44.919638 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:44Z","lastTransitionTime":"2025-11-29T07:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:45 crc kubenswrapper[4731]: I1129 07:07:45.022959 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:45 crc kubenswrapper[4731]: I1129 07:07:45.023053 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:45 crc kubenswrapper[4731]: I1129 07:07:45.023086 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:45 crc kubenswrapper[4731]: I1129 07:07:45.023131 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:45 crc kubenswrapper[4731]: I1129 07:07:45.023165 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:45Z","lastTransitionTime":"2025-11-29T07:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:45 crc kubenswrapper[4731]: I1129 07:07:45.126374 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:45 crc kubenswrapper[4731]: I1129 07:07:45.126416 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:45 crc kubenswrapper[4731]: I1129 07:07:45.126425 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:45 crc kubenswrapper[4731]: I1129 07:07:45.126458 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:45 crc kubenswrapper[4731]: I1129 07:07:45.126470 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:45Z","lastTransitionTime":"2025-11-29T07:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:45 crc kubenswrapper[4731]: I1129 07:07:45.229184 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:45 crc kubenswrapper[4731]: I1129 07:07:45.229234 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:45 crc kubenswrapper[4731]: I1129 07:07:45.229247 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:45 crc kubenswrapper[4731]: I1129 07:07:45.229266 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:45 crc kubenswrapper[4731]: I1129 07:07:45.229278 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:45Z","lastTransitionTime":"2025-11-29T07:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:45 crc kubenswrapper[4731]: I1129 07:07:45.332636 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:45 crc kubenswrapper[4731]: I1129 07:07:45.332701 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:45 crc kubenswrapper[4731]: I1129 07:07:45.332714 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:45 crc kubenswrapper[4731]: I1129 07:07:45.332732 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:45 crc kubenswrapper[4731]: I1129 07:07:45.332748 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:45Z","lastTransitionTime":"2025-11-29T07:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:45 crc kubenswrapper[4731]: I1129 07:07:45.436202 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:45 crc kubenswrapper[4731]: I1129 07:07:45.436277 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:45 crc kubenswrapper[4731]: I1129 07:07:45.436289 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:45 crc kubenswrapper[4731]: I1129 07:07:45.436310 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:45 crc kubenswrapper[4731]: I1129 07:07:45.436324 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:45Z","lastTransitionTime":"2025-11-29T07:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:45 crc kubenswrapper[4731]: I1129 07:07:45.540035 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:45 crc kubenswrapper[4731]: I1129 07:07:45.540086 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:45 crc kubenswrapper[4731]: I1129 07:07:45.540095 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:45 crc kubenswrapper[4731]: I1129 07:07:45.540111 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:45 crc kubenswrapper[4731]: I1129 07:07:45.540122 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:45Z","lastTransitionTime":"2025-11-29T07:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:45 crc kubenswrapper[4731]: I1129 07:07:45.642884 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:45 crc kubenswrapper[4731]: I1129 07:07:45.642925 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:45 crc kubenswrapper[4731]: I1129 07:07:45.642937 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:45 crc kubenswrapper[4731]: I1129 07:07:45.642956 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:45 crc kubenswrapper[4731]: I1129 07:07:45.642969 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:45Z","lastTransitionTime":"2025-11-29T07:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:45 crc kubenswrapper[4731]: I1129 07:07:45.746769 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:45 crc kubenswrapper[4731]: I1129 07:07:45.746819 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:45 crc kubenswrapper[4731]: I1129 07:07:45.746835 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:45 crc kubenswrapper[4731]: I1129 07:07:45.746854 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:45 crc kubenswrapper[4731]: I1129 07:07:45.746865 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:45Z","lastTransitionTime":"2025-11-29T07:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:07:45 crc kubenswrapper[4731]: I1129 07:07:45.806865 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:07:45 crc kubenswrapper[4731]: E1129 07:07:45.807080 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:07:45 crc kubenswrapper[4731]: I1129 07:07:45.850929 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:45 crc kubenswrapper[4731]: I1129 07:07:45.850983 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:45 crc kubenswrapper[4731]: I1129 07:07:45.850994 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:45 crc kubenswrapper[4731]: I1129 07:07:45.851014 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:45 crc kubenswrapper[4731]: I1129 07:07:45.851031 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:45Z","lastTransitionTime":"2025-11-29T07:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:45 crc kubenswrapper[4731]: I1129 07:07:45.954681 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:45 crc kubenswrapper[4731]: I1129 07:07:45.954759 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:45 crc kubenswrapper[4731]: I1129 07:07:45.954796 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:45 crc kubenswrapper[4731]: I1129 07:07:45.954818 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:45 crc kubenswrapper[4731]: I1129 07:07:45.954834 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:45Z","lastTransitionTime":"2025-11-29T07:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:46 crc kubenswrapper[4731]: I1129 07:07:46.058932 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:46 crc kubenswrapper[4731]: I1129 07:07:46.058994 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:46 crc kubenswrapper[4731]: I1129 07:07:46.059015 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:46 crc kubenswrapper[4731]: I1129 07:07:46.059041 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:46 crc kubenswrapper[4731]: I1129 07:07:46.059064 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:46Z","lastTransitionTime":"2025-11-29T07:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:46 crc kubenswrapper[4731]: I1129 07:07:46.161764 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:46 crc kubenswrapper[4731]: I1129 07:07:46.161815 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:46 crc kubenswrapper[4731]: I1129 07:07:46.161834 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:46 crc kubenswrapper[4731]: I1129 07:07:46.161857 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:46 crc kubenswrapper[4731]: I1129 07:07:46.161879 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:46Z","lastTransitionTime":"2025-11-29T07:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:46 crc kubenswrapper[4731]: I1129 07:07:46.264925 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:46 crc kubenswrapper[4731]: I1129 07:07:46.264978 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:46 crc kubenswrapper[4731]: I1129 07:07:46.264990 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:46 crc kubenswrapper[4731]: I1129 07:07:46.265011 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:46 crc kubenswrapper[4731]: I1129 07:07:46.265024 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:46Z","lastTransitionTime":"2025-11-29T07:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:46 crc kubenswrapper[4731]: I1129 07:07:46.368060 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:46 crc kubenswrapper[4731]: I1129 07:07:46.368134 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:46 crc kubenswrapper[4731]: I1129 07:07:46.368148 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:46 crc kubenswrapper[4731]: I1129 07:07:46.368170 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:46 crc kubenswrapper[4731]: I1129 07:07:46.368194 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:46Z","lastTransitionTime":"2025-11-29T07:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:46 crc kubenswrapper[4731]: I1129 07:07:46.471042 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:46 crc kubenswrapper[4731]: I1129 07:07:46.471086 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:46 crc kubenswrapper[4731]: I1129 07:07:46.471095 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:46 crc kubenswrapper[4731]: I1129 07:07:46.471111 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:46 crc kubenswrapper[4731]: I1129 07:07:46.471121 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:46Z","lastTransitionTime":"2025-11-29T07:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:46 crc kubenswrapper[4731]: I1129 07:07:46.574064 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:46 crc kubenswrapper[4731]: I1129 07:07:46.574114 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:46 crc kubenswrapper[4731]: I1129 07:07:46.574125 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:46 crc kubenswrapper[4731]: I1129 07:07:46.574145 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:46 crc kubenswrapper[4731]: I1129 07:07:46.574158 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:46Z","lastTransitionTime":"2025-11-29T07:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:46 crc kubenswrapper[4731]: I1129 07:07:46.677503 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:46 crc kubenswrapper[4731]: I1129 07:07:46.677598 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:46 crc kubenswrapper[4731]: I1129 07:07:46.677618 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:46 crc kubenswrapper[4731]: I1129 07:07:46.677638 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:46 crc kubenswrapper[4731]: I1129 07:07:46.677652 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:46Z","lastTransitionTime":"2025-11-29T07:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:46 crc kubenswrapper[4731]: I1129 07:07:46.780959 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:46 crc kubenswrapper[4731]: I1129 07:07:46.781014 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:46 crc kubenswrapper[4731]: I1129 07:07:46.781028 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:46 crc kubenswrapper[4731]: I1129 07:07:46.781053 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:46 crc kubenswrapper[4731]: I1129 07:07:46.781067 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:46Z","lastTransitionTime":"2025-11-29T07:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:07:46 crc kubenswrapper[4731]: I1129 07:07:46.806123 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:07:46 crc kubenswrapper[4731]: I1129 07:07:46.806163 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:07:46 crc kubenswrapper[4731]: I1129 07:07:46.806371 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2pp9l" Nov 29 07:07:46 crc kubenswrapper[4731]: E1129 07:07:46.806396 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:07:46 crc kubenswrapper[4731]: E1129 07:07:46.806538 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2pp9l" podUID="944440c1-51b2-4c49-b5fd-4c024fc33ace" Nov 29 07:07:46 crc kubenswrapper[4731]: E1129 07:07:46.806795 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:07:46 crc kubenswrapper[4731]: I1129 07:07:46.886249 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:46 crc kubenswrapper[4731]: I1129 07:07:46.886902 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:46 crc kubenswrapper[4731]: I1129 07:07:46.886914 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:46 crc kubenswrapper[4731]: I1129 07:07:46.886935 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:46 crc kubenswrapper[4731]: I1129 07:07:46.886950 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:46Z","lastTransitionTime":"2025-11-29T07:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:46 crc kubenswrapper[4731]: I1129 07:07:46.990401 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:46 crc kubenswrapper[4731]: I1129 07:07:46.990481 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:46 crc kubenswrapper[4731]: I1129 07:07:46.990493 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:46 crc kubenswrapper[4731]: I1129 07:07:46.990532 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:46 crc kubenswrapper[4731]: I1129 07:07:46.990545 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:46Z","lastTransitionTime":"2025-11-29T07:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:47 crc kubenswrapper[4731]: I1129 07:07:47.093834 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:47 crc kubenswrapper[4731]: I1129 07:07:47.093888 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:47 crc kubenswrapper[4731]: I1129 07:07:47.093900 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:47 crc kubenswrapper[4731]: I1129 07:07:47.093920 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:47 crc kubenswrapper[4731]: I1129 07:07:47.093932 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:47Z","lastTransitionTime":"2025-11-29T07:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:47 crc kubenswrapper[4731]: I1129 07:07:47.197966 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:47 crc kubenswrapper[4731]: I1129 07:07:47.198017 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:47 crc kubenswrapper[4731]: I1129 07:07:47.198028 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:47 crc kubenswrapper[4731]: I1129 07:07:47.198045 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:47 crc kubenswrapper[4731]: I1129 07:07:47.198059 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:47Z","lastTransitionTime":"2025-11-29T07:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:47 crc kubenswrapper[4731]: I1129 07:07:47.301697 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:47 crc kubenswrapper[4731]: I1129 07:07:47.302073 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:47 crc kubenswrapper[4731]: I1129 07:07:47.302160 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:47 crc kubenswrapper[4731]: I1129 07:07:47.302243 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:47 crc kubenswrapper[4731]: I1129 07:07:47.302341 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:47Z","lastTransitionTime":"2025-11-29T07:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:47 crc kubenswrapper[4731]: I1129 07:07:47.405933 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:47 crc kubenswrapper[4731]: I1129 07:07:47.406288 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:47 crc kubenswrapper[4731]: I1129 07:07:47.406371 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:47 crc kubenswrapper[4731]: I1129 07:07:47.406464 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:47 crc kubenswrapper[4731]: I1129 07:07:47.406541 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:47Z","lastTransitionTime":"2025-11-29T07:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:47 crc kubenswrapper[4731]: I1129 07:07:47.510135 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:47 crc kubenswrapper[4731]: I1129 07:07:47.510608 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:47 crc kubenswrapper[4731]: I1129 07:07:47.510845 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:47 crc kubenswrapper[4731]: I1129 07:07:47.511078 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:47 crc kubenswrapper[4731]: I1129 07:07:47.511285 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:47Z","lastTransitionTime":"2025-11-29T07:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:47 crc kubenswrapper[4731]: I1129 07:07:47.614070 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:47 crc kubenswrapper[4731]: I1129 07:07:47.614160 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:47 crc kubenswrapper[4731]: I1129 07:07:47.614175 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:47 crc kubenswrapper[4731]: I1129 07:07:47.614199 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:47 crc kubenswrapper[4731]: I1129 07:07:47.614212 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:47Z","lastTransitionTime":"2025-11-29T07:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:47 crc kubenswrapper[4731]: I1129 07:07:47.717884 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:47 crc kubenswrapper[4731]: I1129 07:07:47.717936 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:47 crc kubenswrapper[4731]: I1129 07:07:47.717949 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:47 crc kubenswrapper[4731]: I1129 07:07:47.717968 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:47 crc kubenswrapper[4731]: I1129 07:07:47.717980 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:47Z","lastTransitionTime":"2025-11-29T07:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:07:47 crc kubenswrapper[4731]: I1129 07:07:47.806111 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:07:47 crc kubenswrapper[4731]: E1129 07:07:47.806272 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:07:47 crc kubenswrapper[4731]: I1129 07:07:47.821201 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:47 crc kubenswrapper[4731]: I1129 07:07:47.821251 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:47 crc kubenswrapper[4731]: I1129 07:07:47.821260 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:47 crc kubenswrapper[4731]: I1129 07:07:47.821277 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:47 crc kubenswrapper[4731]: I1129 07:07:47.821290 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:47Z","lastTransitionTime":"2025-11-29T07:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:47 crc kubenswrapper[4731]: I1129 07:07:47.924500 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:47 crc kubenswrapper[4731]: I1129 07:07:47.924550 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:47 crc kubenswrapper[4731]: I1129 07:07:47.924590 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:47 crc kubenswrapper[4731]: I1129 07:07:47.924612 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:47 crc kubenswrapper[4731]: I1129 07:07:47.924625 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:47Z","lastTransitionTime":"2025-11-29T07:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:48 crc kubenswrapper[4731]: I1129 07:07:48.027341 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:48 crc kubenswrapper[4731]: I1129 07:07:48.027412 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:48 crc kubenswrapper[4731]: I1129 07:07:48.027428 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:48 crc kubenswrapper[4731]: I1129 07:07:48.027452 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:48 crc kubenswrapper[4731]: I1129 07:07:48.027472 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:48Z","lastTransitionTime":"2025-11-29T07:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:48 crc kubenswrapper[4731]: I1129 07:07:48.130719 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:48 crc kubenswrapper[4731]: I1129 07:07:48.130772 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:48 crc kubenswrapper[4731]: I1129 07:07:48.130785 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:48 crc kubenswrapper[4731]: I1129 07:07:48.130806 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:48 crc kubenswrapper[4731]: I1129 07:07:48.130820 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:48Z","lastTransitionTime":"2025-11-29T07:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:48 crc kubenswrapper[4731]: I1129 07:07:48.233721 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:48 crc kubenswrapper[4731]: I1129 07:07:48.233792 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:48 crc kubenswrapper[4731]: I1129 07:07:48.233807 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:48 crc kubenswrapper[4731]: I1129 07:07:48.233832 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:48 crc kubenswrapper[4731]: I1129 07:07:48.233851 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:48Z","lastTransitionTime":"2025-11-29T07:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:48 crc kubenswrapper[4731]: I1129 07:07:48.337213 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:48 crc kubenswrapper[4731]: I1129 07:07:48.337284 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:48 crc kubenswrapper[4731]: I1129 07:07:48.337297 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:48 crc kubenswrapper[4731]: I1129 07:07:48.337319 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:48 crc kubenswrapper[4731]: I1129 07:07:48.337332 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:48Z","lastTransitionTime":"2025-11-29T07:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:48 crc kubenswrapper[4731]: I1129 07:07:48.440931 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:48 crc kubenswrapper[4731]: I1129 07:07:48.440990 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:48 crc kubenswrapper[4731]: I1129 07:07:48.441003 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:48 crc kubenswrapper[4731]: I1129 07:07:48.441021 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:48 crc kubenswrapper[4731]: I1129 07:07:48.441036 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:48Z","lastTransitionTime":"2025-11-29T07:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:48 crc kubenswrapper[4731]: I1129 07:07:48.545125 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:48 crc kubenswrapper[4731]: I1129 07:07:48.545175 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:48 crc kubenswrapper[4731]: I1129 07:07:48.545186 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:48 crc kubenswrapper[4731]: I1129 07:07:48.545208 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:48 crc kubenswrapper[4731]: I1129 07:07:48.545224 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:48Z","lastTransitionTime":"2025-11-29T07:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:48 crc kubenswrapper[4731]: I1129 07:07:48.648534 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:48 crc kubenswrapper[4731]: I1129 07:07:48.648611 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:48 crc kubenswrapper[4731]: I1129 07:07:48.648623 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:48 crc kubenswrapper[4731]: I1129 07:07:48.648638 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:48 crc kubenswrapper[4731]: I1129 07:07:48.648650 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:48Z","lastTransitionTime":"2025-11-29T07:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:48 crc kubenswrapper[4731]: I1129 07:07:48.752602 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:48 crc kubenswrapper[4731]: I1129 07:07:48.752670 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:48 crc kubenswrapper[4731]: I1129 07:07:48.752685 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:48 crc kubenswrapper[4731]: I1129 07:07:48.752708 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:48 crc kubenswrapper[4731]: I1129 07:07:48.752721 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:48Z","lastTransitionTime":"2025-11-29T07:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:07:48 crc kubenswrapper[4731]: I1129 07:07:48.806277 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:07:48 crc kubenswrapper[4731]: I1129 07:07:48.806308 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2pp9l" Nov 29 07:07:48 crc kubenswrapper[4731]: I1129 07:07:48.806493 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:07:48 crc kubenswrapper[4731]: E1129 07:07:48.806632 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:07:48 crc kubenswrapper[4731]: E1129 07:07:48.806750 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:07:48 crc kubenswrapper[4731]: E1129 07:07:48.806853 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2pp9l" podUID="944440c1-51b2-4c49-b5fd-4c024fc33ace" Nov 29 07:07:48 crc kubenswrapper[4731]: I1129 07:07:48.855058 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:48 crc kubenswrapper[4731]: I1129 07:07:48.855099 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:48 crc kubenswrapper[4731]: I1129 07:07:48.855108 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:48 crc kubenswrapper[4731]: I1129 07:07:48.855123 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:48 crc kubenswrapper[4731]: I1129 07:07:48.855132 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:48Z","lastTransitionTime":"2025-11-29T07:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:48 crc kubenswrapper[4731]: I1129 07:07:48.958736 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:48 crc kubenswrapper[4731]: I1129 07:07:48.958787 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:48 crc kubenswrapper[4731]: I1129 07:07:48.958802 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:48 crc kubenswrapper[4731]: I1129 07:07:48.958819 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:48 crc kubenswrapper[4731]: I1129 07:07:48.958834 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:48Z","lastTransitionTime":"2025-11-29T07:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:49 crc kubenswrapper[4731]: I1129 07:07:49.062523 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:49 crc kubenswrapper[4731]: I1129 07:07:49.062628 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:49 crc kubenswrapper[4731]: I1129 07:07:49.062641 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:49 crc kubenswrapper[4731]: I1129 07:07:49.062699 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:49 crc kubenswrapper[4731]: I1129 07:07:49.062715 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:49Z","lastTransitionTime":"2025-11-29T07:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:49 crc kubenswrapper[4731]: I1129 07:07:49.166049 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:49 crc kubenswrapper[4731]: I1129 07:07:49.166096 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:49 crc kubenswrapper[4731]: I1129 07:07:49.166106 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:49 crc kubenswrapper[4731]: I1129 07:07:49.166126 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:49 crc kubenswrapper[4731]: I1129 07:07:49.166138 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:49Z","lastTransitionTime":"2025-11-29T07:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:49 crc kubenswrapper[4731]: I1129 07:07:49.269631 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:49 crc kubenswrapper[4731]: I1129 07:07:49.269678 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:49 crc kubenswrapper[4731]: I1129 07:07:49.269687 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:49 crc kubenswrapper[4731]: I1129 07:07:49.269705 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:49 crc kubenswrapper[4731]: I1129 07:07:49.269716 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:49Z","lastTransitionTime":"2025-11-29T07:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:49 crc kubenswrapper[4731]: I1129 07:07:49.373070 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:49 crc kubenswrapper[4731]: I1129 07:07:49.373127 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:49 crc kubenswrapper[4731]: I1129 07:07:49.373138 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:49 crc kubenswrapper[4731]: I1129 07:07:49.373181 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:49 crc kubenswrapper[4731]: I1129 07:07:49.373194 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:49Z","lastTransitionTime":"2025-11-29T07:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:49 crc kubenswrapper[4731]: I1129 07:07:49.475303 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:49 crc kubenswrapper[4731]: I1129 07:07:49.475657 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:49 crc kubenswrapper[4731]: I1129 07:07:49.475740 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:49 crc kubenswrapper[4731]: I1129 07:07:49.475917 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:49 crc kubenswrapper[4731]: I1129 07:07:49.475979 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:49Z","lastTransitionTime":"2025-11-29T07:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:49 crc kubenswrapper[4731]: I1129 07:07:49.579189 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:49 crc kubenswrapper[4731]: I1129 07:07:49.579238 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:49 crc kubenswrapper[4731]: I1129 07:07:49.579249 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:49 crc kubenswrapper[4731]: I1129 07:07:49.579266 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:49 crc kubenswrapper[4731]: I1129 07:07:49.579280 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:49Z","lastTransitionTime":"2025-11-29T07:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:49 crc kubenswrapper[4731]: I1129 07:07:49.682943 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:49 crc kubenswrapper[4731]: I1129 07:07:49.683032 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:49 crc kubenswrapper[4731]: I1129 07:07:49.683045 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:49 crc kubenswrapper[4731]: I1129 07:07:49.683063 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:49 crc kubenswrapper[4731]: I1129 07:07:49.683076 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:49Z","lastTransitionTime":"2025-11-29T07:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:49 crc kubenswrapper[4731]: I1129 07:07:49.786127 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:49 crc kubenswrapper[4731]: I1129 07:07:49.786181 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:49 crc kubenswrapper[4731]: I1129 07:07:49.786193 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:49 crc kubenswrapper[4731]: I1129 07:07:49.786216 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:49 crc kubenswrapper[4731]: I1129 07:07:49.786236 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:49Z","lastTransitionTime":"2025-11-29T07:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:07:49 crc kubenswrapper[4731]: I1129 07:07:49.805951 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:07:49 crc kubenswrapper[4731]: E1129 07:07:49.806144 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:07:49 crc kubenswrapper[4731]: I1129 07:07:49.889664 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:49 crc kubenswrapper[4731]: I1129 07:07:49.889710 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:49 crc kubenswrapper[4731]: I1129 07:07:49.889723 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:49 crc kubenswrapper[4731]: I1129 07:07:49.889743 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:49 crc kubenswrapper[4731]: I1129 07:07:49.889760 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:49Z","lastTransitionTime":"2025-11-29T07:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:49 crc kubenswrapper[4731]: I1129 07:07:49.992604 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:49 crc kubenswrapper[4731]: I1129 07:07:49.992665 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:49 crc kubenswrapper[4731]: I1129 07:07:49.992678 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:49 crc kubenswrapper[4731]: I1129 07:07:49.992700 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:49 crc kubenswrapper[4731]: I1129 07:07:49.992718 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:49Z","lastTransitionTime":"2025-11-29T07:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:50 crc kubenswrapper[4731]: I1129 07:07:50.095982 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:50 crc kubenswrapper[4731]: I1129 07:07:50.096058 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:50 crc kubenswrapper[4731]: I1129 07:07:50.096082 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:50 crc kubenswrapper[4731]: I1129 07:07:50.096112 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:50 crc kubenswrapper[4731]: I1129 07:07:50.096131 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:50Z","lastTransitionTime":"2025-11-29T07:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:50 crc kubenswrapper[4731]: I1129 07:07:50.198294 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:50 crc kubenswrapper[4731]: I1129 07:07:50.198333 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:50 crc kubenswrapper[4731]: I1129 07:07:50.198342 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:50 crc kubenswrapper[4731]: I1129 07:07:50.198359 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:50 crc kubenswrapper[4731]: I1129 07:07:50.198370 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:50Z","lastTransitionTime":"2025-11-29T07:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:50 crc kubenswrapper[4731]: I1129 07:07:50.301291 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:50 crc kubenswrapper[4731]: I1129 07:07:50.301335 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:50 crc kubenswrapper[4731]: I1129 07:07:50.301345 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:50 crc kubenswrapper[4731]: I1129 07:07:50.301360 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:50 crc kubenswrapper[4731]: I1129 07:07:50.301371 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:50Z","lastTransitionTime":"2025-11-29T07:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:50 crc kubenswrapper[4731]: I1129 07:07:50.404195 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:50 crc kubenswrapper[4731]: I1129 07:07:50.404241 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:50 crc kubenswrapper[4731]: I1129 07:07:50.404252 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:50 crc kubenswrapper[4731]: I1129 07:07:50.404275 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:50 crc kubenswrapper[4731]: I1129 07:07:50.404288 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:50Z","lastTransitionTime":"2025-11-29T07:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:50 crc kubenswrapper[4731]: I1129 07:07:50.507547 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:50 crc kubenswrapper[4731]: I1129 07:07:50.507617 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:50 crc kubenswrapper[4731]: I1129 07:07:50.507629 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:50 crc kubenswrapper[4731]: I1129 07:07:50.507646 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:50 crc kubenswrapper[4731]: I1129 07:07:50.507657 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:50Z","lastTransitionTime":"2025-11-29T07:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:50 crc kubenswrapper[4731]: I1129 07:07:50.610378 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:50 crc kubenswrapper[4731]: I1129 07:07:50.610447 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:50 crc kubenswrapper[4731]: I1129 07:07:50.610461 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:50 crc kubenswrapper[4731]: I1129 07:07:50.610490 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:50 crc kubenswrapper[4731]: I1129 07:07:50.610503 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:50Z","lastTransitionTime":"2025-11-29T07:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:50 crc kubenswrapper[4731]: I1129 07:07:50.713810 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:50 crc kubenswrapper[4731]: I1129 07:07:50.713916 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:50 crc kubenswrapper[4731]: I1129 07:07:50.713940 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:50 crc kubenswrapper[4731]: I1129 07:07:50.713979 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:50 crc kubenswrapper[4731]: I1129 07:07:50.713997 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:50Z","lastTransitionTime":"2025-11-29T07:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:07:50 crc kubenswrapper[4731]: I1129 07:07:50.806199 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:07:50 crc kubenswrapper[4731]: I1129 07:07:50.806202 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:07:50 crc kubenswrapper[4731]: I1129 07:07:50.806199 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2pp9l" Nov 29 07:07:50 crc kubenswrapper[4731]: E1129 07:07:50.806496 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:07:50 crc kubenswrapper[4731]: E1129 07:07:50.806351 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:07:50 crc kubenswrapper[4731]: E1129 07:07:50.806558 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2pp9l" podUID="944440c1-51b2-4c49-b5fd-4c024fc33ace" Nov 29 07:07:50 crc kubenswrapper[4731]: I1129 07:07:50.819118 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:50 crc kubenswrapper[4731]: I1129 07:07:50.819201 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:50 crc kubenswrapper[4731]: I1129 07:07:50.819213 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:50 crc kubenswrapper[4731]: I1129 07:07:50.819232 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:50 crc kubenswrapper[4731]: I1129 07:07:50.819244 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:50Z","lastTransitionTime":"2025-11-29T07:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:50 crc kubenswrapper[4731]: I1129 07:07:50.921927 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:50 crc kubenswrapper[4731]: I1129 07:07:50.922016 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:50 crc kubenswrapper[4731]: I1129 07:07:50.922027 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:50 crc kubenswrapper[4731]: I1129 07:07:50.922045 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:50 crc kubenswrapper[4731]: I1129 07:07:50.922056 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:50Z","lastTransitionTime":"2025-11-29T07:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:51 crc kubenswrapper[4731]: I1129 07:07:51.024824 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:51 crc kubenswrapper[4731]: I1129 07:07:51.024885 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:51 crc kubenswrapper[4731]: I1129 07:07:51.024903 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:51 crc kubenswrapper[4731]: I1129 07:07:51.024926 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:51 crc kubenswrapper[4731]: I1129 07:07:51.024943 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:51Z","lastTransitionTime":"2025-11-29T07:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:51 crc kubenswrapper[4731]: I1129 07:07:51.128643 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:51 crc kubenswrapper[4731]: I1129 07:07:51.128704 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:51 crc kubenswrapper[4731]: I1129 07:07:51.128719 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:51 crc kubenswrapper[4731]: I1129 07:07:51.128739 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:51 crc kubenswrapper[4731]: I1129 07:07:51.128752 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:51Z","lastTransitionTime":"2025-11-29T07:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:51 crc kubenswrapper[4731]: I1129 07:07:51.231378 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:51 crc kubenswrapper[4731]: I1129 07:07:51.231541 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:51 crc kubenswrapper[4731]: I1129 07:07:51.231560 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:51 crc kubenswrapper[4731]: I1129 07:07:51.231620 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:51 crc kubenswrapper[4731]: I1129 07:07:51.231636 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:51Z","lastTransitionTime":"2025-11-29T07:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:51 crc kubenswrapper[4731]: I1129 07:07:51.334678 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:51 crc kubenswrapper[4731]: I1129 07:07:51.334748 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:51 crc kubenswrapper[4731]: I1129 07:07:51.334763 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:51 crc kubenswrapper[4731]: I1129 07:07:51.334788 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:51 crc kubenswrapper[4731]: I1129 07:07:51.334805 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:51Z","lastTransitionTime":"2025-11-29T07:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:51 crc kubenswrapper[4731]: I1129 07:07:51.437396 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:51 crc kubenswrapper[4731]: I1129 07:07:51.437760 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:51 crc kubenswrapper[4731]: I1129 07:07:51.437899 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:51 crc kubenswrapper[4731]: I1129 07:07:51.438007 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:51 crc kubenswrapper[4731]: I1129 07:07:51.438080 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:51Z","lastTransitionTime":"2025-11-29T07:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:51 crc kubenswrapper[4731]: I1129 07:07:51.540793 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:51 crc kubenswrapper[4731]: I1129 07:07:51.540834 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:51 crc kubenswrapper[4731]: I1129 07:07:51.540842 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:51 crc kubenswrapper[4731]: I1129 07:07:51.540859 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:51 crc kubenswrapper[4731]: I1129 07:07:51.540870 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:51Z","lastTransitionTime":"2025-11-29T07:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:51 crc kubenswrapper[4731]: I1129 07:07:51.644164 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:51 crc kubenswrapper[4731]: I1129 07:07:51.644616 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:51 crc kubenswrapper[4731]: I1129 07:07:51.644737 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:51 crc kubenswrapper[4731]: I1129 07:07:51.644837 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:51 crc kubenswrapper[4731]: I1129 07:07:51.644977 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:51Z","lastTransitionTime":"2025-11-29T07:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:51 crc kubenswrapper[4731]: I1129 07:07:51.748126 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:51 crc kubenswrapper[4731]: I1129 07:07:51.748914 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:51 crc kubenswrapper[4731]: I1129 07:07:51.748996 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:51 crc kubenswrapper[4731]: I1129 07:07:51.749088 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:51 crc kubenswrapper[4731]: I1129 07:07:51.749178 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:51Z","lastTransitionTime":"2025-11-29T07:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:07:51 crc kubenswrapper[4731]: I1129 07:07:51.806845 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:07:51 crc kubenswrapper[4731]: E1129 07:07:51.806994 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:07:51 crc kubenswrapper[4731]: I1129 07:07:51.838280 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=18.838259884 podStartE2EDuration="18.838259884s" podCreationTimestamp="2025-11-29 07:07:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:07:51.837381779 +0000 UTC m=+110.727742882" watchObservedRunningTime="2025-11-29 07:07:51.838259884 +0000 UTC m=+110.728620997" Nov 29 07:07:51 crc kubenswrapper[4731]: I1129 07:07:51.851958 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:51 crc kubenswrapper[4731]: I1129 07:07:51.852005 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:51 crc kubenswrapper[4731]: I1129 07:07:51.852016 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:51 crc kubenswrapper[4731]: I1129 07:07:51.852034 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:51 crc kubenswrapper[4731]: I1129 07:07:51.852047 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:51Z","lastTransitionTime":"2025-11-29T07:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:51 crc kubenswrapper[4731]: I1129 07:07:51.893093 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=85.893064943 podStartE2EDuration="1m25.893064943s" podCreationTimestamp="2025-11-29 07:06:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:07:51.869458076 +0000 UTC m=+110.759819179" watchObservedRunningTime="2025-11-29 07:07:51.893064943 +0000 UTC m=+110.783426046" Nov 29 07:07:51 crc kubenswrapper[4731]: I1129 07:07:51.893299 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=56.89329534 podStartE2EDuration="56.89329534s" podCreationTimestamp="2025-11-29 07:06:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:07:51.892644632 +0000 UTC m=+110.783005755" watchObservedRunningTime="2025-11-29 07:07:51.89329534 +0000 UTC m=+110.783656443" Nov 29 07:07:51 crc kubenswrapper[4731]: I1129 07:07:51.945420 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podStartSLOduration=81.945392313 podStartE2EDuration="1m21.945392313s" podCreationTimestamp="2025-11-29 07:06:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:07:51.914417357 +0000 UTC m=+110.804778460" watchObservedRunningTime="2025-11-29 07:07:51.945392313 +0000 UTC m=+110.835753416" Nov 29 07:07:51 crc kubenswrapper[4731]: I1129 07:07:51.954202 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:51 crc 
kubenswrapper[4731]: I1129 07:07:51.954260 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:51 crc kubenswrapper[4731]: I1129 07:07:51.954276 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:51 crc kubenswrapper[4731]: I1129 07:07:51.954295 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:51 crc kubenswrapper[4731]: I1129 07:07:51.954306 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:51Z","lastTransitionTime":"2025-11-29T07:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:07:51 crc kubenswrapper[4731]: I1129 07:07:51.964813 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-8tvx8" podStartSLOduration=81.964797302 podStartE2EDuration="1m21.964797302s" podCreationTimestamp="2025-11-29 07:06:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:07:51.964147273 +0000 UTC m=+110.854508376" watchObservedRunningTime="2025-11-29 07:07:51.964797302 +0000 UTC m=+110.855158395" Nov 29 07:07:51 crc kubenswrapper[4731]: I1129 07:07:51.998984 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7d5hj" podStartSLOduration=79.998964018 podStartE2EDuration="1m19.998964018s" podCreationTimestamp="2025-11-29 07:06:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-11-29 07:07:51.980487596 +0000 UTC m=+110.870848719" watchObservedRunningTime="2025-11-29 07:07:51.998964018 +0000 UTC m=+110.889325121" Nov 29 07:07:52 crc kubenswrapper[4731]: I1129 07:07:52.056891 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:52 crc kubenswrapper[4731]: I1129 07:07:52.056950 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:52 crc kubenswrapper[4731]: I1129 07:07:52.056964 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:52 crc kubenswrapper[4731]: I1129 07:07:52.056999 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:52 crc kubenswrapper[4731]: I1129 07:07:52.057012 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:52Z","lastTransitionTime":"2025-11-29T07:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:52 crc kubenswrapper[4731]: I1129 07:07:52.068330 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=87.068309079 podStartE2EDuration="1m27.068309079s" podCreationTimestamp="2025-11-29 07:06:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:07:52.068040251 +0000 UTC m=+110.958401354" watchObservedRunningTime="2025-11-29 07:07:52.068309079 +0000 UTC m=+110.958670182" Nov 29 07:07:52 crc kubenswrapper[4731]: I1129 07:07:52.104994 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-n6mtz" podStartSLOduration=82.104967696 podStartE2EDuration="1m22.104967696s" podCreationTimestamp="2025-11-29 07:06:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:07:52.104783541 +0000 UTC m=+110.995144644" watchObservedRunningTime="2025-11-29 07:07:52.104967696 +0000 UTC m=+110.995328799" Nov 29 07:07:52 crc kubenswrapper[4731]: I1129 07:07:52.122034 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-5rsbt" podStartSLOduration=82.122013088 podStartE2EDuration="1m22.122013088s" podCreationTimestamp="2025-11-29 07:06:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:07:52.121138593 +0000 UTC m=+111.011499696" watchObservedRunningTime="2025-11-29 07:07:52.122013088 +0000 UTC m=+111.012374191" Nov 29 07:07:52 crc kubenswrapper[4731]: I1129 07:07:52.159362 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-7sc4p" podStartSLOduration=82.159340673 
podStartE2EDuration="1m22.159340673s" podCreationTimestamp="2025-11-29 07:06:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:07:52.14400908 +0000 UTC m=+111.034370193" watchObservedRunningTime="2025-11-29 07:07:52.159340673 +0000 UTC m=+111.049701786" Nov 29 07:07:52 crc kubenswrapper[4731]: I1129 07:07:52.160212 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:52 crc kubenswrapper[4731]: I1129 07:07:52.160278 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:52 crc kubenswrapper[4731]: I1129 07:07:52.160291 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:52 crc kubenswrapper[4731]: I1129 07:07:52.160311 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:52 crc kubenswrapper[4731]: I1129 07:07:52.160323 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:52Z","lastTransitionTime":"2025-11-29T07:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:52 crc kubenswrapper[4731]: I1129 07:07:52.178766 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=44.178736822 podStartE2EDuration="44.178736822s" podCreationTimestamp="2025-11-29 07:07:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:07:52.178134815 +0000 UTC m=+111.068495918" watchObservedRunningTime="2025-11-29 07:07:52.178736822 +0000 UTC m=+111.069097915" Nov 29 07:07:52 crc kubenswrapper[4731]: I1129 07:07:52.263418 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:52 crc kubenswrapper[4731]: I1129 07:07:52.263483 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:52 crc kubenswrapper[4731]: I1129 07:07:52.263500 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:52 crc kubenswrapper[4731]: I1129 07:07:52.263526 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:52 crc kubenswrapper[4731]: I1129 07:07:52.263540 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:52Z","lastTransitionTime":"2025-11-29T07:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:52 crc kubenswrapper[4731]: I1129 07:07:52.366534 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:52 crc kubenswrapper[4731]: I1129 07:07:52.366608 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:52 crc kubenswrapper[4731]: I1129 07:07:52.366624 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:52 crc kubenswrapper[4731]: I1129 07:07:52.366643 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:52 crc kubenswrapper[4731]: I1129 07:07:52.366658 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:52Z","lastTransitionTime":"2025-11-29T07:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:52 crc kubenswrapper[4731]: I1129 07:07:52.469042 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:52 crc kubenswrapper[4731]: I1129 07:07:52.469093 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:52 crc kubenswrapper[4731]: I1129 07:07:52.469104 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:52 crc kubenswrapper[4731]: I1129 07:07:52.469125 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:52 crc kubenswrapper[4731]: I1129 07:07:52.469140 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:52Z","lastTransitionTime":"2025-11-29T07:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:52 crc kubenswrapper[4731]: I1129 07:07:52.572681 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:52 crc kubenswrapper[4731]: I1129 07:07:52.572739 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:52 crc kubenswrapper[4731]: I1129 07:07:52.572750 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:52 crc kubenswrapper[4731]: I1129 07:07:52.572770 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:52 crc kubenswrapper[4731]: I1129 07:07:52.572790 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:52Z","lastTransitionTime":"2025-11-29T07:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:52 crc kubenswrapper[4731]: I1129 07:07:52.676376 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:52 crc kubenswrapper[4731]: I1129 07:07:52.676451 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:52 crc kubenswrapper[4731]: I1129 07:07:52.676464 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:52 crc kubenswrapper[4731]: I1129 07:07:52.676484 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:52 crc kubenswrapper[4731]: I1129 07:07:52.676498 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:52Z","lastTransitionTime":"2025-11-29T07:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:52 crc kubenswrapper[4731]: I1129 07:07:52.733448 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/944440c1-51b2-4c49-b5fd-4c024fc33ace-metrics-certs\") pod \"network-metrics-daemon-2pp9l\" (UID: \"944440c1-51b2-4c49-b5fd-4c024fc33ace\") " pod="openshift-multus/network-metrics-daemon-2pp9l" Nov 29 07:07:52 crc kubenswrapper[4731]: E1129 07:07:52.733826 4731 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 29 07:07:52 crc kubenswrapper[4731]: E1129 07:07:52.733997 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/944440c1-51b2-4c49-b5fd-4c024fc33ace-metrics-certs podName:944440c1-51b2-4c49-b5fd-4c024fc33ace nodeName:}" failed. No retries permitted until 2025-11-29 07:08:56.733962214 +0000 UTC m=+175.624323517 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/944440c1-51b2-4c49-b5fd-4c024fc33ace-metrics-certs") pod "network-metrics-daemon-2pp9l" (UID: "944440c1-51b2-4c49-b5fd-4c024fc33ace") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 29 07:07:52 crc kubenswrapper[4731]: I1129 07:07:52.779947 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:52 crc kubenswrapper[4731]: I1129 07:07:52.780035 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:52 crc kubenswrapper[4731]: I1129 07:07:52.780062 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:52 crc kubenswrapper[4731]: I1129 07:07:52.780097 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:52 crc kubenswrapper[4731]: I1129 07:07:52.780125 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:52Z","lastTransitionTime":"2025-11-29T07:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:07:52 crc kubenswrapper[4731]: I1129 07:07:52.806106 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2pp9l" Nov 29 07:07:52 crc kubenswrapper[4731]: I1129 07:07:52.806170 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:07:52 crc kubenswrapper[4731]: I1129 07:07:52.806256 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:07:52 crc kubenswrapper[4731]: E1129 07:07:52.806289 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2pp9l" podUID="944440c1-51b2-4c49-b5fd-4c024fc33ace" Nov 29 07:07:52 crc kubenswrapper[4731]: E1129 07:07:52.806459 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:07:52 crc kubenswrapper[4731]: E1129 07:07:52.806551 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:07:52 crc kubenswrapper[4731]: I1129 07:07:52.883655 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:52 crc kubenswrapper[4731]: I1129 07:07:52.883705 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:52 crc kubenswrapper[4731]: I1129 07:07:52.883718 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:52 crc kubenswrapper[4731]: I1129 07:07:52.883739 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:52 crc kubenswrapper[4731]: I1129 07:07:52.883753 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:52Z","lastTransitionTime":"2025-11-29T07:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:52 crc kubenswrapper[4731]: I1129 07:07:52.986972 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:52 crc kubenswrapper[4731]: I1129 07:07:52.987027 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:52 crc kubenswrapper[4731]: I1129 07:07:52.987038 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:52 crc kubenswrapper[4731]: I1129 07:07:52.987052 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:52 crc kubenswrapper[4731]: I1129 07:07:52.987096 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:52Z","lastTransitionTime":"2025-11-29T07:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:53 crc kubenswrapper[4731]: I1129 07:07:53.090063 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:53 crc kubenswrapper[4731]: I1129 07:07:53.090107 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:53 crc kubenswrapper[4731]: I1129 07:07:53.090117 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:53 crc kubenswrapper[4731]: I1129 07:07:53.090131 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:53 crc kubenswrapper[4731]: I1129 07:07:53.090141 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:53Z","lastTransitionTime":"2025-11-29T07:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:53 crc kubenswrapper[4731]: I1129 07:07:53.192968 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:53 crc kubenswrapper[4731]: I1129 07:07:53.193011 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:53 crc kubenswrapper[4731]: I1129 07:07:53.193020 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:53 crc kubenswrapper[4731]: I1129 07:07:53.193040 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:53 crc kubenswrapper[4731]: I1129 07:07:53.193053 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:53Z","lastTransitionTime":"2025-11-29T07:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:53 crc kubenswrapper[4731]: I1129 07:07:53.295061 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:53 crc kubenswrapper[4731]: I1129 07:07:53.295098 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:53 crc kubenswrapper[4731]: I1129 07:07:53.295107 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:53 crc kubenswrapper[4731]: I1129 07:07:53.295122 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:53 crc kubenswrapper[4731]: I1129 07:07:53.295132 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:53Z","lastTransitionTime":"2025-11-29T07:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:53 crc kubenswrapper[4731]: I1129 07:07:53.397973 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:53 crc kubenswrapper[4731]: I1129 07:07:53.398029 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:53 crc kubenswrapper[4731]: I1129 07:07:53.398042 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:53 crc kubenswrapper[4731]: I1129 07:07:53.398061 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:53 crc kubenswrapper[4731]: I1129 07:07:53.398072 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:53Z","lastTransitionTime":"2025-11-29T07:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:53 crc kubenswrapper[4731]: I1129 07:07:53.501223 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:53 crc kubenswrapper[4731]: I1129 07:07:53.501300 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:53 crc kubenswrapper[4731]: I1129 07:07:53.501314 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:53 crc kubenswrapper[4731]: I1129 07:07:53.501339 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:53 crc kubenswrapper[4731]: I1129 07:07:53.501354 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:53Z","lastTransitionTime":"2025-11-29T07:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:53 crc kubenswrapper[4731]: I1129 07:07:53.605032 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:53 crc kubenswrapper[4731]: I1129 07:07:53.605102 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:53 crc kubenswrapper[4731]: I1129 07:07:53.605116 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:53 crc kubenswrapper[4731]: I1129 07:07:53.605138 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:53 crc kubenswrapper[4731]: I1129 07:07:53.605151 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:53Z","lastTransitionTime":"2025-11-29T07:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:53 crc kubenswrapper[4731]: I1129 07:07:53.708527 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:53 crc kubenswrapper[4731]: I1129 07:07:53.708610 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:53 crc kubenswrapper[4731]: I1129 07:07:53.708631 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:53 crc kubenswrapper[4731]: I1129 07:07:53.708651 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:53 crc kubenswrapper[4731]: I1129 07:07:53.708662 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:53Z","lastTransitionTime":"2025-11-29T07:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:07:53 crc kubenswrapper[4731]: I1129 07:07:53.806766 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:07:53 crc kubenswrapper[4731]: E1129 07:07:53.807771 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:07:53 crc kubenswrapper[4731]: I1129 07:07:53.811855 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:53 crc kubenswrapper[4731]: I1129 07:07:53.811898 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:53 crc kubenswrapper[4731]: I1129 07:07:53.811909 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:53 crc kubenswrapper[4731]: I1129 07:07:53.811929 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:53 crc kubenswrapper[4731]: I1129 07:07:53.811941 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:53Z","lastTransitionTime":"2025-11-29T07:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:53 crc kubenswrapper[4731]: I1129 07:07:53.914654 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:53 crc kubenswrapper[4731]: I1129 07:07:53.914718 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:53 crc kubenswrapper[4731]: I1129 07:07:53.914731 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:53 crc kubenswrapper[4731]: I1129 07:07:53.914750 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:53 crc kubenswrapper[4731]: I1129 07:07:53.914765 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:53Z","lastTransitionTime":"2025-11-29T07:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:54 crc kubenswrapper[4731]: I1129 07:07:54.017632 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:54 crc kubenswrapper[4731]: I1129 07:07:54.017690 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:54 crc kubenswrapper[4731]: I1129 07:07:54.017704 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:54 crc kubenswrapper[4731]: I1129 07:07:54.017725 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:54 crc kubenswrapper[4731]: I1129 07:07:54.017739 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:54Z","lastTransitionTime":"2025-11-29T07:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:54 crc kubenswrapper[4731]: I1129 07:07:54.121027 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:54 crc kubenswrapper[4731]: I1129 07:07:54.121420 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:54 crc kubenswrapper[4731]: I1129 07:07:54.121477 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:54 crc kubenswrapper[4731]: I1129 07:07:54.121506 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:54 crc kubenswrapper[4731]: I1129 07:07:54.121518 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:54Z","lastTransitionTime":"2025-11-29T07:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:54 crc kubenswrapper[4731]: I1129 07:07:54.224950 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:54 crc kubenswrapper[4731]: I1129 07:07:54.225000 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:54 crc kubenswrapper[4731]: I1129 07:07:54.225015 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:54 crc kubenswrapper[4731]: I1129 07:07:54.225034 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:54 crc kubenswrapper[4731]: I1129 07:07:54.225051 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:54Z","lastTransitionTime":"2025-11-29T07:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:54 crc kubenswrapper[4731]: I1129 07:07:54.328383 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:54 crc kubenswrapper[4731]: I1129 07:07:54.328440 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:54 crc kubenswrapper[4731]: I1129 07:07:54.328452 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:54 crc kubenswrapper[4731]: I1129 07:07:54.328475 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:54 crc kubenswrapper[4731]: I1129 07:07:54.328492 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:54Z","lastTransitionTime":"2025-11-29T07:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:54 crc kubenswrapper[4731]: I1129 07:07:54.431774 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:54 crc kubenswrapper[4731]: I1129 07:07:54.431863 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:54 crc kubenswrapper[4731]: I1129 07:07:54.431876 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:54 crc kubenswrapper[4731]: I1129 07:07:54.431896 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:54 crc kubenswrapper[4731]: I1129 07:07:54.431913 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:54Z","lastTransitionTime":"2025-11-29T07:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:54 crc kubenswrapper[4731]: I1129 07:07:54.534916 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:54 crc kubenswrapper[4731]: I1129 07:07:54.534983 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:54 crc kubenswrapper[4731]: I1129 07:07:54.535000 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:54 crc kubenswrapper[4731]: I1129 07:07:54.535029 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:54 crc kubenswrapper[4731]: I1129 07:07:54.535047 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:54Z","lastTransitionTime":"2025-11-29T07:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:07:54 crc kubenswrapper[4731]: I1129 07:07:54.567245 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:07:54 crc kubenswrapper[4731]: I1129 07:07:54.567307 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:07:54 crc kubenswrapper[4731]: I1129 07:07:54.567324 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:07:54 crc kubenswrapper[4731]: I1129 07:07:54.567345 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:07:54 crc kubenswrapper[4731]: I1129 07:07:54.567360 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:07:54Z","lastTransitionTime":"2025-11-29T07:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:07:54 crc kubenswrapper[4731]: I1129 07:07:54.618117 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-sp9vw"] Nov 29 07:07:54 crc kubenswrapper[4731]: I1129 07:07:54.618627 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sp9vw" Nov 29 07:07:54 crc kubenswrapper[4731]: I1129 07:07:54.620928 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Nov 29 07:07:54 crc kubenswrapper[4731]: I1129 07:07:54.621320 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Nov 29 07:07:54 crc kubenswrapper[4731]: I1129 07:07:54.621630 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Nov 29 07:07:54 crc kubenswrapper[4731]: I1129 07:07:54.624086 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Nov 29 07:07:54 crc kubenswrapper[4731]: I1129 07:07:54.779089 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d9876d13-ce78-402b-b4fd-8efe1c29dbe8-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-sp9vw\" (UID: \"d9876d13-ce78-402b-b4fd-8efe1c29dbe8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sp9vw" Nov 29 07:07:54 crc kubenswrapper[4731]: I1129 07:07:54.779150 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d9876d13-ce78-402b-b4fd-8efe1c29dbe8-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-sp9vw\" (UID: \"d9876d13-ce78-402b-b4fd-8efe1c29dbe8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sp9vw" Nov 29 07:07:54 crc kubenswrapper[4731]: I1129 07:07:54.779226 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: 
\"kubernetes.io/host-path/d9876d13-ce78-402b-b4fd-8efe1c29dbe8-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-sp9vw\" (UID: \"d9876d13-ce78-402b-b4fd-8efe1c29dbe8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sp9vw" Nov 29 07:07:54 crc kubenswrapper[4731]: I1129 07:07:54.779258 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d9876d13-ce78-402b-b4fd-8efe1c29dbe8-service-ca\") pod \"cluster-version-operator-5c965bbfc6-sp9vw\" (UID: \"d9876d13-ce78-402b-b4fd-8efe1c29dbe8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sp9vw" Nov 29 07:07:54 crc kubenswrapper[4731]: I1129 07:07:54.779290 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/d9876d13-ce78-402b-b4fd-8efe1c29dbe8-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-sp9vw\" (UID: \"d9876d13-ce78-402b-b4fd-8efe1c29dbe8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sp9vw" Nov 29 07:07:54 crc kubenswrapper[4731]: I1129 07:07:54.806374 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:07:54 crc kubenswrapper[4731]: I1129 07:07:54.806449 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2pp9l" Nov 29 07:07:54 crc kubenswrapper[4731]: E1129 07:07:54.806518 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:07:54 crc kubenswrapper[4731]: E1129 07:07:54.806590 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2pp9l" podUID="944440c1-51b2-4c49-b5fd-4c024fc33ace" Nov 29 07:07:54 crc kubenswrapper[4731]: I1129 07:07:54.807004 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:07:54 crc kubenswrapper[4731]: E1129 07:07:54.807239 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:07:54 crc kubenswrapper[4731]: I1129 07:07:54.880148 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/d9876d13-ce78-402b-b4fd-8efe1c29dbe8-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-sp9vw\" (UID: \"d9876d13-ce78-402b-b4fd-8efe1c29dbe8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sp9vw" Nov 29 07:07:54 crc kubenswrapper[4731]: I1129 07:07:54.880223 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d9876d13-ce78-402b-b4fd-8efe1c29dbe8-service-ca\") pod \"cluster-version-operator-5c965bbfc6-sp9vw\" (UID: \"d9876d13-ce78-402b-b4fd-8efe1c29dbe8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sp9vw" Nov 29 07:07:54 crc kubenswrapper[4731]: I1129 07:07:54.880266 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/d9876d13-ce78-402b-b4fd-8efe1c29dbe8-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-sp9vw\" (UID: \"d9876d13-ce78-402b-b4fd-8efe1c29dbe8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sp9vw" Nov 29 07:07:54 crc kubenswrapper[4731]: I1129 07:07:54.880276 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/d9876d13-ce78-402b-b4fd-8efe1c29dbe8-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-sp9vw\" (UID: \"d9876d13-ce78-402b-b4fd-8efe1c29dbe8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sp9vw" Nov 29 07:07:54 crc kubenswrapper[4731]: I1129 07:07:54.880316 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/d9876d13-ce78-402b-b4fd-8efe1c29dbe8-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-sp9vw\" (UID: \"d9876d13-ce78-402b-b4fd-8efe1c29dbe8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sp9vw" Nov 29 07:07:54 crc kubenswrapper[4731]: I1129 07:07:54.880327 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d9876d13-ce78-402b-b4fd-8efe1c29dbe8-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-sp9vw\" (UID: \"d9876d13-ce78-402b-b4fd-8efe1c29dbe8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sp9vw" Nov 29 07:07:54 crc kubenswrapper[4731]: I1129 07:07:54.880361 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d9876d13-ce78-402b-b4fd-8efe1c29dbe8-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-sp9vw\" (UID: \"d9876d13-ce78-402b-b4fd-8efe1c29dbe8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sp9vw" Nov 29 07:07:54 crc kubenswrapper[4731]: I1129 07:07:54.881248 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d9876d13-ce78-402b-b4fd-8efe1c29dbe8-service-ca\") pod \"cluster-version-operator-5c965bbfc6-sp9vw\" (UID: \"d9876d13-ce78-402b-b4fd-8efe1c29dbe8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sp9vw" Nov 29 07:07:54 crc kubenswrapper[4731]: I1129 07:07:54.888786 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d9876d13-ce78-402b-b4fd-8efe1c29dbe8-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-sp9vw\" (UID: \"d9876d13-ce78-402b-b4fd-8efe1c29dbe8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sp9vw" Nov 29 
07:07:54 crc kubenswrapper[4731]: I1129 07:07:54.901468 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d9876d13-ce78-402b-b4fd-8efe1c29dbe8-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-sp9vw\" (UID: \"d9876d13-ce78-402b-b4fd-8efe1c29dbe8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sp9vw" Nov 29 07:07:54 crc kubenswrapper[4731]: I1129 07:07:54.978927 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sp9vw" Nov 29 07:07:55 crc kubenswrapper[4731]: W1129 07:07:55.005431 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd9876d13_ce78_402b_b4fd_8efe1c29dbe8.slice/crio-7c0a083367f5f739e7f245e204d564539c7156ec1ffdb89224f064183d5fe293 WatchSource:0}: Error finding container 7c0a083367f5f739e7f245e204d564539c7156ec1ffdb89224f064183d5fe293: Status 404 returned error can't find the container with id 7c0a083367f5f739e7f245e204d564539c7156ec1ffdb89224f064183d5fe293 Nov 29 07:07:55 crc kubenswrapper[4731]: I1129 07:07:55.477811 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sp9vw" event={"ID":"d9876d13-ce78-402b-b4fd-8efe1c29dbe8","Type":"ContainerStarted","Data":"fbc62d7052dd858e0c95f07484a75d463655b00bd5066988f9505e69db91ecfc"} Nov 29 07:07:55 crc kubenswrapper[4731]: I1129 07:07:55.477878 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sp9vw" event={"ID":"d9876d13-ce78-402b-b4fd-8efe1c29dbe8","Type":"ContainerStarted","Data":"7c0a083367f5f739e7f245e204d564539c7156ec1ffdb89224f064183d5fe293"} Nov 29 07:07:55 crc kubenswrapper[4731]: I1129 07:07:55.806889 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:07:55 crc kubenswrapper[4731]: E1129 07:07:55.807070 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:07:56 crc kubenswrapper[4731]: I1129 07:07:56.805752 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:07:56 crc kubenswrapper[4731]: I1129 07:07:56.805794 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:07:56 crc kubenswrapper[4731]: I1129 07:07:56.805774 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2pp9l" Nov 29 07:07:56 crc kubenswrapper[4731]: E1129 07:07:56.806003 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:07:56 crc kubenswrapper[4731]: E1129 07:07:56.806081 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:07:56 crc kubenswrapper[4731]: E1129 07:07:56.806183 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2pp9l" podUID="944440c1-51b2-4c49-b5fd-4c024fc33ace" Nov 29 07:07:56 crc kubenswrapper[4731]: I1129 07:07:56.806808 4731 scope.go:117] "RemoveContainer" containerID="90ceaa53a40826a30748a2057fff56bcb3597cf73ed5e0a18bc45d98e0e6c1b6" Nov 29 07:07:56 crc kubenswrapper[4731]: E1129 07:07:56.806954 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-x4t5j_openshift-ovn-kubernetes(7d4585c4-ac4a-4268-b25e-47509c17cfe2)\"" pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" podUID="7d4585c4-ac4a-4268-b25e-47509c17cfe2" Nov 29 07:07:57 crc kubenswrapper[4731]: I1129 07:07:57.806416 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:07:57 crc kubenswrapper[4731]: E1129 07:07:57.806729 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:07:58 crc kubenswrapper[4731]: I1129 07:07:58.806607 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2pp9l" Nov 29 07:07:58 crc kubenswrapper[4731]: I1129 07:07:58.806610 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:07:58 crc kubenswrapper[4731]: I1129 07:07:58.806754 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:07:58 crc kubenswrapper[4731]: E1129 07:07:58.807014 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2pp9l" podUID="944440c1-51b2-4c49-b5fd-4c024fc33ace" Nov 29 07:07:58 crc kubenswrapper[4731]: E1129 07:07:58.807411 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:07:58 crc kubenswrapper[4731]: E1129 07:07:58.807136 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:07:59 crc kubenswrapper[4731]: I1129 07:07:59.806987 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:07:59 crc kubenswrapper[4731]: E1129 07:07:59.807170 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:08:00 crc kubenswrapper[4731]: I1129 07:08:00.806262 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:08:00 crc kubenswrapper[4731]: E1129 07:08:00.806396 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:08:00 crc kubenswrapper[4731]: I1129 07:08:00.806280 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:08:00 crc kubenswrapper[4731]: E1129 07:08:00.806473 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:08:00 crc kubenswrapper[4731]: I1129 07:08:00.806267 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2pp9l" Nov 29 07:08:00 crc kubenswrapper[4731]: E1129 07:08:00.806540 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2pp9l" podUID="944440c1-51b2-4c49-b5fd-4c024fc33ace" Nov 29 07:08:01 crc kubenswrapper[4731]: E1129 07:08:01.754667 4731 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Nov 29 07:08:01 crc kubenswrapper[4731]: I1129 07:08:01.805899 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:08:01 crc kubenswrapper[4731]: E1129 07:08:01.806925 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:08:01 crc kubenswrapper[4731]: E1129 07:08:01.944823 4731 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 29 07:08:02 crc kubenswrapper[4731]: I1129 07:08:02.806503 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2pp9l" Nov 29 07:08:02 crc kubenswrapper[4731]: I1129 07:08:02.806552 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:08:02 crc kubenswrapper[4731]: E1129 07:08:02.806675 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2pp9l" podUID="944440c1-51b2-4c49-b5fd-4c024fc33ace" Nov 29 07:08:02 crc kubenswrapper[4731]: E1129 07:08:02.806808 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:08:02 crc kubenswrapper[4731]: I1129 07:08:02.806608 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:08:02 crc kubenswrapper[4731]: E1129 07:08:02.806923 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:08:03 crc kubenswrapper[4731]: I1129 07:08:03.806930 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:08:03 crc kubenswrapper[4731]: E1129 07:08:03.807229 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:08:04 crc kubenswrapper[4731]: I1129 07:08:04.806866 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:08:04 crc kubenswrapper[4731]: E1129 07:08:04.807039 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:08:04 crc kubenswrapper[4731]: I1129 07:08:04.807260 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:08:04 crc kubenswrapper[4731]: E1129 07:08:04.807319 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:08:04 crc kubenswrapper[4731]: I1129 07:08:04.807452 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2pp9l" Nov 29 07:08:04 crc kubenswrapper[4731]: E1129 07:08:04.807525 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2pp9l" podUID="944440c1-51b2-4c49-b5fd-4c024fc33ace" Nov 29 07:08:05 crc kubenswrapper[4731]: I1129 07:08:05.807574 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:08:05 crc kubenswrapper[4731]: E1129 07:08:05.807804 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:08:06 crc kubenswrapper[4731]: I1129 07:08:06.806182 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2pp9l" Nov 29 07:08:06 crc kubenswrapper[4731]: I1129 07:08:06.806818 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:08:06 crc kubenswrapper[4731]: I1129 07:08:06.806879 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:08:06 crc kubenswrapper[4731]: E1129 07:08:06.806957 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:08:06 crc kubenswrapper[4731]: E1129 07:08:06.807053 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:08:06 crc kubenswrapper[4731]: E1129 07:08:06.807560 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2pp9l" podUID="944440c1-51b2-4c49-b5fd-4c024fc33ace" Nov 29 07:08:06 crc kubenswrapper[4731]: E1129 07:08:06.945739 4731 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 29 07:08:07 crc kubenswrapper[4731]: I1129 07:08:07.806860 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:08:07 crc kubenswrapper[4731]: E1129 07:08:07.807856 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:08:08 crc kubenswrapper[4731]: I1129 07:08:08.806400 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2pp9l" Nov 29 07:08:08 crc kubenswrapper[4731]: I1129 07:08:08.806464 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:08:08 crc kubenswrapper[4731]: I1129 07:08:08.806838 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:08:08 crc kubenswrapper[4731]: E1129 07:08:08.807010 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:08:08 crc kubenswrapper[4731]: E1129 07:08:08.807185 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:08:08 crc kubenswrapper[4731]: E1129 07:08:08.807298 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2pp9l" podUID="944440c1-51b2-4c49-b5fd-4c024fc33ace" Nov 29 07:08:09 crc kubenswrapper[4731]: I1129 07:08:09.806157 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:08:09 crc kubenswrapper[4731]: E1129 07:08:09.806430 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:08:10 crc kubenswrapper[4731]: I1129 07:08:10.806430 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:08:10 crc kubenswrapper[4731]: E1129 07:08:10.806717 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:08:10 crc kubenswrapper[4731]: I1129 07:08:10.806734 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2pp9l" Nov 29 07:08:10 crc kubenswrapper[4731]: E1129 07:08:10.806871 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2pp9l" podUID="944440c1-51b2-4c49-b5fd-4c024fc33ace" Nov 29 07:08:10 crc kubenswrapper[4731]: I1129 07:08:10.807670 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:08:10 crc kubenswrapper[4731]: E1129 07:08:10.807774 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:08:10 crc kubenswrapper[4731]: I1129 07:08:10.808018 4731 scope.go:117] "RemoveContainer" containerID="90ceaa53a40826a30748a2057fff56bcb3597cf73ed5e0a18bc45d98e0e6c1b6" Nov 29 07:08:10 crc kubenswrapper[4731]: E1129 07:08:10.808534 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-x4t5j_openshift-ovn-kubernetes(7d4585c4-ac4a-4268-b25e-47509c17cfe2)\"" pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" podUID="7d4585c4-ac4a-4268-b25e-47509c17cfe2" Nov 29 07:08:11 crc kubenswrapper[4731]: I1129 07:08:11.546326 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5rsbt_5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8/kube-multus/1.log" Nov 29 07:08:11 crc kubenswrapper[4731]: I1129 07:08:11.546813 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5rsbt_5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8/kube-multus/0.log" Nov 29 07:08:11 crc kubenswrapper[4731]: I1129 07:08:11.546963 4731 generic.go:334] "Generic (PLEG): container finished" 
podID="5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8" containerID="bae9d331b627f3cb340763c8fae4df7b74979611e8643e081beaa89f127f9c86" exitCode=1 Nov 29 07:08:11 crc kubenswrapper[4731]: I1129 07:08:11.547078 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-5rsbt" event={"ID":"5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8","Type":"ContainerDied","Data":"bae9d331b627f3cb340763c8fae4df7b74979611e8643e081beaa89f127f9c86"} Nov 29 07:08:11 crc kubenswrapper[4731]: I1129 07:08:11.547161 4731 scope.go:117] "RemoveContainer" containerID="4026306c62b322aa02e08b8ebd6f9d5d1eaa75e5026c9b1fc4c8d3c1ff77e2a5" Nov 29 07:08:11 crc kubenswrapper[4731]: I1129 07:08:11.547927 4731 scope.go:117] "RemoveContainer" containerID="bae9d331b627f3cb340763c8fae4df7b74979611e8643e081beaa89f127f9c86" Nov 29 07:08:11 crc kubenswrapper[4731]: E1129 07:08:11.548183 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-5rsbt_openshift-multus(5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8)\"" pod="openshift-multus/multus-5rsbt" podUID="5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8" Nov 29 07:08:11 crc kubenswrapper[4731]: I1129 07:08:11.566475 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sp9vw" podStartSLOduration=101.566413329 podStartE2EDuration="1m41.566413329s" podCreationTimestamp="2025-11-29 07:06:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:07:55.498328178 +0000 UTC m=+114.388689281" watchObservedRunningTime="2025-11-29 07:08:11.566413329 +0000 UTC m=+130.456774432" Nov 29 07:08:11 crc kubenswrapper[4731]: I1129 07:08:11.806165 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:08:11 crc kubenswrapper[4731]: E1129 07:08:11.806363 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:08:11 crc kubenswrapper[4731]: E1129 07:08:11.946392 4731 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 29 07:08:12 crc kubenswrapper[4731]: I1129 07:08:12.551903 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5rsbt_5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8/kube-multus/1.log" Nov 29 07:08:12 crc kubenswrapper[4731]: I1129 07:08:12.805970 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:08:12 crc kubenswrapper[4731]: I1129 07:08:12.806222 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:08:12 crc kubenswrapper[4731]: I1129 07:08:12.806290 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2pp9l" Nov 29 07:08:12 crc kubenswrapper[4731]: E1129 07:08:12.806358 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:08:12 crc kubenswrapper[4731]: E1129 07:08:12.806748 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:08:12 crc kubenswrapper[4731]: E1129 07:08:12.806886 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2pp9l" podUID="944440c1-51b2-4c49-b5fd-4c024fc33ace" Nov 29 07:08:13 crc kubenswrapper[4731]: I1129 07:08:13.806905 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:08:13 crc kubenswrapper[4731]: E1129 07:08:13.807067 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:08:14 crc kubenswrapper[4731]: I1129 07:08:14.806281 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:08:14 crc kubenswrapper[4731]: I1129 07:08:14.806403 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:08:14 crc kubenswrapper[4731]: I1129 07:08:14.806536 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2pp9l" Nov 29 07:08:14 crc kubenswrapper[4731]: E1129 07:08:14.806433 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:08:14 crc kubenswrapper[4731]: E1129 07:08:14.806601 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:08:14 crc kubenswrapper[4731]: E1129 07:08:14.806806 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2pp9l" podUID="944440c1-51b2-4c49-b5fd-4c024fc33ace" Nov 29 07:08:15 crc kubenswrapper[4731]: I1129 07:08:15.806200 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:08:15 crc kubenswrapper[4731]: E1129 07:08:15.806378 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:08:16 crc kubenswrapper[4731]: I1129 07:08:16.806492 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2pp9l" Nov 29 07:08:16 crc kubenswrapper[4731]: I1129 07:08:16.806492 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:08:16 crc kubenswrapper[4731]: I1129 07:08:16.806512 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:08:16 crc kubenswrapper[4731]: E1129 07:08:16.806683 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2pp9l" podUID="944440c1-51b2-4c49-b5fd-4c024fc33ace" Nov 29 07:08:16 crc kubenswrapper[4731]: E1129 07:08:16.806801 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:08:16 crc kubenswrapper[4731]: E1129 07:08:16.806879 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:08:16 crc kubenswrapper[4731]: E1129 07:08:16.948023 4731 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 29 07:08:17 crc kubenswrapper[4731]: I1129 07:08:17.806193 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:08:17 crc kubenswrapper[4731]: E1129 07:08:17.806369 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:08:18 crc kubenswrapper[4731]: I1129 07:08:18.806643 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2pp9l" Nov 29 07:08:18 crc kubenswrapper[4731]: I1129 07:08:18.806787 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:08:18 crc kubenswrapper[4731]: I1129 07:08:18.806993 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:08:18 crc kubenswrapper[4731]: E1129 07:08:18.806986 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2pp9l" podUID="944440c1-51b2-4c49-b5fd-4c024fc33ace" Nov 29 07:08:18 crc kubenswrapper[4731]: E1129 07:08:18.807131 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:08:18 crc kubenswrapper[4731]: E1129 07:08:18.807245 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:08:19 crc kubenswrapper[4731]: I1129 07:08:19.806914 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:08:19 crc kubenswrapper[4731]: E1129 07:08:19.807068 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:08:20 crc kubenswrapper[4731]: I1129 07:08:20.806596 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:08:20 crc kubenswrapper[4731]: I1129 07:08:20.806642 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:08:20 crc kubenswrapper[4731]: I1129 07:08:20.806671 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2pp9l" Nov 29 07:08:20 crc kubenswrapper[4731]: E1129 07:08:20.806817 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:08:20 crc kubenswrapper[4731]: E1129 07:08:20.807014 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:08:20 crc kubenswrapper[4731]: E1129 07:08:20.807256 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2pp9l" podUID="944440c1-51b2-4c49-b5fd-4c024fc33ace" Nov 29 07:08:21 crc kubenswrapper[4731]: I1129 07:08:21.806824 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:08:21 crc kubenswrapper[4731]: E1129 07:08:21.807778 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:08:21 crc kubenswrapper[4731]: E1129 07:08:21.948724 4731 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 29 07:08:22 crc kubenswrapper[4731]: I1129 07:08:22.805888 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2pp9l" Nov 29 07:08:22 crc kubenswrapper[4731]: I1129 07:08:22.805918 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:08:22 crc kubenswrapper[4731]: I1129 07:08:22.805923 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:08:22 crc kubenswrapper[4731]: E1129 07:08:22.806372 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2pp9l" podUID="944440c1-51b2-4c49-b5fd-4c024fc33ace" Nov 29 07:08:22 crc kubenswrapper[4731]: E1129 07:08:22.806588 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:08:22 crc kubenswrapper[4731]: E1129 07:08:22.806739 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:08:22 crc kubenswrapper[4731]: I1129 07:08:22.807242 4731 scope.go:117] "RemoveContainer" containerID="bae9d331b627f3cb340763c8fae4df7b74979611e8643e081beaa89f127f9c86" Nov 29 07:08:23 crc kubenswrapper[4731]: I1129 07:08:23.595334 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5rsbt_5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8/kube-multus/1.log" Nov 29 07:08:23 crc kubenswrapper[4731]: I1129 07:08:23.595810 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-5rsbt" event={"ID":"5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8","Type":"ContainerStarted","Data":"7a94cd2b3571722a673cd8b315be00d962733b4fdc954fffd6cb25b7c577b0c4"} Nov 29 07:08:23 crc kubenswrapper[4731]: I1129 07:08:23.806674 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:08:23 crc kubenswrapper[4731]: E1129 07:08:23.806880 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:08:24 crc kubenswrapper[4731]: I1129 07:08:24.806023 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:08:24 crc kubenswrapper[4731]: I1129 07:08:24.806144 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:08:24 crc kubenswrapper[4731]: E1129 07:08:24.806178 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:08:24 crc kubenswrapper[4731]: I1129 07:08:24.806283 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2pp9l" Nov 29 07:08:24 crc kubenswrapper[4731]: E1129 07:08:24.806748 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:08:24 crc kubenswrapper[4731]: E1129 07:08:24.806902 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2pp9l" podUID="944440c1-51b2-4c49-b5fd-4c024fc33ace" Nov 29 07:08:24 crc kubenswrapper[4731]: I1129 07:08:24.807122 4731 scope.go:117] "RemoveContainer" containerID="90ceaa53a40826a30748a2057fff56bcb3597cf73ed5e0a18bc45d98e0e6c1b6" Nov 29 07:08:25 crc kubenswrapper[4731]: I1129 07:08:25.606958 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x4t5j_7d4585c4-ac4a-4268-b25e-47509c17cfe2/ovnkube-controller/3.log" Nov 29 07:08:25 crc kubenswrapper[4731]: I1129 07:08:25.610465 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" event={"ID":"7d4585c4-ac4a-4268-b25e-47509c17cfe2","Type":"ContainerStarted","Data":"7e89c5211fc1a0b3e3a69a2b868401a20a643f1f5797275d756d27ee967e8691"} Nov 29 07:08:25 crc kubenswrapper[4731]: I1129 07:08:25.611610 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" Nov 29 07:08:25 crc kubenswrapper[4731]: I1129 07:08:25.641328 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-2pp9l"] Nov 29 07:08:25 crc kubenswrapper[4731]: I1129 07:08:25.641492 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2pp9l" Nov 29 07:08:25 crc kubenswrapper[4731]: E1129 07:08:25.641618 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2pp9l" podUID="944440c1-51b2-4c49-b5fd-4c024fc33ace" Nov 29 07:08:25 crc kubenswrapper[4731]: I1129 07:08:25.642023 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" podStartSLOduration=114.641999628 podStartE2EDuration="1m54.641999628s" podCreationTimestamp="2025-11-29 07:06:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:08:25.641258658 +0000 UTC m=+144.531619761" watchObservedRunningTime="2025-11-29 07:08:25.641999628 +0000 UTC m=+144.532360731" Nov 29 07:08:25 crc kubenswrapper[4731]: I1129 07:08:25.806895 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:08:25 crc kubenswrapper[4731]: E1129 07:08:25.807672 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:08:26 crc kubenswrapper[4731]: I1129 07:08:26.806058 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:08:26 crc kubenswrapper[4731]: I1129 07:08:26.806058 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2pp9l" Nov 29 07:08:26 crc kubenswrapper[4731]: I1129 07:08:26.806058 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:08:26 crc kubenswrapper[4731]: E1129 07:08:26.806379 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2pp9l" podUID="944440c1-51b2-4c49-b5fd-4c024fc33ace" Nov 29 07:08:26 crc kubenswrapper[4731]: E1129 07:08:26.806222 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:08:26 crc kubenswrapper[4731]: E1129 07:08:26.806426 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:08:26 crc kubenswrapper[4731]: E1129 07:08:26.950161 4731 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 29 07:08:27 crc kubenswrapper[4731]: I1129 07:08:27.806602 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:08:27 crc kubenswrapper[4731]: E1129 07:08:27.806789 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:08:28 crc kubenswrapper[4731]: I1129 07:08:28.806298 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:08:28 crc kubenswrapper[4731]: I1129 07:08:28.806298 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:08:28 crc kubenswrapper[4731]: I1129 07:08:28.806325 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2pp9l" Nov 29 07:08:28 crc kubenswrapper[4731]: E1129 07:08:28.806455 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:08:28 crc kubenswrapper[4731]: E1129 07:08:28.806744 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:08:28 crc kubenswrapper[4731]: E1129 07:08:28.806849 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2pp9l" podUID="944440c1-51b2-4c49-b5fd-4c024fc33ace" Nov 29 07:08:29 crc kubenswrapper[4731]: I1129 07:08:29.806623 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:08:29 crc kubenswrapper[4731]: E1129 07:08:29.806780 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:08:30 crc kubenswrapper[4731]: I1129 07:08:30.806181 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2pp9l" Nov 29 07:08:30 crc kubenswrapper[4731]: I1129 07:08:30.806266 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:08:30 crc kubenswrapper[4731]: E1129 07:08:30.806371 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2pp9l" podUID="944440c1-51b2-4c49-b5fd-4c024fc33ace" Nov 29 07:08:30 crc kubenswrapper[4731]: I1129 07:08:30.806196 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:08:30 crc kubenswrapper[4731]: E1129 07:08:30.806510 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:08:30 crc kubenswrapper[4731]: E1129 07:08:30.806648 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:08:31 crc kubenswrapper[4731]: I1129 07:08:31.805823 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:08:31 crc kubenswrapper[4731]: E1129 07:08:31.807019 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:08:32 crc kubenswrapper[4731]: I1129 07:08:32.806761 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2pp9l" Nov 29 07:08:32 crc kubenswrapper[4731]: I1129 07:08:32.806880 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:08:32 crc kubenswrapper[4731]: I1129 07:08:32.806761 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:08:32 crc kubenswrapper[4731]: I1129 07:08:32.809368 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Nov 29 07:08:32 crc kubenswrapper[4731]: I1129 07:08:32.809368 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Nov 29 07:08:32 crc kubenswrapper[4731]: I1129 07:08:32.810037 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Nov 29 07:08:32 crc kubenswrapper[4731]: I1129 07:08:32.810045 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Nov 29 07:08:33 crc kubenswrapper[4731]: I1129 07:08:33.003221 4731 patch_prober.go:28] interesting pod/machine-config-daemon-rscr8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:08:33 crc kubenswrapper[4731]: I1129 07:08:33.003315 4731 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:08:33 crc kubenswrapper[4731]: I1129 07:08:33.348277 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" Nov 29 07:08:33 crc kubenswrapper[4731]: I1129 07:08:33.739125 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:08:33 crc kubenswrapper[4731]: E1129 07:08:33.739402 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:10:35.739349129 +0000 UTC m=+274.629710242 (durationBeforeRetry 2m2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:33 crc kubenswrapper[4731]: I1129 07:08:33.807738 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:08:33 crc kubenswrapper[4731]: I1129 07:08:33.813655 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Nov 29 07:08:33 crc kubenswrapper[4731]: I1129 07:08:33.813780 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Nov 29 07:08:34 crc kubenswrapper[4731]: I1129 07:08:34.246191 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:08:34 crc kubenswrapper[4731]: I1129 07:08:34.246319 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:08:34 crc kubenswrapper[4731]: I1129 07:08:34.247808 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:08:34 crc kubenswrapper[4731]: I1129 07:08:34.257317 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:08:34 crc kubenswrapper[4731]: I1129 07:08:34.347607 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:08:34 crc kubenswrapper[4731]: I1129 07:08:34.348135 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:08:34 crc kubenswrapper[4731]: I1129 07:08:34.352370 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:08:34 crc kubenswrapper[4731]: I1129 07:08:34.354483 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:08:34 crc 
kubenswrapper[4731]: I1129 07:08:34.426910 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:08:34 crc kubenswrapper[4731]: I1129 07:08:34.632150 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:08:34 crc kubenswrapper[4731]: I1129 07:08:34.640849 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:08:34 crc kubenswrapper[4731]: I1129 07:08:34.642784 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"90b73926bd643413a36701c5cff4600ec0be310d00d35a1b50017c14eb71fef5"} Nov 29 07:08:35 crc kubenswrapper[4731]: I1129 07:08:35.404801 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Nov 29 07:08:35 crc kubenswrapper[4731]: I1129 07:08:35.448106 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-4scbk"] Nov 29 07:08:35 crc kubenswrapper[4731]: I1129 07:08:35.448615 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-4scbk" Nov 29 07:08:35 crc kubenswrapper[4731]: I1129 07:08:35.450526 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-htrhs"] Nov 29 07:08:35 crc kubenswrapper[4731]: I1129 07:08:35.451225 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-htrhs" Nov 29 07:08:35 crc kubenswrapper[4731]: I1129 07:08:35.451737 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 29 07:08:35 crc kubenswrapper[4731]: I1129 07:08:35.451863 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 29 07:08:35 crc kubenswrapper[4731]: I1129 07:08:35.452045 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 29 07:08:35 crc kubenswrapper[4731]: I1129 07:08:35.452132 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 29 07:08:35 crc kubenswrapper[4731]: I1129 07:08:35.452223 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 29 07:08:35 crc kubenswrapper[4731]: I1129 07:08:35.452869 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 29 07:08:35 crc kubenswrapper[4731]: I1129 07:08:35.453599 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Nov 29 07:08:35 crc kubenswrapper[4731]: I1129 07:08:35.455282 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Nov 29 07:08:35 crc kubenswrapper[4731]: I1129 07:08:35.455600 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Nov 29 07:08:35 crc kubenswrapper[4731]: I1129 07:08:35.455607 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Nov 29 07:08:35 crc kubenswrapper[4731]: I1129 07:08:35.455693 4731 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-console"/"console-serving-cert" Nov 29 07:08:35 crc kubenswrapper[4731]: I1129 07:08:35.460409 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-hs7k4"] Nov 29 07:08:35 crc kubenswrapper[4731]: I1129 07:08:35.460973 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-hs7k4" Nov 29 07:08:35 crc kubenswrapper[4731]: I1129 07:08:35.461330 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dcvjn"] Nov 29 07:08:35 crc kubenswrapper[4731]: I1129 07:08:35.461965 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dcvjn" Nov 29 07:08:35 crc kubenswrapper[4731]: I1129 07:08:35.462256 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-s72t6"] Nov 29 07:08:35 crc kubenswrapper[4731]: I1129 07:08:35.462551 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-s72t6" Nov 29 07:08:35 crc kubenswrapper[4731]: I1129 07:08:35.463799 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Nov 29 07:08:35 crc kubenswrapper[4731]: I1129 07:08:35.463799 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Nov 29 07:08:35 crc kubenswrapper[4731]: I1129 07:08:35.464089 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Nov 29 07:08:35 crc kubenswrapper[4731]: I1129 07:08:35.464237 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-kwjhb"] Nov 29 07:08:35 crc kubenswrapper[4731]: I1129 07:08:35.464618 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kwjhb" Nov 29 07:08:35 crc kubenswrapper[4731]: I1129 07:08:35.465703 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bfdjj"] Nov 29 07:08:35 crc kubenswrapper[4731]: I1129 07:08:35.467227 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-bc855"] Nov 29 07:08:35 crc kubenswrapper[4731]: I1129 07:08:35.467457 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bfdjj" Nov 29 07:08:35 crc kubenswrapper[4731]: I1129 07:08:35.468644 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-p7p7h"] Nov 29 07:08:35 crc kubenswrapper[4731]: I1129 07:08:35.468742 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bc855" Nov 29 07:08:35 crc kubenswrapper[4731]: I1129 07:08:35.469604 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Nov 29 07:08:35 crc kubenswrapper[4731]: I1129 07:08:35.472754 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p7p7h" Nov 29 07:08:35 crc kubenswrapper[4731]: I1129 07:08:35.473867 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Nov 29 07:08:35 crc kubenswrapper[4731]: I1129 07:08:35.474310 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Nov 29 07:08:35 crc kubenswrapper[4731]: I1129 07:08:35.478112 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Nov 29 07:08:35 crc kubenswrapper[4731]: I1129 07:08:35.478190 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Nov 29 07:08:35 crc kubenswrapper[4731]: I1129 07:08:35.478235 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Nov 29 07:08:35 crc kubenswrapper[4731]: I1129 07:08:35.478275 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Nov 29 07:08:35 crc kubenswrapper[4731]: I1129 07:08:35.478392 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Nov 29 07:08:35 crc kubenswrapper[4731]: I1129 07:08:35.478106 4731 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Nov 29 07:08:35 crc kubenswrapper[4731]: I1129 07:08:35.478550 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Nov 29 07:08:36 crc kubenswrapper[4731]: I1129 07:08:36.955062 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Nov 29 07:08:36 crc kubenswrapper[4731]: I1129 07:08:36.956320 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Nov 29 07:08:36 crc kubenswrapper[4731]: I1129 07:08:36.956867 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Nov 29 07:08:36 crc kubenswrapper[4731]: I1129 07:08:36.957174 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Nov 29 07:08:36 crc kubenswrapper[4731]: I1129 07:08:36.957481 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Nov 29 07:08:36 crc kubenswrapper[4731]: I1129 07:08:36.957685 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Nov 29 07:08:36 crc kubenswrapper[4731]: I1129 07:08:36.957843 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Nov 29 07:08:36 crc kubenswrapper[4731]: I1129 07:08:36.958053 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Nov 29 07:08:36 crc kubenswrapper[4731]: I1129 07:08:36.958208 4731 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Nov 29 07:08:36 crc kubenswrapper[4731]: I1129 07:08:36.958343 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Nov 29 07:08:36 crc kubenswrapper[4731]: I1129 07:08:36.958618 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Nov 29 07:08:36 crc kubenswrapper[4731]: I1129 07:08:36.959227 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Nov 29 07:08:36 crc kubenswrapper[4731]: I1129 07:08:36.959595 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 29 07:08:36 crc kubenswrapper[4731]: I1129 07:08:36.959796 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Nov 29 07:08:36 crc kubenswrapper[4731]: I1129 07:08:36.960127 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Nov 29 07:08:36 crc kubenswrapper[4731]: I1129 07:08:36.960497 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Nov 29 07:08:36 crc kubenswrapper[4731]: I1129 07:08:36.960663 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Nov 29 07:08:36 crc kubenswrapper[4731]: I1129 07:08:36.960817 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 29 07:08:36 crc kubenswrapper[4731]: I1129 07:08:36.960882 4731 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Nov 29 07:08:36 crc kubenswrapper[4731]: I1129 07:08:36.961033 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Nov 29 07:08:36 crc kubenswrapper[4731]: I1129 07:08:36.961394 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Nov 29 07:08:36 crc kubenswrapper[4731]: I1129 07:08:36.961660 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Nov 29 07:08:36 crc kubenswrapper[4731]: I1129 07:08:36.962345 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 29 07:08:36 crc kubenswrapper[4731]: I1129 07:08:36.960822 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Nov 29 07:08:36 crc kubenswrapper[4731]: I1129 07:08:36.999192 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.001544 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.004459 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.005184 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.005358 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.005647 
4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.006069 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.007720 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.010217 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.035442 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.037168 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.048334 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.057328 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aa040abb-6524-4abd-834f-18b72a623d16-client-ca\") pod \"route-controller-manager-6576b87f9c-kwjhb\" (UID: \"aa040abb-6524-4abd-834f-18b72a623d16\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kwjhb" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.057903 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0123bcb6-853a-4329-bceb-87a77cd34b27-audit-dir\") pod \"apiserver-7bbb656c7d-p7p7h\" (UID: 
\"0123bcb6-853a-4329-bceb-87a77cd34b27\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p7p7h" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.057934 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/55949699-24bb-4705-8bf0-db1dd651d387-console-config\") pod \"console-f9d7485db-htrhs\" (UID: \"55949699-24bb-4705-8bf0-db1dd651d387\") " pod="openshift-console/console-f9d7485db-htrhs" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.057971 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/85473ad4-f055-4531-a19e-30697cd51568-machine-approver-tls\") pod \"machine-approver-56656f9798-bc855\" (UID: \"85473ad4-f055-4531-a19e-30697cd51568\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bc855" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.057995 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/55949699-24bb-4705-8bf0-db1dd651d387-console-oauth-config\") pod \"console-f9d7485db-htrhs\" (UID: \"55949699-24bb-4705-8bf0-db1dd651d387\") " pod="openshift-console/console-f9d7485db-htrhs" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.058013 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwfkt\" (UniqueName: \"kubernetes.io/projected/55949699-24bb-4705-8bf0-db1dd651d387-kube-api-access-fwfkt\") pod \"console-f9d7485db-htrhs\" (UID: \"55949699-24bb-4705-8bf0-db1dd651d387\") " pod="openshift-console/console-f9d7485db-htrhs" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.058049 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6wv5\" (UniqueName: 
\"kubernetes.io/projected/85473ad4-f055-4531-a19e-30697cd51568-kube-api-access-w6wv5\") pod \"machine-approver-56656f9798-bc855\" (UID: \"85473ad4-f055-4531-a19e-30697cd51568\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bc855" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.058072 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/55949699-24bb-4705-8bf0-db1dd651d387-console-serving-cert\") pod \"console-f9d7485db-htrhs\" (UID: \"55949699-24bb-4705-8bf0-db1dd651d387\") " pod="openshift-console/console-f9d7485db-htrhs" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.058146 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e474597-96ba-424e-967d-48c16424ef23-config\") pod \"openshift-apiserver-operator-796bbdcf4f-bfdjj\" (UID: \"6e474597-96ba-424e-967d-48c16424ef23\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bfdjj" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.058188 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qns9t\" (UniqueName: \"kubernetes.io/projected/0123bcb6-853a-4329-bceb-87a77cd34b27-kube-api-access-qns9t\") pod \"apiserver-7bbb656c7d-p7p7h\" (UID: \"0123bcb6-853a-4329-bceb-87a77cd34b27\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p7p7h" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.058211 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhdwc\" (UniqueName: \"kubernetes.io/projected/4fa61345-b935-4924-a05b-58a9ec104f07-kube-api-access-zhdwc\") pod \"openshift-controller-manager-operator-756b6f6bc6-dcvjn\" (UID: \"4fa61345-b935-4924-a05b-58a9ec104f07\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dcvjn" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.058375 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlzdl\" (UniqueName: \"kubernetes.io/projected/e80651be-fbdb-464e-876a-c090e2fa0475-kube-api-access-vlzdl\") pod \"console-operator-58897d9998-hs7k4\" (UID: \"e80651be-fbdb-464e-876a-c090e2fa0475\") " pod="openshift-console-operator/console-operator-58897d9998-hs7k4" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.058444 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0123bcb6-853a-4329-bceb-87a77cd34b27-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-p7p7h\" (UID: \"0123bcb6-853a-4329-bceb-87a77cd34b27\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p7p7h" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.058490 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0123bcb6-853a-4329-bceb-87a77cd34b27-audit-policies\") pod \"apiserver-7bbb656c7d-p7p7h\" (UID: \"0123bcb6-853a-4329-bceb-87a77cd34b27\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p7p7h" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.058517 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4fa61345-b935-4924-a05b-58a9ec104f07-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-dcvjn\" (UID: \"4fa61345-b935-4924-a05b-58a9ec104f07\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dcvjn" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.058545 4731 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aa040abb-6524-4abd-834f-18b72a623d16-serving-cert\") pod \"route-controller-manager-6576b87f9c-kwjhb\" (UID: \"aa040abb-6524-4abd-834f-18b72a623d16\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kwjhb" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.058589 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dzm9\" (UniqueName: \"kubernetes.io/projected/aa040abb-6524-4abd-834f-18b72a623d16-kube-api-access-2dzm9\") pod \"route-controller-manager-6576b87f9c-kwjhb\" (UID: \"aa040abb-6524-4abd-834f-18b72a623d16\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kwjhb" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.058619 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa040abb-6524-4abd-834f-18b72a623d16-config\") pod \"route-controller-manager-6576b87f9c-kwjhb\" (UID: \"aa040abb-6524-4abd-834f-18b72a623d16\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kwjhb" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.058639 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0123bcb6-853a-4329-bceb-87a77cd34b27-etcd-client\") pod \"apiserver-7bbb656c7d-p7p7h\" (UID: \"0123bcb6-853a-4329-bceb-87a77cd34b27\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p7p7h" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.058661 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0123bcb6-853a-4329-bceb-87a77cd34b27-serving-cert\") pod \"apiserver-7bbb656c7d-p7p7h\" (UID: 
\"0123bcb6-853a-4329-bceb-87a77cd34b27\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p7p7h" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.058683 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/55949699-24bb-4705-8bf0-db1dd651d387-trusted-ca-bundle\") pod \"console-f9d7485db-htrhs\" (UID: \"55949699-24bb-4705-8bf0-db1dd651d387\") " pod="openshift-console/console-f9d7485db-htrhs" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.058708 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/55949699-24bb-4705-8bf0-db1dd651d387-oauth-serving-cert\") pod \"console-f9d7485db-htrhs\" (UID: \"55949699-24bb-4705-8bf0-db1dd651d387\") " pod="openshift-console/console-f9d7485db-htrhs" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.058733 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e474597-96ba-424e-967d-48c16424ef23-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-bfdjj\" (UID: \"6e474597-96ba-424e-967d-48c16424ef23\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bfdjj" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.058773 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0123bcb6-853a-4329-bceb-87a77cd34b27-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-p7p7h\" (UID: \"0123bcb6-853a-4329-bceb-87a77cd34b27\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p7p7h" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.058815 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/e80651be-fbdb-464e-876a-c090e2fa0475-serving-cert\") pod \"console-operator-58897d9998-hs7k4\" (UID: \"e80651be-fbdb-464e-876a-c090e2fa0475\") " pod="openshift-console-operator/console-operator-58897d9998-hs7k4" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.058840 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vghv\" (UniqueName: \"kubernetes.io/projected/6e474597-96ba-424e-967d-48c16424ef23-kube-api-access-2vghv\") pod \"openshift-apiserver-operator-796bbdcf4f-bfdjj\" (UID: \"6e474597-96ba-424e-967d-48c16424ef23\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bfdjj" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.058867 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0123bcb6-853a-4329-bceb-87a77cd34b27-encryption-config\") pod \"apiserver-7bbb656c7d-p7p7h\" (UID: \"0123bcb6-853a-4329-bceb-87a77cd34b27\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p7p7h" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.058911 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e80651be-fbdb-464e-876a-c090e2fa0475-config\") pod \"console-operator-58897d9998-hs7k4\" (UID: \"e80651be-fbdb-464e-876a-c090e2fa0475\") " pod="openshift-console-operator/console-operator-58897d9998-hs7k4" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.058947 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/55949699-24bb-4705-8bf0-db1dd651d387-service-ca\") pod \"console-f9d7485db-htrhs\" (UID: \"55949699-24bb-4705-8bf0-db1dd651d387\") " pod="openshift-console/console-f9d7485db-htrhs" Nov 29 07:08:37 crc 
kubenswrapper[4731]: I1129 07:08:37.058973 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e80651be-fbdb-464e-876a-c090e2fa0475-trusted-ca\") pod \"console-operator-58897d9998-hs7k4\" (UID: \"e80651be-fbdb-464e-876a-c090e2fa0475\") " pod="openshift-console-operator/console-operator-58897d9998-hs7k4" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.058995 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/85473ad4-f055-4531-a19e-30697cd51568-auth-proxy-config\") pod \"machine-approver-56656f9798-bc855\" (UID: \"85473ad4-f055-4531-a19e-30697cd51568\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bc855" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.059032 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fa61345-b935-4924-a05b-58a9ec104f07-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-dcvjn\" (UID: \"4fa61345-b935-4924-a05b-58a9ec104f07\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dcvjn" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.059058 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85473ad4-f055-4531-a19e-30697cd51568-config\") pod \"machine-approver-56656f9798-bc855\" (UID: \"85473ad4-f055-4531-a19e-30697cd51568\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bc855" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.072125 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-qg27s"] Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.073997 
4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-sjt7j"] Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.074510 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qdch2"] Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.074924 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"f180753e44ff4fb5f3a4fc2611a11e17f4e82f6977783c57356a94f7c9ae23d8"} Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.075015 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-c6kf9"] Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.075390 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-xstx4"] Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.075867 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-m7s4c"] Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.076342 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-sjt7j" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.076536 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-gw6c8"] Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.076700 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qdch2" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.074148 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-qg27s" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.077071 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-qg27s"] Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.077151 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dcvjn"] Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.077234 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-c6kf9"] Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.077329 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-htrhs"] Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.077440 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gw6c8" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.077726 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-c6kf9" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.084453 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-xstx4" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.085241 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-4scbk"] Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.085326 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-s72t6"] Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.085363 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.085645 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.086885 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-m7s4c" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.087207 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.087549 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.087861 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.088111 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.088837 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"bc42ef5664f473c8e519c587818f724681d65356da08140cb17cfc94bc773f38"} Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.089846 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.089976 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.090240 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.090415 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.094313 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qdch2"] Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.094379 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-hs7k4"] Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.096467 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.098306 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bfdjj"] Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.100001 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-kwjhb"] Nov 29 07:08:37 
crc kubenswrapper[4731]: I1129 07:08:37.100077 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-gw6c8"] Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.102272 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.102593 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.103284 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.103466 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.103699 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.104191 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.104674 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.104792 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.120124 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.128934 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-apiserver/apiserver-76f77b778f-m7s4c"] Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.131017 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.131410 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.133503 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.135779 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-xstx4"] Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.134707 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.134834 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.134873 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.134958 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.135007 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.135031 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.135100 4731 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.135102 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.135168 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.135189 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.135242 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.137756 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.135312 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.135394 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.135456 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.135546 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.135666 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.135739 4731 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver"/"kube-root-ca.crt" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.138735 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-sjt7j"] Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.140825 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.141501 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-p7p7h"] Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.143885 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.145094 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-l8mm6"] Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.146435 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.147787 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-n5tn2"] Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.148681 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-l8mm6" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.151502 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-n5tn2" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.152889 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.153176 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.155522 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-cn2xp"] Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.158208 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-8nrfn"] Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.158834 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-cn2xp" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.160113 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-8nrfn" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.160175 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.160671 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.166008 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.166510 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.166845 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/914f7ecc-b403-4f7e-9a14-3f56a5a256a9-client-ca\") pod \"controller-manager-879f6c89f-4scbk\" (UID: \"914f7ecc-b403-4f7e-9a14-3f56a5a256a9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4scbk" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.166975 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pv9sj\" (UniqueName: \"kubernetes.io/projected/0ca53b84-140e-4fbf-b822-03a1c73d04aa-kube-api-access-pv9sj\") pod \"dns-default-cn2xp\" (UID: \"0ca53b84-140e-4fbf-b822-03a1c73d04aa\") " pod="openshift-dns/dns-default-cn2xp" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.167078 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5da2fff0-6264-4369-9c21-d322fa65c6b0-etcd-client\") pod \"etcd-operator-b45778765-l8mm6\" (UID: 
\"5da2fff0-6264-4369-9c21-d322fa65c6b0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-l8mm6" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.167317 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fa61345-b935-4924-a05b-58a9ec104f07-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-dcvjn\" (UID: \"4fa61345-b935-4924-a05b-58a9ec104f07\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dcvjn" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.167431 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85473ad4-f055-4531-a19e-30697cd51568-config\") pod \"machine-approver-56656f9798-bc855\" (UID: \"85473ad4-f055-4531-a19e-30697cd51568\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bc855" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.167531 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aa040abb-6524-4abd-834f-18b72a623d16-client-ca\") pod \"route-controller-manager-6576b87f9c-kwjhb\" (UID: \"aa040abb-6524-4abd-834f-18b72a623d16\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kwjhb" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.167727 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0ca53b84-140e-4fbf-b822-03a1c73d04aa-metrics-tls\") pod \"dns-default-cn2xp\" (UID: \"0ca53b84-140e-4fbf-b822-03a1c73d04aa\") " pod="openshift-dns/dns-default-cn2xp" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.167908 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/0123bcb6-853a-4329-bceb-87a77cd34b27-audit-dir\") pod \"apiserver-7bbb656c7d-p7p7h\" (UID: \"0123bcb6-853a-4329-bceb-87a77cd34b27\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p7p7h" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.168029 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/55949699-24bb-4705-8bf0-db1dd651d387-console-config\") pod \"console-f9d7485db-htrhs\" (UID: \"55949699-24bb-4705-8bf0-db1dd651d387\") " pod="openshift-console/console-f9d7485db-htrhs" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.168136 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/914f7ecc-b403-4f7e-9a14-3f56a5a256a9-config\") pod \"controller-manager-879f6c89f-4scbk\" (UID: \"914f7ecc-b403-4f7e-9a14-3f56a5a256a9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4scbk" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.168241 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwfkt\" (UniqueName: \"kubernetes.io/projected/55949699-24bb-4705-8bf0-db1dd651d387-kube-api-access-fwfkt\") pod \"console-f9d7485db-htrhs\" (UID: \"55949699-24bb-4705-8bf0-db1dd651d387\") " pod="openshift-console/console-f9d7485db-htrhs" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.168372 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/85473ad4-f055-4531-a19e-30697cd51568-machine-approver-tls\") pod \"machine-approver-56656f9798-bc855\" (UID: \"85473ad4-f055-4531-a19e-30697cd51568\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bc855" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.168495 4731 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/55949699-24bb-4705-8bf0-db1dd651d387-console-oauth-config\") pod \"console-f9d7485db-htrhs\" (UID: \"55949699-24bb-4705-8bf0-db1dd651d387\") " pod="openshift-console/console-f9d7485db-htrhs" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.168667 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6wv5\" (UniqueName: \"kubernetes.io/projected/85473ad4-f055-4531-a19e-30697cd51568-kube-api-access-w6wv5\") pod \"machine-approver-56656f9798-bc855\" (UID: \"85473ad4-f055-4531-a19e-30697cd51568\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bc855" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.169332 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/40778a89-0bd9-4b5d-a024-f2fec55bfa8f-available-featuregates\") pod \"openshift-config-operator-7777fb866f-gw6c8\" (UID: \"40778a89-0bd9-4b5d-a024-f2fec55bfa8f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gw6c8" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.169387 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/55949699-24bb-4705-8bf0-db1dd651d387-console-serving-cert\") pod \"console-f9d7485db-htrhs\" (UID: \"55949699-24bb-4705-8bf0-db1dd651d387\") " pod="openshift-console/console-f9d7485db-htrhs" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.169419 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e474597-96ba-424e-967d-48c16424ef23-config\") pod \"openshift-apiserver-operator-796bbdcf4f-bfdjj\" (UID: \"6e474597-96ba-424e-967d-48c16424ef23\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bfdjj" Nov 29 
07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.169443 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qns9t\" (UniqueName: \"kubernetes.io/projected/0123bcb6-853a-4329-bceb-87a77cd34b27-kube-api-access-qns9t\") pod \"apiserver-7bbb656c7d-p7p7h\" (UID: \"0123bcb6-853a-4329-bceb-87a77cd34b27\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p7p7h" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.169473 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhdwc\" (UniqueName: \"kubernetes.io/projected/4fa61345-b935-4924-a05b-58a9ec104f07-kube-api-access-zhdwc\") pod \"openshift-controller-manager-operator-756b6f6bc6-dcvjn\" (UID: \"4fa61345-b935-4924-a05b-58a9ec104f07\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dcvjn" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.169500 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5da2fff0-6264-4369-9c21-d322fa65c6b0-serving-cert\") pod \"etcd-operator-b45778765-l8mm6\" (UID: \"5da2fff0-6264-4369-9c21-d322fa65c6b0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-l8mm6" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.169545 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vlzdl\" (UniqueName: \"kubernetes.io/projected/e80651be-fbdb-464e-876a-c090e2fa0475-kube-api-access-vlzdl\") pod \"console-operator-58897d9998-hs7k4\" (UID: \"e80651be-fbdb-464e-876a-c090e2fa0475\") " pod="openshift-console-operator/console-operator-58897d9998-hs7k4" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.169587 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b99f2\" (UniqueName: 
\"kubernetes.io/projected/5f3c7091-33a8-4be0-bb55-63300514c205-kube-api-access-b99f2\") pod \"authentication-operator-69f744f599-s72t6\" (UID: \"5f3c7091-33a8-4be0-bb55-63300514c205\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-s72t6" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.169619 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0123bcb6-853a-4329-bceb-87a77cd34b27-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-p7p7h\" (UID: \"0123bcb6-853a-4329-bceb-87a77cd34b27\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p7p7h" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.169647 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0123bcb6-853a-4329-bceb-87a77cd34b27-audit-policies\") pod \"apiserver-7bbb656c7d-p7p7h\" (UID: \"0123bcb6-853a-4329-bceb-87a77cd34b27\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p7p7h" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.169671 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4fa61345-b935-4924-a05b-58a9ec104f07-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-dcvjn\" (UID: \"4fa61345-b935-4924-a05b-58a9ec104f07\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dcvjn" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.169697 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aa040abb-6524-4abd-834f-18b72a623d16-serving-cert\") pod \"route-controller-manager-6576b87f9c-kwjhb\" (UID: \"aa040abb-6524-4abd-834f-18b72a623d16\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kwjhb" Nov 29 07:08:37 crc 
kubenswrapper[4731]: I1129 07:08:37.169749 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dzm9\" (UniqueName: \"kubernetes.io/projected/aa040abb-6524-4abd-834f-18b72a623d16-kube-api-access-2dzm9\") pod \"route-controller-manager-6576b87f9c-kwjhb\" (UID: \"aa040abb-6524-4abd-834f-18b72a623d16\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kwjhb" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.169775 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f3c7091-33a8-4be0-bb55-63300514c205-config\") pod \"authentication-operator-69f744f599-s72t6\" (UID: \"5f3c7091-33a8-4be0-bb55-63300514c205\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-s72t6" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.169827 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/144f3608-d338-4452-8bd9-a5fa47914090-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-sjt7j\" (UID: \"144f3608-d338-4452-8bd9-a5fa47914090\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-sjt7j" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.169858 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa040abb-6524-4abd-834f-18b72a623d16-config\") pod \"route-controller-manager-6576b87f9c-kwjhb\" (UID: \"aa040abb-6524-4abd-834f-18b72a623d16\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kwjhb" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.169884 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmkvx\" (UniqueName: 
\"kubernetes.io/projected/40778a89-0bd9-4b5d-a024-f2fec55bfa8f-kube-api-access-hmkvx\") pod \"openshift-config-operator-7777fb866f-gw6c8\" (UID: \"40778a89-0bd9-4b5d-a024-f2fec55bfa8f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gw6c8" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.169906 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0123bcb6-853a-4329-bceb-87a77cd34b27-etcd-client\") pod \"apiserver-7bbb656c7d-p7p7h\" (UID: \"0123bcb6-853a-4329-bceb-87a77cd34b27\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p7p7h" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.169932 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0123bcb6-853a-4329-bceb-87a77cd34b27-serving-cert\") pod \"apiserver-7bbb656c7d-p7p7h\" (UID: \"0123bcb6-853a-4329-bceb-87a77cd34b27\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p7p7h" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.169957 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/55949699-24bb-4705-8bf0-db1dd651d387-trusted-ca-bundle\") pod \"console-f9d7485db-htrhs\" (UID: \"55949699-24bb-4705-8bf0-db1dd651d387\") " pod="openshift-console/console-f9d7485db-htrhs" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.169976 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/55949699-24bb-4705-8bf0-db1dd651d387-oauth-serving-cert\") pod \"console-f9d7485db-htrhs\" (UID: \"55949699-24bb-4705-8bf0-db1dd651d387\") " pod="openshift-console/console-f9d7485db-htrhs" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.169997 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/6e474597-96ba-424e-967d-48c16424ef23-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-bfdjj\" (UID: \"6e474597-96ba-424e-967d-48c16424ef23\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bfdjj" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.170017 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85473ad4-f055-4531-a19e-30697cd51568-config\") pod \"machine-approver-56656f9798-bc855\" (UID: \"85473ad4-f055-4531-a19e-30697cd51568\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bc855" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.170024 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0123bcb6-853a-4329-bceb-87a77cd34b27-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-p7p7h\" (UID: \"0123bcb6-853a-4329-bceb-87a77cd34b27\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p7p7h" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.170111 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/914f7ecc-b403-4f7e-9a14-3f56a5a256a9-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-4scbk\" (UID: \"914f7ecc-b403-4f7e-9a14-3f56a5a256a9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4scbk" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.170144 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/5da2fff0-6264-4369-9c21-d322fa65c6b0-etcd-ca\") pod \"etcd-operator-b45778765-l8mm6\" (UID: \"5da2fff0-6264-4369-9c21-d322fa65c6b0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-l8mm6" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.170189 
4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/144f3608-d338-4452-8bd9-a5fa47914090-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-sjt7j\" (UID: \"144f3608-d338-4452-8bd9-a5fa47914090\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-sjt7j" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.170213 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/144f3608-d338-4452-8bd9-a5fa47914090-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-sjt7j\" (UID: \"144f3608-d338-4452-8bd9-a5fa47914090\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-sjt7j" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.170251 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e80651be-fbdb-464e-876a-c090e2fa0475-serving-cert\") pod \"console-operator-58897d9998-hs7k4\" (UID: \"e80651be-fbdb-464e-876a-c090e2fa0475\") " pod="openshift-console-operator/console-operator-58897d9998-hs7k4" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.170276 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vghv\" (UniqueName: \"kubernetes.io/projected/6e474597-96ba-424e-967d-48c16424ef23-kube-api-access-2vghv\") pod \"openshift-apiserver-operator-796bbdcf4f-bfdjj\" (UID: \"6e474597-96ba-424e-967d-48c16424ef23\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bfdjj" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.170302 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0123bcb6-853a-4329-bceb-87a77cd34b27-encryption-config\") pod 
\"apiserver-7bbb656c7d-p7p7h\" (UID: \"0123bcb6-853a-4329-bceb-87a77cd34b27\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p7p7h" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.170327 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5f3c7091-33a8-4be0-bb55-63300514c205-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-s72t6\" (UID: \"5f3c7091-33a8-4be0-bb55-63300514c205\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-s72t6" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.170352 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ssp5\" (UniqueName: \"kubernetes.io/projected/144f3608-d338-4452-8bd9-a5fa47914090-kube-api-access-5ssp5\") pod \"cluster-image-registry-operator-dc59b4c8b-sjt7j\" (UID: \"144f3608-d338-4452-8bd9-a5fa47914090\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-sjt7j" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.170385 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/914f7ecc-b403-4f7e-9a14-3f56a5a256a9-serving-cert\") pod \"controller-manager-879f6c89f-4scbk\" (UID: \"914f7ecc-b403-4f7e-9a14-3f56a5a256a9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4scbk" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.170416 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/40778a89-0bd9-4b5d-a024-f2fec55bfa8f-serving-cert\") pod \"openshift-config-operator-7777fb866f-gw6c8\" (UID: \"40778a89-0bd9-4b5d-a024-f2fec55bfa8f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gw6c8" Nov 29 07:08:37 crc kubenswrapper[4731]: 
I1129 07:08:37.170446 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5f3c7091-33a8-4be0-bb55-63300514c205-service-ca-bundle\") pod \"authentication-operator-69f744f599-s72t6\" (UID: \"5f3c7091-33a8-4be0-bb55-63300514c205\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-s72t6" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.170453 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/55949699-24bb-4705-8bf0-db1dd651d387-console-config\") pod \"console-f9d7485db-htrhs\" (UID: \"55949699-24bb-4705-8bf0-db1dd651d387\") " pod="openshift-console/console-f9d7485db-htrhs" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.170475 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0ca53b84-140e-4fbf-b822-03a1c73d04aa-config-volume\") pod \"dns-default-cn2xp\" (UID: \"0ca53b84-140e-4fbf-b822-03a1c73d04aa\") " pod="openshift-dns/dns-default-cn2xp" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.170898 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aa040abb-6524-4abd-834f-18b72a623d16-client-ca\") pod \"route-controller-manager-6576b87f9c-kwjhb\" (UID: \"aa040abb-6524-4abd-834f-18b72a623d16\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kwjhb" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.171286 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0123bcb6-853a-4329-bceb-87a77cd34b27-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-p7p7h\" (UID: \"0123bcb6-853a-4329-bceb-87a77cd34b27\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p7p7h" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.168870 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0123bcb6-853a-4329-bceb-87a77cd34b27-audit-dir\") pod \"apiserver-7bbb656c7d-p7p7h\" (UID: \"0123bcb6-853a-4329-bceb-87a77cd34b27\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p7p7h" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.169019 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-4dbfk"] Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.166921 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.172691 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fa61345-b935-4924-a05b-58a9ec104f07-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-dcvjn\" (UID: \"4fa61345-b935-4924-a05b-58a9ec104f07\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dcvjn" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.173593 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e80651be-fbdb-464e-876a-c090e2fa0475-config\") pod \"console-operator-58897d9998-hs7k4\" (UID: \"e80651be-fbdb-464e-876a-c090e2fa0475\") " pod="openshift-console-operator/console-operator-58897d9998-hs7k4" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.173672 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/5da2fff0-6264-4369-9c21-d322fa65c6b0-etcd-service-ca\") pod \"etcd-operator-b45778765-l8mm6\" (UID: 
\"5da2fff0-6264-4369-9c21-d322fa65c6b0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-l8mm6" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.173705 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5f3c7091-33a8-4be0-bb55-63300514c205-serving-cert\") pod \"authentication-operator-69f744f599-s72t6\" (UID: \"5f3c7091-33a8-4be0-bb55-63300514c205\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-s72t6" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.173736 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vm98c\" (UniqueName: \"kubernetes.io/projected/914f7ecc-b403-4f7e-9a14-3f56a5a256a9-kube-api-access-vm98c\") pod \"controller-manager-879f6c89f-4scbk\" (UID: \"914f7ecc-b403-4f7e-9a14-3f56a5a256a9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4scbk" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.173765 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5da2fff0-6264-4369-9c21-d322fa65c6b0-config\") pod \"etcd-operator-b45778765-l8mm6\" (UID: \"5da2fff0-6264-4369-9c21-d322fa65c6b0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-l8mm6" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.173796 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ltrf\" (UniqueName: \"kubernetes.io/projected/5da2fff0-6264-4369-9c21-d322fa65c6b0-kube-api-access-2ltrf\") pod \"etcd-operator-b45778765-l8mm6\" (UID: \"5da2fff0-6264-4369-9c21-d322fa65c6b0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-l8mm6" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.173821 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"service-ca\" (UniqueName: \"kubernetes.io/configmap/55949699-24bb-4705-8bf0-db1dd651d387-service-ca\") pod \"console-f9d7485db-htrhs\" (UID: \"55949699-24bb-4705-8bf0-db1dd651d387\") " pod="openshift-console/console-f9d7485db-htrhs" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.173837 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e80651be-fbdb-464e-876a-c090e2fa0475-trusted-ca\") pod \"console-operator-58897d9998-hs7k4\" (UID: \"e80651be-fbdb-464e-876a-c090e2fa0475\") " pod="openshift-console-operator/console-operator-58897d9998-hs7k4" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.173857 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/85473ad4-f055-4531-a19e-30697cd51568-auth-proxy-config\") pod \"machine-approver-56656f9798-bc855\" (UID: \"85473ad4-f055-4531-a19e-30697cd51568\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bc855" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.174614 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa040abb-6524-4abd-834f-18b72a623d16-config\") pod \"route-controller-manager-6576b87f9c-kwjhb\" (UID: \"aa040abb-6524-4abd-834f-18b72a623d16\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kwjhb" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.175034 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-ph5pw"] Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.175355 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e474597-96ba-424e-967d-48c16424ef23-config\") pod \"openshift-apiserver-operator-796bbdcf4f-bfdjj\" (UID: 
\"6e474597-96ba-424e-967d-48c16424ef23\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bfdjj" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.175767 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/55949699-24bb-4705-8bf0-db1dd651d387-trusted-ca-bundle\") pod \"console-f9d7485db-htrhs\" (UID: \"55949699-24bb-4705-8bf0-db1dd651d387\") " pod="openshift-console/console-f9d7485db-htrhs" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.176152 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sxxn4"] Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.176523 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/55949699-24bb-4705-8bf0-db1dd651d387-oauth-serving-cert\") pod \"console-f9d7485db-htrhs\" (UID: \"55949699-24bb-4705-8bf0-db1dd651d387\") " pod="openshift-console/console-f9d7485db-htrhs" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.177312 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e80651be-fbdb-464e-876a-c090e2fa0475-config\") pod \"console-operator-58897d9998-hs7k4\" (UID: \"e80651be-fbdb-464e-876a-c090e2fa0475\") " pod="openshift-console-operator/console-operator-58897d9998-hs7k4" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.177373 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0123bcb6-853a-4329-bceb-87a77cd34b27-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-p7p7h\" (UID: \"0123bcb6-853a-4329-bceb-87a77cd34b27\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p7p7h" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.177778 4731 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/85473ad4-f055-4531-a19e-30697cd51568-auth-proxy-config\") pod \"machine-approver-56656f9798-bc855\" (UID: \"85473ad4-f055-4531-a19e-30697cd51568\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bc855" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.177922 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4dbfk" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.167093 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.178516 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.167157 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.167213 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.167410 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.167460 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.178930 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.167517 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 
07:08:37.169050 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.175797 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.176071 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.179309 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0123bcb6-853a-4329-bceb-87a77cd34b27-audit-policies\") pod \"apiserver-7bbb656c7d-p7p7h\" (UID: \"0123bcb6-853a-4329-bceb-87a77cd34b27\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p7p7h" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.180504 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k9k58"] Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.182582 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e80651be-fbdb-464e-876a-c090e2fa0475-trusted-ca\") pod \"console-operator-58897d9998-hs7k4\" (UID: \"e80651be-fbdb-464e-876a-c090e2fa0475\") " pod="openshift-console-operator/console-operator-58897d9998-hs7k4" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.180781 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sxxn4" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.180866 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ph5pw" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.182342 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e474597-96ba-424e-967d-48c16424ef23-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-bfdjj\" (UID: \"6e474597-96ba-424e-967d-48c16424ef23\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bfdjj" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.182795 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/55949699-24bb-4705-8bf0-db1dd651d387-console-oauth-config\") pod \"console-f9d7485db-htrhs\" (UID: \"55949699-24bb-4705-8bf0-db1dd651d387\") " pod="openshift-console/console-f9d7485db-htrhs" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.183203 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0123bcb6-853a-4329-bceb-87a77cd34b27-serving-cert\") pod \"apiserver-7bbb656c7d-p7p7h\" (UID: \"0123bcb6-853a-4329-bceb-87a77cd34b27\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p7p7h" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.184438 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8w2nm"] Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.185286 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k9k58" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.185772 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-wrp7q"] Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.187339 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-wrp7q" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.188414 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-2qd7z"] Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.189960 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-2qd7z" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.191233 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-4zskc"] Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.192771 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8w2nm" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.194652 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-d88tr"] Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.194987 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4zskc" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.195881 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.196090 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.196487 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.196831 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.197429 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.199026 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-g9mjr"] Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.199649 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wcj4m"] Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.199926 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.200139 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4j6z7"] Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.200500 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-dns-operator/dns-operator-744455d44c-n5tn2"] Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.200522 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.200681 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4j6z7" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.200934 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-d88tr" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.200980 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.201261 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.201407 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-g9mjr" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.201657 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wcj4m" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.201687 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-l8mm6"] Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.201416 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.202222 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aa040abb-6524-4abd-834f-18b72a623d16-serving-cert\") pod \"route-controller-manager-6576b87f9c-kwjhb\" (UID: \"aa040abb-6524-4abd-834f-18b72a623d16\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kwjhb" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.202493 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.202658 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.202773 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.203061 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.203362 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.203613 4731 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.203837 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.204054 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.204206 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.204334 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.204414 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.204375 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.204556 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.204629 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.205005 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4t7gr"] Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.205044 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Nov 29 07:08:37 crc 
kubenswrapper[4731]: I1129 07:08:37.205210 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.205237 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0123bcb6-853a-4329-bceb-87a77cd34b27-etcd-client\") pod \"apiserver-7bbb656c7d-p7p7h\" (UID: \"0123bcb6-853a-4329-bceb-87a77cd34b27\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p7p7h" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.204976 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qns9t\" (UniqueName: \"kubernetes.io/projected/0123bcb6-853a-4329-bceb-87a77cd34b27-kube-api-access-qns9t\") pod \"apiserver-7bbb656c7d-p7p7h\" (UID: \"0123bcb6-853a-4329-bceb-87a77cd34b27\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p7p7h" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.205828 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.205854 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4t7gr" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.206005 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.206062 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-t7s8k"] Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.206937 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-t7s8k" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.206967 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dzwrr"] Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.208074 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dzwrr" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.208808 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e80651be-fbdb-464e-876a-c090e2fa0475-serving-cert\") pod \"console-operator-58897d9998-hs7k4\" (UID: \"e80651be-fbdb-464e-876a-c090e2fa0475\") " pod="openshift-console-operator/console-operator-58897d9998-hs7k4" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.208917 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/55949699-24bb-4705-8bf0-db1dd651d387-service-ca\") pod \"console-f9d7485db-htrhs\" (UID: \"55949699-24bb-4705-8bf0-db1dd651d387\") " pod="openshift-console/console-f9d7485db-htrhs" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.209081 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-5tpdm"] Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.209483 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.209974 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-5tpdm"
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.210218 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-s2twr"]
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.211389 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-s2twr"
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.211735 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwfkt\" (UniqueName: \"kubernetes.io/projected/55949699-24bb-4705-8bf0-db1dd651d387-kube-api-access-fwfkt\") pod \"console-f9d7485db-htrhs\" (UID: \"55949699-24bb-4705-8bf0-db1dd651d387\") " pod="openshift-console/console-f9d7485db-htrhs"
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.211814 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406660-2pc6s"]
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.212871 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406660-2pc6s"
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.213283 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2qgzh"]
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.213754 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.213899 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vghv\" (UniqueName: \"kubernetes.io/projected/6e474597-96ba-424e-967d-48c16424ef23-kube-api-access-2vghv\") pod \"openshift-apiserver-operator-796bbdcf4f-bfdjj\" (UID: \"6e474597-96ba-424e-967d-48c16424ef23\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bfdjj"
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.214631 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-2qgzh"
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.214867 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhdwc\" (UniqueName: \"kubernetes.io/projected/4fa61345-b935-4924-a05b-58a9ec104f07-kube-api-access-zhdwc\") pod \"openshift-controller-manager-operator-756b6f6bc6-dcvjn\" (UID: \"4fa61345-b935-4924-a05b-58a9ec104f07\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dcvjn"
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.215196 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlzdl\" (UniqueName: \"kubernetes.io/projected/e80651be-fbdb-464e-876a-c090e2fa0475-kube-api-access-vlzdl\") pod \"console-operator-58897d9998-hs7k4\" (UID: \"e80651be-fbdb-464e-876a-c090e2fa0475\") " pod="openshift-console-operator/console-operator-58897d9998-hs7k4"
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.214636 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-wbfmm"]
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.215865 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-wbfmm"
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.216779 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0123bcb6-853a-4329-bceb-87a77cd34b27-encryption-config\") pod \"apiserver-7bbb656c7d-p7p7h\" (UID: \"0123bcb6-853a-4329-bceb-87a77cd34b27\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p7p7h"
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.216845 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4fa61345-b935-4924-a05b-58a9ec104f07-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-dcvjn\" (UID: \"4fa61345-b935-4924-a05b-58a9ec104f07\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dcvjn"
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.219373 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dzm9\" (UniqueName: \"kubernetes.io/projected/aa040abb-6524-4abd-834f-18b72a623d16-kube-api-access-2dzm9\") pod \"route-controller-manager-6576b87f9c-kwjhb\" (UID: \"aa040abb-6524-4abd-834f-18b72a623d16\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kwjhb"
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.220431 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/85473ad4-f055-4531-a19e-30697cd51568-machine-approver-tls\") pod \"machine-approver-56656f9798-bc855\" (UID: \"85473ad4-f055-4531-a19e-30697cd51568\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bc855"
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.226626 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/55949699-24bb-4705-8bf0-db1dd651d387-console-serving-cert\") pod \"console-f9d7485db-htrhs\" (UID: \"55949699-24bb-4705-8bf0-db1dd651d387\") " pod="openshift-console/console-f9d7485db-htrhs"
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.230769 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6wv5\" (UniqueName: \"kubernetes.io/projected/85473ad4-f055-4531-a19e-30697cd51568-kube-api-access-w6wv5\") pod \"machine-approver-56656f9798-bc855\" (UID: \"85473ad4-f055-4531-a19e-30697cd51568\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bc855"
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.230773 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-nn8lp"]
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.237190 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-4dbfk"]
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.237356 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dm9bz"]
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.238277 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-nn8lp"
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.240869 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-ph5pw"]
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.241051 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dm9bz"
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.242946 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-8nrfn"]
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.245054 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kwjhb"
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.246168 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.249124 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-4zskc"]
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.252371 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bfdjj"
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.252982 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-wrp7q"]
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.253036 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.258581 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k9k58"]
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.260396 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-s2twr"]
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.261792 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-g9mjr"]
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.264179 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sxxn4"]
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.266658 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-t7s8k"]
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.266932 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406660-2pc6s"]
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.268102 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8w2nm"]
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.270311 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bc855"
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.271618 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wcj4m"]
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.272110 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-d88tr"]
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.274239 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.274852 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-cn2xp"]
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.274950 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/40778a89-0bd9-4b5d-a024-f2fec55bfa8f-available-featuregates\") pod \"openshift-config-operator-7777fb866f-gw6c8\" (UID: \"40778a89-0bd9-4b5d-a024-f2fec55bfa8f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gw6c8"
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.275026 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctvdn\" (UniqueName: \"kubernetes.io/projected/1235c0c3-c8ad-4cff-80f0-56bfdfa8dabe-kube-api-access-ctvdn\") pod \"apiserver-76f77b778f-m7s4c\" (UID: \"1235c0c3-c8ad-4cff-80f0-56bfdfa8dabe\") " pod="openshift-apiserver/apiserver-76f77b778f-m7s4c"
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.275075 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5da2fff0-6264-4369-9c21-d322fa65c6b0-serving-cert\") pod \"etcd-operator-b45778765-l8mm6\" (UID: \"5da2fff0-6264-4369-9c21-d322fa65c6b0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-l8mm6"
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.275107 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b99f2\" (UniqueName: \"kubernetes.io/projected/5f3c7091-33a8-4be0-bb55-63300514c205-kube-api-access-b99f2\") pod \"authentication-operator-69f744f599-s72t6\" (UID: \"5f3c7091-33a8-4be0-bb55-63300514c205\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-s72t6"
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.275141 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d639491c-0fbd-44a6-b273-37dcc1e5681d-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-qg27s\" (UID: \"d639491c-0fbd-44a6-b273-37dcc1e5681d\") " pod="openshift-authentication/oauth-openshift-558db77b4-qg27s"
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.275173 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1235c0c3-c8ad-4cff-80f0-56bfdfa8dabe-audit-dir\") pod \"apiserver-76f77b778f-m7s4c\" (UID: \"1235c0c3-c8ad-4cff-80f0-56bfdfa8dabe\") " pod="openshift-apiserver/apiserver-76f77b778f-m7s4c"
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.275221 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/cf2cdf59-237b-432e-9e41-c37078755275-installation-pull-secrets\") pod \"image-registry-697d97f7c8-8nrfn\" (UID: \"cf2cdf59-237b-432e-9e41-c37078755275\") " pod="openshift-image-registry/image-registry-697d97f7c8-8nrfn"
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.275251 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/5e911d5c-fa21-47e2-9ab8-12f919978585-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-qdch2\" (UID: \"5e911d5c-fa21-47e2-9ab8-12f919978585\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qdch2"
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.275276 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1235c0c3-c8ad-4cff-80f0-56bfdfa8dabe-etcd-client\") pod \"apiserver-76f77b778f-m7s4c\" (UID: \"1235c0c3-c8ad-4cff-80f0-56bfdfa8dabe\") " pod="openshift-apiserver/apiserver-76f77b778f-m7s4c"
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.276263 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1235c0c3-c8ad-4cff-80f0-56bfdfa8dabe-encryption-config\") pod \"apiserver-76f77b778f-m7s4c\" (UID: \"1235c0c3-c8ad-4cff-80f0-56bfdfa8dabe\") " pod="openshift-apiserver/apiserver-76f77b778f-m7s4c"
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.276304 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d639491c-0fbd-44a6-b273-37dcc1e5681d-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-qg27s\" (UID: \"d639491c-0fbd-44a6-b273-37dcc1e5681d\") " pod="openshift-authentication/oauth-openshift-558db77b4-qg27s"
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.276334 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6x8wg\" (UniqueName: \"kubernetes.io/projected/d639491c-0fbd-44a6-b273-37dcc1e5681d-kube-api-access-6x8wg\") pod \"oauth-openshift-558db77b4-qg27s\" (UID: \"d639491c-0fbd-44a6-b273-37dcc1e5681d\") " pod="openshift-authentication/oauth-openshift-558db77b4-qg27s"
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.276363 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d639491c-0fbd-44a6-b273-37dcc1e5681d-audit-dir\") pod \"oauth-openshift-558db77b4-qg27s\" (UID: \"d639491c-0fbd-44a6-b273-37dcc1e5681d\") " pod="openshift-authentication/oauth-openshift-558db77b4-qg27s"
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.276411 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lf5w7\" (UniqueName: \"kubernetes.io/projected/5e911d5c-fa21-47e2-9ab8-12f919978585-kube-api-access-lf5w7\") pod \"cluster-samples-operator-665b6dd947-qdch2\" (UID: \"5e911d5c-fa21-47e2-9ab8-12f919978585\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qdch2"
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.276641 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/144f3608-d338-4452-8bd9-a5fa47914090-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-sjt7j\" (UID: \"144f3608-d338-4452-8bd9-a5fa47914090\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-sjt7j"
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.276678 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjbw6\" (UniqueName: \"kubernetes.io/projected/cf2cdf59-237b-432e-9e41-c37078755275-kube-api-access-qjbw6\") pod \"image-registry-697d97f7c8-8nrfn\" (UID: \"cf2cdf59-237b-432e-9e41-c37078755275\") " pod="openshift-image-registry/image-registry-697d97f7c8-8nrfn"
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.276706 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1235c0c3-c8ad-4cff-80f0-56bfdfa8dabe-node-pullsecrets\") pod \"apiserver-76f77b778f-m7s4c\" (UID: \"1235c0c3-c8ad-4cff-80f0-56bfdfa8dabe\") " pod="openshift-apiserver/apiserver-76f77b778f-m7s4c"
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.276742 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f3c7091-33a8-4be0-bb55-63300514c205-config\") pod \"authentication-operator-69f744f599-s72t6\" (UID: \"5f3c7091-33a8-4be0-bb55-63300514c205\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-s72t6"
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.275903 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/40778a89-0bd9-4b5d-a024-f2fec55bfa8f-available-featuregates\") pod \"openshift-config-operator-7777fb866f-gw6c8\" (UID: \"40778a89-0bd9-4b5d-a024-f2fec55bfa8f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gw6c8"
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.276804 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmkvx\" (UniqueName: \"kubernetes.io/projected/40778a89-0bd9-4b5d-a024-f2fec55bfa8f-kube-api-access-hmkvx\") pod \"openshift-config-operator-7777fb866f-gw6c8\" (UID: \"40778a89-0bd9-4b5d-a024-f2fec55bfa8f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gw6c8"
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.276837 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ths8\" (UniqueName: \"kubernetes.io/projected/62d5acae-8dde-41c0-bbf4-66d294b8b64b-kube-api-access-9ths8\") pod \"dns-operator-744455d44c-n5tn2\" (UID: \"62d5acae-8dde-41c0-bbf4-66d294b8b64b\") " pod="openshift-dns-operator/dns-operator-744455d44c-n5tn2"
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.276913 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/914f7ecc-b403-4f7e-9a14-3f56a5a256a9-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-4scbk\" (UID: \"914f7ecc-b403-4f7e-9a14-3f56a5a256a9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4scbk"
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.277163 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d639491c-0fbd-44a6-b273-37dcc1e5681d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-qg27s\" (UID: \"d639491c-0fbd-44a6-b273-37dcc1e5681d\") " pod="openshift-authentication/oauth-openshift-558db77b4-qg27s"
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.277202 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/cf2cdf59-237b-432e-9e41-c37078755275-ca-trust-extracted\") pod \"image-registry-697d97f7c8-8nrfn\" (UID: \"cf2cdf59-237b-432e-9e41-c37078755275\") " pod="openshift-image-registry/image-registry-697d97f7c8-8nrfn"
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.277708 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4t7gr"]
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.278039 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f3c7091-33a8-4be0-bb55-63300514c205-config\") pod \"authentication-operator-69f744f599-s72t6\" (UID: \"5f3c7091-33a8-4be0-bb55-63300514c205\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-s72t6"
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.278282 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/5da2fff0-6264-4369-9c21-d322fa65c6b0-etcd-ca\") pod \"etcd-operator-b45778765-l8mm6\" (UID: \"5da2fff0-6264-4369-9c21-d322fa65c6b0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-l8mm6"
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.278333 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec651e57-2be1-4076-93f5-bcfa036b4624-config\") pod \"machine-api-operator-5694c8668f-xstx4\" (UID: \"ec651e57-2be1-4076-93f5-bcfa036b4624\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xstx4"
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.278404 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/ec651e57-2be1-4076-93f5-bcfa036b4624-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-xstx4\" (UID: \"ec651e57-2be1-4076-93f5-bcfa036b4624\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xstx4"
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.278438 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1235c0c3-c8ad-4cff-80f0-56bfdfa8dabe-etcd-serving-ca\") pod \"apiserver-76f77b778f-m7s4c\" (UID: \"1235c0c3-c8ad-4cff-80f0-56bfdfa8dabe\") " pod="openshift-apiserver/apiserver-76f77b778f-m7s4c"
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.278490 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/144f3608-d338-4452-8bd9-a5fa47914090-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-sjt7j\" (UID: \"144f3608-d338-4452-8bd9-a5fa47914090\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-sjt7j"
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.278516 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/144f3608-d338-4452-8bd9-a5fa47914090-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-sjt7j\" (UID: \"144f3608-d338-4452-8bd9-a5fa47914090\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-sjt7j"
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.278916 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-wbfmm"]
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.279024 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/cf2cdf59-237b-432e-9e41-c37078755275-registry-tls\") pod \"image-registry-697d97f7c8-8nrfn\" (UID: \"cf2cdf59-237b-432e-9e41-c37078755275\") " pod="openshift-image-registry/image-registry-697d97f7c8-8nrfn"
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.279090 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d639491c-0fbd-44a6-b273-37dcc1e5681d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-qg27s\" (UID: \"d639491c-0fbd-44a6-b273-37dcc1e5681d\") " pod="openshift-authentication/oauth-openshift-558db77b4-qg27s"
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.279120 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/cf2cdf59-237b-432e-9e41-c37078755275-registry-certificates\") pod \"image-registry-697d97f7c8-8nrfn\" (UID: \"cf2cdf59-237b-432e-9e41-c37078755275\") " pod="openshift-image-registry/image-registry-697d97f7c8-8nrfn"
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.279295 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/5da2fff0-6264-4369-9c21-d322fa65c6b0-etcd-ca\") pod \"etcd-operator-b45778765-l8mm6\" (UID: \"5da2fff0-6264-4369-9c21-d322fa65c6b0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-l8mm6"
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.279302 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ec651e57-2be1-4076-93f5-bcfa036b4624-images\") pod \"machine-api-operator-5694c8668f-xstx4\" (UID: \"ec651e57-2be1-4076-93f5-bcfa036b4624\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xstx4"
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.279384 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5f3c7091-33a8-4be0-bb55-63300514c205-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-s72t6\" (UID: \"5f3c7091-33a8-4be0-bb55-63300514c205\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-s72t6"
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.279410 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ssp5\" (UniqueName: \"kubernetes.io/projected/144f3608-d338-4452-8bd9-a5fa47914090-kube-api-access-5ssp5\") pod \"cluster-image-registry-operator-dc59b4c8b-sjt7j\" (UID: \"144f3608-d338-4452-8bd9-a5fa47914090\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-sjt7j"
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.279438 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/40778a89-0bd9-4b5d-a024-f2fec55bfa8f-serving-cert\") pod \"openshift-config-operator-7777fb866f-gw6c8\" (UID: \"40778a89-0bd9-4b5d-a024-f2fec55bfa8f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gw6c8"
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.279459 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d639491c-0fbd-44a6-b273-37dcc1e5681d-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-qg27s\" (UID: \"d639491c-0fbd-44a6-b273-37dcc1e5681d\") " pod="openshift-authentication/oauth-openshift-558db77b4-qg27s"
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.279488 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/914f7ecc-b403-4f7e-9a14-3f56a5a256a9-serving-cert\") pod \"controller-manager-879f6c89f-4scbk\" (UID: \"914f7ecc-b403-4f7e-9a14-3f56a5a256a9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4scbk"
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.279514 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5f3c7091-33a8-4be0-bb55-63300514c205-service-ca-bundle\") pod \"authentication-operator-69f744f599-s72t6\" (UID: \"5f3c7091-33a8-4be0-bb55-63300514c205\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-s72t6"
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.279542 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0ca53b84-140e-4fbf-b822-03a1c73d04aa-config-volume\") pod \"dns-default-cn2xp\" (UID: \"0ca53b84-140e-4fbf-b822-03a1c73d04aa\") " pod="openshift-dns/dns-default-cn2xp"
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.279590 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cf2cdf59-237b-432e-9e41-c37078755275-trusted-ca\") pod \"image-registry-697d97f7c8-8nrfn\" (UID: \"cf2cdf59-237b-432e-9e41-c37078755275\") " pod="openshift-image-registry/image-registry-697d97f7c8-8nrfn"
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.279620 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1235c0c3-c8ad-4cff-80f0-56bfdfa8dabe-trusted-ca-bundle\") pod \"apiserver-76f77b778f-m7s4c\" (UID: \"1235c0c3-c8ad-4cff-80f0-56bfdfa8dabe\") " pod="openshift-apiserver/apiserver-76f77b778f-m7s4c"
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.279701 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d639491c-0fbd-44a6-b273-37dcc1e5681d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-qg27s\" (UID: \"d639491c-0fbd-44a6-b273-37dcc1e5681d\") " pod="openshift-authentication/oauth-openshift-558db77b4-qg27s"
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.280084 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vm98c\" (UniqueName: \"kubernetes.io/projected/914f7ecc-b403-4f7e-9a14-3f56a5a256a9-kube-api-access-vm98c\") pod \"controller-manager-879f6c89f-4scbk\" (UID: \"914f7ecc-b403-4f7e-9a14-3f56a5a256a9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4scbk"
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.280123 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/5da2fff0-6264-4369-9c21-d322fa65c6b0-etcd-service-ca\") pod \"etcd-operator-b45778765-l8mm6\" (UID: \"5da2fff0-6264-4369-9c21-d322fa65c6b0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-l8mm6"
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.280162 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/144f3608-d338-4452-8bd9-a5fa47914090-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-sjt7j\" (UID: \"144f3608-d338-4452-8bd9-a5fa47914090\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-sjt7j"
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.280173 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d639491c-0fbd-44a6-b273-37dcc1e5681d-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-qg27s\" (UID: \"d639491c-0fbd-44a6-b273-37dcc1e5681d\") " pod="openshift-authentication/oauth-openshift-558db77b4-qg27s"
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.280229 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5f3c7091-33a8-4be0-bb55-63300514c205-serving-cert\") pod \"authentication-operator-69f744f599-s72t6\" (UID: \"5f3c7091-33a8-4be0-bb55-63300514c205\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-s72t6"
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.280258 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ltrf\" (UniqueName: \"kubernetes.io/projected/5da2fff0-6264-4369-9c21-d322fa65c6b0-kube-api-access-2ltrf\") pod \"etcd-operator-b45778765-l8mm6\" (UID: \"5da2fff0-6264-4369-9c21-d322fa65c6b0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-l8mm6"
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.280329 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wz6m2\" (UniqueName: \"kubernetes.io/projected/4059535c-148b-4694-8c6f-ee8aae8ddc18-kube-api-access-wz6m2\") pod \"downloads-7954f5f757-c6kf9\" (UID: \"4059535c-148b-4694-8c6f-ee8aae8ddc18\") " pod="openshift-console/downloads-7954f5f757-c6kf9"
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.280385 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5da2fff0-6264-4369-9c21-d322fa65c6b0-config\") pod \"etcd-operator-b45778765-l8mm6\" (UID: \"5da2fff0-6264-4369-9c21-d322fa65c6b0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-l8mm6"
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.281173 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dzwrr"]
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.281722 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/62d5acae-8dde-41c0-bbf4-66d294b8b64b-metrics-tls\") pod \"dns-operator-744455d44c-n5tn2\" (UID: \"62d5acae-8dde-41c0-bbf4-66d294b8b64b\") " pod="openshift-dns-operator/dns-operator-744455d44c-n5tn2"
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.281817 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cf2cdf59-237b-432e-9e41-c37078755275-bound-sa-token\") pod \"image-registry-697d97f7c8-8nrfn\" (UID: \"cf2cdf59-237b-432e-9e41-c37078755275\") " pod="openshift-image-registry/image-registry-697d97f7c8-8nrfn"
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.281875 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/914f7ecc-b403-4f7e-9a14-3f56a5a256a9-client-ca\") pod \"controller-manager-879f6c89f-4scbk\" (UID: \"914f7ecc-b403-4f7e-9a14-3f56a5a256a9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4scbk"
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.281918 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pv9sj\" (UniqueName: \"kubernetes.io/projected/0ca53b84-140e-4fbf-b822-03a1c73d04aa-kube-api-access-pv9sj\") pod \"dns-default-cn2xp\" (UID: \"0ca53b84-140e-4fbf-b822-03a1c73d04aa\") " pod="openshift-dns/dns-default-cn2xp"
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.281952 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5da2fff0-6264-4369-9c21-d322fa65c6b0-etcd-client\") pod \"etcd-operator-b45778765-l8mm6\" (UID: \"5da2fff0-6264-4369-9c21-d322fa65c6b0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-l8mm6"
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.281993 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8nrfn\" (UID: \"cf2cdf59-237b-432e-9e41-c37078755275\") " pod="openshift-image-registry/image-registry-697d97f7c8-8nrfn"
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.282217 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1235c0c3-c8ad-4cff-80f0-56bfdfa8dabe-config\") pod \"apiserver-76f77b778f-m7s4c\" (UID: \"1235c0c3-c8ad-4cff-80f0-56bfdfa8dabe\") " pod="openshift-apiserver/apiserver-76f77b778f-m7s4c"
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.282263 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1235c0c3-c8ad-4cff-80f0-56bfdfa8dabe-serving-cert\") pod \"apiserver-76f77b778f-m7s4c\" (UID: \"1235c0c3-c8ad-4cff-80f0-56bfdfa8dabe\") " pod="openshift-apiserver/apiserver-76f77b778f-m7s4c"
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.282303 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d639491c-0fbd-44a6-b273-37dcc1e5681d-audit-policies\") pod \"oauth-openshift-558db77b4-qg27s\" (UID: \"d639491c-0fbd-44a6-b273-37dcc1e5681d\") " pod="openshift-authentication/oauth-openshift-558db77b4-qg27s"
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.282334 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d639491c-0fbd-44a6-b273-37dcc1e5681d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-qg27s\" (UID: \"d639491c-0fbd-44a6-b273-37dcc1e5681d\") " pod="openshift-authentication/oauth-openshift-558db77b4-qg27s"
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.282712 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dm9bz"]
Nov 29 07:08:37 crc kubenswrapper[4731]: E1129 07:08:37.282737 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:08:37.782715656 +0000 UTC m=+156.673076759 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8nrfn" (UID: "cf2cdf59-237b-432e-9e41-c37078755275") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.283962 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2qgzh"] Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.283987 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/5da2fff0-6264-4369-9c21-d322fa65c6b0-etcd-service-ca\") pod \"etcd-operator-b45778765-l8mm6\" (UID: \"5da2fff0-6264-4369-9c21-d322fa65c6b0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-l8mm6" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.285127 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5da2fff0-6264-4369-9c21-d322fa65c6b0-serving-cert\") pod \"etcd-operator-b45778765-l8mm6\" (UID: \"5da2fff0-6264-4369-9c21-d322fa65c6b0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-l8mm6" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.285816 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/914f7ecc-b403-4f7e-9a14-3f56a5a256a9-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-4scbk\" (UID: \"914f7ecc-b403-4f7e-9a14-3f56a5a256a9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4scbk" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.285893 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/144f3608-d338-4452-8bd9-a5fa47914090-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-sjt7j\" (UID: \"144f3608-d338-4452-8bd9-a5fa47914090\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-sjt7j" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.285891 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/40778a89-0bd9-4b5d-a024-f2fec55bfa8f-serving-cert\") pod \"openshift-config-operator-7777fb866f-gw6c8\" (UID: \"40778a89-0bd9-4b5d-a024-f2fec55bfa8f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gw6c8" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.286175 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0ca53b84-140e-4fbf-b822-03a1c73d04aa-metrics-tls\") pod \"dns-default-cn2xp\" (UID: \"0ca53b84-140e-4fbf-b822-03a1c73d04aa\") " pod="openshift-dns/dns-default-cn2xp" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.286210 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1235c0c3-c8ad-4cff-80f0-56bfdfa8dabe-audit\") pod \"apiserver-76f77b778f-m7s4c\" (UID: \"1235c0c3-c8ad-4cff-80f0-56bfdfa8dabe\") " pod="openshift-apiserver/apiserver-76f77b778f-m7s4c" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.286261 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/914f7ecc-b403-4f7e-9a14-3f56a5a256a9-config\") pod \"controller-manager-879f6c89f-4scbk\" (UID: \"914f7ecc-b403-4f7e-9a14-3f56a5a256a9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4scbk" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.286283 4731 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d639491c-0fbd-44a6-b273-37dcc1e5681d-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-qg27s\" (UID: \"d639491c-0fbd-44a6-b273-37dcc1e5681d\") " pod="openshift-authentication/oauth-openshift-558db77b4-qg27s" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.286309 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d639491c-0fbd-44a6-b273-37dcc1e5681d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-qg27s\" (UID: \"d639491c-0fbd-44a6-b273-37dcc1e5681d\") " pod="openshift-authentication/oauth-openshift-558db77b4-qg27s" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.286793 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0ca53b84-140e-4fbf-b822-03a1c73d04aa-config-volume\") pod \"dns-default-cn2xp\" (UID: \"0ca53b84-140e-4fbf-b822-03a1c73d04aa\") " pod="openshift-dns/dns-default-cn2xp" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.286899 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5f3c7091-33a8-4be0-bb55-63300514c205-service-ca-bundle\") pod \"authentication-operator-69f744f599-s72t6\" (UID: \"5f3c7091-33a8-4be0-bb55-63300514c205\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-s72t6" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.286930 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d639491c-0fbd-44a6-b273-37dcc1e5681d-v4-0-config-user-template-provider-selection\") pod 
\"oauth-openshift-558db77b4-qg27s\" (UID: \"d639491c-0fbd-44a6-b273-37dcc1e5681d\") " pod="openshift-authentication/oauth-openshift-558db77b4-qg27s" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.286965 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzdck\" (UniqueName: \"kubernetes.io/projected/ec651e57-2be1-4076-93f5-bcfa036b4624-kube-api-access-tzdck\") pod \"machine-api-operator-5694c8668f-xstx4\" (UID: \"ec651e57-2be1-4076-93f5-bcfa036b4624\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xstx4" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.287001 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1235c0c3-c8ad-4cff-80f0-56bfdfa8dabe-image-import-ca\") pod \"apiserver-76f77b778f-m7s4c\" (UID: \"1235c0c3-c8ad-4cff-80f0-56bfdfa8dabe\") " pod="openshift-apiserver/apiserver-76f77b778f-m7s4c" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.287396 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5da2fff0-6264-4369-9c21-d322fa65c6b0-config\") pod \"etcd-operator-b45778765-l8mm6\" (UID: \"5da2fff0-6264-4369-9c21-d322fa65c6b0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-l8mm6" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.287747 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p7p7h" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.289821 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4j6z7"] Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.290696 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/914f7ecc-b403-4f7e-9a14-3f56a5a256a9-serving-cert\") pod \"controller-manager-879f6c89f-4scbk\" (UID: \"914f7ecc-b403-4f7e-9a14-3f56a5a256a9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4scbk" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.290868 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5f3c7091-33a8-4be0-bb55-63300514c205-serving-cert\") pod \"authentication-operator-69f744f599-s72t6\" (UID: \"5f3c7091-33a8-4be0-bb55-63300514c205\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-s72t6" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.291593 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5f3c7091-33a8-4be0-bb55-63300514c205-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-s72t6\" (UID: \"5f3c7091-33a8-4be0-bb55-63300514c205\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-s72t6" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.291755 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-nn8lp"] Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.292350 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0ca53b84-140e-4fbf-b822-03a1c73d04aa-metrics-tls\") pod \"dns-default-cn2xp\" (UID: 
\"0ca53b84-140e-4fbf-b822-03a1c73d04aa\") " pod="openshift-dns/dns-default-cn2xp" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.293492 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/914f7ecc-b403-4f7e-9a14-3f56a5a256a9-client-ca\") pod \"controller-manager-879f6c89f-4scbk\" (UID: \"914f7ecc-b403-4f7e-9a14-3f56a5a256a9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4scbk" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.293659 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Nov 29 07:08:37 crc kubenswrapper[4731]: W1129 07:08:37.296263 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod85473ad4_f055_4531_a19e_30697cd51568.slice/crio-6c331808ecb9f56f853d2a721ac232f073b91f7599c613da70cf7dfbb98c9a09 WatchSource:0}: Error finding container 6c331808ecb9f56f853d2a721ac232f073b91f7599c613da70cf7dfbb98c9a09: Status 404 returned error can't find the container with id 6c331808ecb9f56f853d2a721ac232f073b91f7599c613da70cf7dfbb98c9a09 Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.296796 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/914f7ecc-b403-4f7e-9a14-3f56a5a256a9-config\") pod \"controller-manager-879f6c89f-4scbk\" (UID: \"914f7ecc-b403-4f7e-9a14-3f56a5a256a9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4scbk" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.297108 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5da2fff0-6264-4369-9c21-d322fa65c6b0-etcd-client\") pod \"etcd-operator-b45778765-l8mm6\" (UID: \"5da2fff0-6264-4369-9c21-d322fa65c6b0\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-l8mm6" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.319084 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.334233 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.356208 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-htrhs" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.357356 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.375384 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.375755 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-hs7k4" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.388452 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.389251 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d4b50f02-42a0-4d00-a8f1-e37bc8af1ef4-auth-proxy-config\") pod \"machine-config-operator-74547568cd-d88tr\" (UID: \"d4b50f02-42a0-4d00-a8f1-e37bc8af1ef4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-d88tr" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.389340 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxqbj\" (UniqueName: \"kubernetes.io/projected/07261c84-a163-4863-ae02-1fba80ec0b8f-kube-api-access-mxqbj\") pod \"ingress-canary-s2twr\" (UID: \"07261c84-a163-4863-ae02-1fba80ec0b8f\") " pod="openshift-ingress-canary/ingress-canary-s2twr" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.389440 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcx8n\" (UniqueName: \"kubernetes.io/projected/b9c89890-1965-4fd0-875b-aed6485d9075-kube-api-access-dcx8n\") pod \"csi-hostpathplugin-nn8lp\" (UID: \"b9c89890-1965-4fd0-875b-aed6485d9075\") " pod="hostpath-provisioner/csi-hostpathplugin-nn8lp" Nov 29 07:08:37 crc kubenswrapper[4731]: E1129 07:08:37.389523 4731 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:08:37.889493561 +0000 UTC m=+156.779854664 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.389724 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/443884e3-cad9-4f39-944c-af34d6485520-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8w2nm\" (UID: \"443884e3-cad9-4f39-944c-af34d6485520\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8w2nm" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.389760 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c47b7935-c3e7-4f98-b361-87ee3b481c3d-secret-volume\") pod \"collect-profiles-29406660-2pc6s\" (UID: \"c47b7935-c3e7-4f98-b361-87ee3b481c3d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406660-2pc6s" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.389786 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/328a2fcf-7e85-49ad-849c-f32818b5cd87-default-certificate\") pod \"router-default-5444994796-2qd7z\" (UID: 
\"328a2fcf-7e85-49ad-849c-f32818b5cd87\") " pod="openshift-ingress/router-default-5444994796-2qd7z" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.389822 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ec651e57-2be1-4076-93f5-bcfa036b4624-images\") pod \"machine-api-operator-5694c8668f-xstx4\" (UID: \"ec651e57-2be1-4076-93f5-bcfa036b4624\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xstx4" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.389845 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/443884e3-cad9-4f39-944c-af34d6485520-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8w2nm\" (UID: \"443884e3-cad9-4f39-944c-af34d6485520\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8w2nm" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.389872 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d639491c-0fbd-44a6-b273-37dcc1e5681d-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-qg27s\" (UID: \"d639491c-0fbd-44a6-b273-37dcc1e5681d\") " pod="openshift-authentication/oauth-openshift-558db77b4-qg27s" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.389892 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/328a2fcf-7e85-49ad-849c-f32818b5cd87-metrics-certs\") pod \"router-default-5444994796-2qd7z\" (UID: \"328a2fcf-7e85-49ad-849c-f32818b5cd87\") " pod="openshift-ingress/router-default-5444994796-2qd7z" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.389913 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" 
(UniqueName: \"kubernetes.io/configmap/cf2cdf59-237b-432e-9e41-c37078755275-trusted-ca\") pod \"image-registry-697d97f7c8-8nrfn\" (UID: \"cf2cdf59-237b-432e-9e41-c37078755275\") " pod="openshift-image-registry/image-registry-697d97f7c8-8nrfn" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.389931 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6fc2573-9480-47d2-89b0-36b4501ef6e7-config\") pod \"service-ca-operator-777779d784-wbfmm\" (UID: \"e6fc2573-9480-47d2-89b0-36b4501ef6e7\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wbfmm" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.389956 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1235c0c3-c8ad-4cff-80f0-56bfdfa8dabe-trusted-ca-bundle\") pod \"apiserver-76f77b778f-m7s4c\" (UID: \"1235c0c3-c8ad-4cff-80f0-56bfdfa8dabe\") " pod="openshift-apiserver/apiserver-76f77b778f-m7s4c" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.389979 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3e3b7a64-8ec0-4b5e-86da-0c9d6d0428f3-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-4dbfk\" (UID: \"3e3b7a64-8ec0-4b5e-86da-0c9d6d0428f3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4dbfk" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.390004 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/b9c89890-1965-4fd0-875b-aed6485d9075-mountpoint-dir\") pod \"csi-hostpathplugin-nn8lp\" (UID: \"b9c89890-1965-4fd0-875b-aed6485d9075\") " pod="hostpath-provisioner/csi-hostpathplugin-nn8lp" Nov 29 07:08:37 crc kubenswrapper[4731]: 
I1129 07:08:37.390026 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/328a2fcf-7e85-49ad-849c-f32818b5cd87-stats-auth\") pod \"router-default-5444994796-2qd7z\" (UID: \"328a2fcf-7e85-49ad-849c-f32818b5cd87\") " pod="openshift-ingress/router-default-5444994796-2qd7z" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.390048 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/8d48db02-9081-4e36-a6db-caa659b1eeb9-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-dm9bz\" (UID: \"8d48db02-9081-4e36-a6db-caa659b1eeb9\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dm9bz" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.390082 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d639491c-0fbd-44a6-b273-37dcc1e5681d-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-qg27s\" (UID: \"d639491c-0fbd-44a6-b273-37dcc1e5681d\") " pod="openshift-authentication/oauth-openshift-558db77b4-qg27s" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.390101 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b9c89890-1965-4fd0-875b-aed6485d9075-registration-dir\") pod \"csi-hostpathplugin-nn8lp\" (UID: \"b9c89890-1965-4fd0-875b-aed6485d9075\") " pod="hostpath-provisioner/csi-hostpathplugin-nn8lp" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.390136 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/ca1d17af-b945-4ed5-8e57-e8145d3692b4-apiservice-cert\") pod \"packageserver-d55dfcdfc-sxxn4\" (UID: \"ca1d17af-b945-4ed5-8e57-e8145d3692b4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sxxn4" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.390206 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/adc9b8a0-7f08-4fbb-ab52-aea81a845c05-signing-key\") pod \"service-ca-9c57cc56f-t7s8k\" (UID: \"adc9b8a0-7f08-4fbb-ab52-aea81a845c05\") " pod="openshift-service-ca/service-ca-9c57cc56f-t7s8k" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.390615 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/62d5acae-8dde-41c0-bbf4-66d294b8b64b-metrics-tls\") pod \"dns-operator-744455d44c-n5tn2\" (UID: \"62d5acae-8dde-41c0-bbf4-66d294b8b64b\") " pod="openshift-dns-operator/dns-operator-744455d44c-n5tn2" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.390647 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3630baf-f7fa-49f0-ae2c-63c28c98c2a8-config\") pod \"kube-apiserver-operator-766d6c64bb-k9k58\" (UID: \"c3630baf-f7fa-49f0-ae2c-63c28c98c2a8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k9k58" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.390673 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5ngp\" (UniqueName: \"kubernetes.io/projected/adc9b8a0-7f08-4fbb-ab52-aea81a845c05-kube-api-access-r5ngp\") pod \"service-ca-9c57cc56f-t7s8k\" (UID: \"adc9b8a0-7f08-4fbb-ab52-aea81a845c05\") " pod="openshift-service-ca/service-ca-9c57cc56f-t7s8k" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.390718 4731 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1235c0c3-c8ad-4cff-80f0-56bfdfa8dabe-serving-cert\") pod \"apiserver-76f77b778f-m7s4c\" (UID: \"1235c0c3-c8ad-4cff-80f0-56bfdfa8dabe\") " pod="openshift-apiserver/apiserver-76f77b778f-m7s4c" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.391748 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d4b50f02-42a0-4d00-a8f1-e37bc8af1ef4-images\") pod \"machine-config-operator-74547568cd-d88tr\" (UID: \"d4b50f02-42a0-4d00-a8f1-e37bc8af1ef4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-d88tr" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.392277 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d639491c-0fbd-44a6-b273-37dcc1e5681d-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-qg27s\" (UID: \"d639491c-0fbd-44a6-b273-37dcc1e5681d\") " pod="openshift-authentication/oauth-openshift-558db77b4-qg27s" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.392426 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1235c0c3-c8ad-4cff-80f0-56bfdfa8dabe-image-import-ca\") pod \"apiserver-76f77b778f-m7s4c\" (UID: \"1235c0c3-c8ad-4cff-80f0-56bfdfa8dabe\") " pod="openshift-apiserver/apiserver-76f77b778f-m7s4c" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.393219 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d639491c-0fbd-44a6-b273-37dcc1e5681d-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-qg27s\" (UID: \"d639491c-0fbd-44a6-b273-37dcc1e5681d\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-qg27s" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.393283 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1235c0c3-c8ad-4cff-80f0-56bfdfa8dabe-trusted-ca-bundle\") pod \"apiserver-76f77b778f-m7s4c\" (UID: \"1235c0c3-c8ad-4cff-80f0-56bfdfa8dabe\") " pod="openshift-apiserver/apiserver-76f77b778f-m7s4c" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.393296 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d639491c-0fbd-44a6-b273-37dcc1e5681d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-qg27s\" (UID: \"d639491c-0fbd-44a6-b273-37dcc1e5681d\") " pod="openshift-authentication/oauth-openshift-558db77b4-qg27s" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.393364 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/b9c89890-1965-4fd0-875b-aed6485d9075-plugins-dir\") pod \"csi-hostpathplugin-nn8lp\" (UID: \"b9c89890-1965-4fd0-875b-aed6485d9075\") " pod="hostpath-provisioner/csi-hostpathplugin-nn8lp" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.393369 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cf2cdf59-237b-432e-9e41-c37078755275-trusted-ca\") pod \"image-registry-697d97f7c8-8nrfn\" (UID: \"cf2cdf59-237b-432e-9e41-c37078755275\") " pod="openshift-image-registry/image-registry-697d97f7c8-8nrfn" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.393453 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtbn4\" (UniqueName: 
\"kubernetes.io/projected/7efbdd7b-0ed3-493a-ad73-530648c5ce6e-kube-api-access-qtbn4\") pod \"olm-operator-6b444d44fb-4t7gr\" (UID: \"7efbdd7b-0ed3-493a-ad73-530648c5ce6e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4t7gr" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.393549 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/b9c89890-1965-4fd0-875b-aed6485d9075-csi-data-dir\") pod \"csi-hostpathplugin-nn8lp\" (UID: \"b9c89890-1965-4fd0-875b-aed6485d9075\") " pod="hostpath-provisioner/csi-hostpathplugin-nn8lp" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.393632 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd4d1f56-2467-4d46-80f3-23dd16cd6707-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-g9mjr\" (UID: \"dd4d1f56-2467-4d46-80f3-23dd16cd6707\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-g9mjr" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.393678 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1235c0c3-c8ad-4cff-80f0-56bfdfa8dabe-image-import-ca\") pod \"apiserver-76f77b778f-m7s4c\" (UID: \"1235c0c3-c8ad-4cff-80f0-56bfdfa8dabe\") " pod="openshift-apiserver/apiserver-76f77b778f-m7s4c" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.393705 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd4d1f56-2467-4d46-80f3-23dd16cd6707-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-g9mjr\" (UID: \"dd4d1f56-2467-4d46-80f3-23dd16cd6707\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-g9mjr" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.393800 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/07261c84-a163-4863-ae02-1fba80ec0b8f-cert\") pod \"ingress-canary-s2twr\" (UID: \"07261c84-a163-4863-ae02-1fba80ec0b8f\") " pod="openshift-ingress-canary/ingress-canary-s2twr" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.393859 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/5e911d5c-fa21-47e2-9ab8-12f919978585-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-qdch2\" (UID: \"5e911d5c-fa21-47e2-9ab8-12f919978585\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qdch2" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.393892 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a49785b1-8138-4597-91ad-8d6fd4787286-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-wrp7q\" (UID: \"a49785b1-8138-4597-91ad-8d6fd4787286\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-wrp7q" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.393947 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crwmt\" (UniqueName: \"kubernetes.io/projected/a49785b1-8138-4597-91ad-8d6fd4787286-kube-api-access-crwmt\") pod \"multus-admission-controller-857f4d67dd-wrp7q\" (UID: \"a49785b1-8138-4597-91ad-8d6fd4787286\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-wrp7q" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.393977 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-6x8wg\" (UniqueName: \"kubernetes.io/projected/d639491c-0fbd-44a6-b273-37dcc1e5681d-kube-api-access-6x8wg\") pod \"oauth-openshift-558db77b4-qg27s\" (UID: \"d639491c-0fbd-44a6-b273-37dcc1e5681d\") " pod="openshift-authentication/oauth-openshift-558db77b4-qg27s" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.394031 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/adc9b8a0-7f08-4fbb-ab52-aea81a845c05-signing-cabundle\") pod \"service-ca-9c57cc56f-t7s8k\" (UID: \"adc9b8a0-7f08-4fbb-ab52-aea81a845c05\") " pod="openshift-service-ca/service-ca-9c57cc56f-t7s8k" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.394054 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f9adc895-cdb2-4bd5-87ff-aba173a1e6da-srv-cert\") pod \"catalog-operator-68c6474976-4j6z7\" (UID: \"f9adc895-cdb2-4bd5-87ff-aba173a1e6da\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4j6z7" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.394123 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d639491c-0fbd-44a6-b273-37dcc1e5681d-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-qg27s\" (UID: \"d639491c-0fbd-44a6-b273-37dcc1e5681d\") " pod="openshift-authentication/oauth-openshift-558db77b4-qg27s" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.394201 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lf5w7\" (UniqueName: \"kubernetes.io/projected/5e911d5c-fa21-47e2-9ab8-12f919978585-kube-api-access-lf5w7\") pod \"cluster-samples-operator-665b6dd947-qdch2\" (UID: \"5e911d5c-fa21-47e2-9ab8-12f919978585\") " 
pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qdch2" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.394232 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8khpr\" (UniqueName: \"kubernetes.io/projected/ca1d17af-b945-4ed5-8e57-e8145d3692b4-kube-api-access-8khpr\") pod \"packageserver-d55dfcdfc-sxxn4\" (UID: \"ca1d17af-b945-4ed5-8e57-e8145d3692b4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sxxn4" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.394319 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ths8\" (UniqueName: \"kubernetes.io/projected/62d5acae-8dde-41c0-bbf4-66d294b8b64b-kube-api-access-9ths8\") pod \"dns-operator-744455d44c-n5tn2\" (UID: \"62d5acae-8dde-41c0-bbf4-66d294b8b64b\") " pod="openshift-dns-operator/dns-operator-744455d44c-n5tn2" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.394374 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/68682703-479c-473f-8833-7210bc2597c1-node-bootstrap-token\") pod \"machine-config-server-5tpdm\" (UID: \"68682703-479c-473f-8833-7210bc2597c1\") " pod="openshift-machine-config-operator/machine-config-server-5tpdm" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.394401 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ca1d17af-b945-4ed5-8e57-e8145d3692b4-webhook-cert\") pod \"packageserver-d55dfcdfc-sxxn4\" (UID: \"ca1d17af-b945-4ed5-8e57-e8145d3692b4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sxxn4" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.394456 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" 
(UniqueName: \"kubernetes.io/secret/1520d3c0-4377-4a22-b7a2-025b6a9ac171-metrics-tls\") pod \"ingress-operator-5b745b69d9-ph5pw\" (UID: \"1520d3c0-4377-4a22-b7a2-025b6a9ac171\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ph5pw" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.394483 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/328a2fcf-7e85-49ad-849c-f32818b5cd87-service-ca-bundle\") pod \"router-default-5444994796-2qd7z\" (UID: \"328a2fcf-7e85-49ad-849c-f32818b5cd87\") " pod="openshift-ingress/router-default-5444994796-2qd7z" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.394539 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/cf2cdf59-237b-432e-9e41-c37078755275-ca-trust-extracted\") pod \"image-registry-697d97f7c8-8nrfn\" (UID: \"cf2cdf59-237b-432e-9e41-c37078755275\") " pod="openshift-image-registry/image-registry-697d97f7c8-8nrfn" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.394595 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/68682703-479c-473f-8833-7210bc2597c1-certs\") pod \"machine-config-server-5tpdm\" (UID: \"68682703-479c-473f-8833-7210bc2597c1\") " pod="openshift-machine-config-operator/machine-config-server-5tpdm" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.394625 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/443884e3-cad9-4f39-944c-af34d6485520-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8w2nm\" (UID: \"443884e3-cad9-4f39-944c-af34d6485520\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8w2nm" Nov 29 07:08:37 crc 
kubenswrapper[4731]: I1129 07:08:37.394670 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d639491c-0fbd-44a6-b273-37dcc1e5681d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-qg27s\" (UID: \"d639491c-0fbd-44a6-b273-37dcc1e5681d\") " pod="openshift-authentication/oauth-openshift-558db77b4-qg27s" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.394701 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec651e57-2be1-4076-93f5-bcfa036b4624-config\") pod \"machine-api-operator-5694c8668f-xstx4\" (UID: \"ec651e57-2be1-4076-93f5-bcfa036b4624\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xstx4" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.394727 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/ec651e57-2be1-4076-93f5-bcfa036b4624-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-xstx4\" (UID: \"ec651e57-2be1-4076-93f5-bcfa036b4624\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xstx4" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.394754 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1235c0c3-c8ad-4cff-80f0-56bfdfa8dabe-etcd-serving-ca\") pod \"apiserver-76f77b778f-m7s4c\" (UID: \"1235c0c3-c8ad-4cff-80f0-56bfdfa8dabe\") " pod="openshift-apiserver/apiserver-76f77b778f-m7s4c" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.394800 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.394807 4731 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/cf2cdf59-237b-432e-9e41-c37078755275-registry-tls\") pod \"image-registry-697d97f7c8-8nrfn\" (UID: \"cf2cdf59-237b-432e-9e41-c37078755275\") " pod="openshift-image-registry/image-registry-697d97f7c8-8nrfn" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.395052 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1520d3c0-4377-4a22-b7a2-025b6a9ac171-bound-sa-token\") pod \"ingress-operator-5b745b69d9-ph5pw\" (UID: \"1520d3c0-4377-4a22-b7a2-025b6a9ac171\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ph5pw" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.395077 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c47b7935-c3e7-4f98-b361-87ee3b481c3d-config-volume\") pod \"collect-profiles-29406660-2pc6s\" (UID: \"c47b7935-c3e7-4f98-b361-87ee3b481c3d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406660-2pc6s" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.395108 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d639491c-0fbd-44a6-b273-37dcc1e5681d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-qg27s\" (UID: \"d639491c-0fbd-44a6-b273-37dcc1e5681d\") " pod="openshift-authentication/oauth-openshift-558db77b4-qg27s" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.395141 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/cf2cdf59-237b-432e-9e41-c37078755275-registry-certificates\") pod \"image-registry-697d97f7c8-8nrfn\" (UID: \"cf2cdf59-237b-432e-9e41-c37078755275\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-8nrfn" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.396421 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1235c0c3-c8ad-4cff-80f0-56bfdfa8dabe-etcd-serving-ca\") pod \"apiserver-76f77b778f-m7s4c\" (UID: \"1235c0c3-c8ad-4cff-80f0-56bfdfa8dabe\") " pod="openshift-apiserver/apiserver-76f77b778f-m7s4c" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.397715 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d639491c-0fbd-44a6-b273-37dcc1e5681d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-qg27s\" (UID: \"d639491c-0fbd-44a6-b273-37dcc1e5681d\") " pod="openshift-authentication/oauth-openshift-558db77b4-qg27s" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.398306 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/ca1d17af-b945-4ed5-8e57-e8145d3692b4-tmpfs\") pod \"packageserver-d55dfcdfc-sxxn4\" (UID: \"ca1d17af-b945-4ed5-8e57-e8145d3692b4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sxxn4" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.398364 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d4b50f02-42a0-4d00-a8f1-e37bc8af1ef4-proxy-tls\") pod \"machine-config-operator-74547568cd-d88tr\" (UID: \"d4b50f02-42a0-4d00-a8f1-e37bc8af1ef4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-d88tr" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.398397 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhf2k\" (UniqueName: 
\"kubernetes.io/projected/328a2fcf-7e85-49ad-849c-f32818b5cd87-kube-api-access-vhf2k\") pod \"router-default-5444994796-2qd7z\" (UID: \"328a2fcf-7e85-49ad-849c-f32818b5cd87\") " pod="openshift-ingress/router-default-5444994796-2qd7z" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.398457 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5gpv\" (UniqueName: \"kubernetes.io/projected/68682703-479c-473f-8833-7210bc2597c1-kube-api-access-f5gpv\") pod \"machine-config-server-5tpdm\" (UID: \"68682703-479c-473f-8833-7210bc2597c1\") " pod="openshift-machine-config-operator/machine-config-server-5tpdm" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.399376 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/cf2cdf59-237b-432e-9e41-c37078755275-registry-certificates\") pod \"image-registry-697d97f7c8-8nrfn\" (UID: \"cf2cdf59-237b-432e-9e41-c37078755275\") " pod="openshift-image-registry/image-registry-697d97f7c8-8nrfn" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.400443 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d639491c-0fbd-44a6-b273-37dcc1e5681d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-qg27s\" (UID: \"d639491c-0fbd-44a6-b273-37dcc1e5681d\") " pod="openshift-authentication/oauth-openshift-558db77b4-qg27s" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.400716 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d639491c-0fbd-44a6-b273-37dcc1e5681d-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-qg27s\" (UID: \"d639491c-0fbd-44a6-b273-37dcc1e5681d\") " pod="openshift-authentication/oauth-openshift-558db77b4-qg27s" Nov 29 
07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.401472 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/62d5acae-8dde-41c0-bbf4-66d294b8b64b-metrics-tls\") pod \"dns-operator-744455d44c-n5tn2\" (UID: \"62d5acae-8dde-41c0-bbf4-66d294b8b64b\") " pod="openshift-dns-operator/dns-operator-744455d44c-n5tn2" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.401491 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d639491c-0fbd-44a6-b273-37dcc1e5681d-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-qg27s\" (UID: \"d639491c-0fbd-44a6-b273-37dcc1e5681d\") " pod="openshift-authentication/oauth-openshift-558db77b4-qg27s" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.401590 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/cf2cdf59-237b-432e-9e41-c37078755275-ca-trust-extracted\") pod \"image-registry-697d97f7c8-8nrfn\" (UID: \"cf2cdf59-237b-432e-9e41-c37078755275\") " pod="openshift-image-registry/image-registry-697d97f7c8-8nrfn" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.402187 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/cf2cdf59-237b-432e-9e41-c37078755275-registry-tls\") pod \"image-registry-697d97f7c8-8nrfn\" (UID: \"cf2cdf59-237b-432e-9e41-c37078755275\") " pod="openshift-image-registry/image-registry-697d97f7c8-8nrfn" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.402509 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d639491c-0fbd-44a6-b273-37dcc1e5681d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-qg27s\" (UID: \"d639491c-0fbd-44a6-b273-37dcc1e5681d\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-qg27s" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.402816 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ec651e57-2be1-4076-93f5-bcfa036b4624-images\") pod \"machine-api-operator-5694c8668f-xstx4\" (UID: \"ec651e57-2be1-4076-93f5-bcfa036b4624\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xstx4" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.403127 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d639491c-0fbd-44a6-b273-37dcc1e5681d-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-qg27s\" (UID: \"d639491c-0fbd-44a6-b273-37dcc1e5681d\") " pod="openshift-authentication/oauth-openshift-558db77b4-qg27s" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.403933 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec651e57-2be1-4076-93f5-bcfa036b4624-config\") pod \"machine-api-operator-5694c8668f-xstx4\" (UID: \"ec651e57-2be1-4076-93f5-bcfa036b4624\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xstx4" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.407712 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/ec651e57-2be1-4076-93f5-bcfa036b4624-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-xstx4\" (UID: \"ec651e57-2be1-4076-93f5-bcfa036b4624\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xstx4" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.408402 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/5e911d5c-fa21-47e2-9ab8-12f919978585-samples-operator-tls\") pod 
\"cluster-samples-operator-665b6dd947-qdch2\" (UID: \"5e911d5c-fa21-47e2-9ab8-12f919978585\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qdch2" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.398484 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c3630baf-f7fa-49f0-ae2c-63c28c98c2a8-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-k9k58\" (UID: \"c3630baf-f7fa-49f0-ae2c-63c28c98c2a8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k9k58" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.408834 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1235c0c3-c8ad-4cff-80f0-56bfdfa8dabe-serving-cert\") pod \"apiserver-76f77b778f-m7s4c\" (UID: \"1235c0c3-c8ad-4cff-80f0-56bfdfa8dabe\") " pod="openshift-apiserver/apiserver-76f77b778f-m7s4c" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.408939 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d639491c-0fbd-44a6-b273-37dcc1e5681d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-qg27s\" (UID: \"d639491c-0fbd-44a6-b273-37dcc1e5681d\") " pod="openshift-authentication/oauth-openshift-558db77b4-qg27s" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.412004 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnsgb\" (UniqueName: \"kubernetes.io/projected/dd4d1f56-2467-4d46-80f3-23dd16cd6707-kube-api-access-vnsgb\") pod \"kube-storage-version-migrator-operator-b67b599dd-g9mjr\" (UID: \"dd4d1f56-2467-4d46-80f3-23dd16cd6707\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-g9mjr" Nov 29 07:08:37 crc 
kubenswrapper[4731]: I1129 07:08:37.412298 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c3630baf-f7fa-49f0-ae2c-63c28c98c2a8-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-k9k58\" (UID: \"c3630baf-f7fa-49f0-ae2c-63c28c98c2a8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k9k58" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.412375 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b92s7\" (UniqueName: \"kubernetes.io/projected/d4b50f02-42a0-4d00-a8f1-e37bc8af1ef4-kube-api-access-b92s7\") pod \"machine-config-operator-74547568cd-d88tr\" (UID: \"d4b50f02-42a0-4d00-a8f1-e37bc8af1ef4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-d88tr" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.412421 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4f528\" (UniqueName: \"kubernetes.io/projected/1520d3c0-4377-4a22-b7a2-025b6a9ac171-kube-api-access-4f528\") pod \"ingress-operator-5b745b69d9-ph5pw\" (UID: \"1520d3c0-4377-4a22-b7a2-025b6a9ac171\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ph5pw" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.412461 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42555\" (UniqueName: \"kubernetes.io/projected/3e3b7a64-8ec0-4b5e-86da-0c9d6d0428f3-kube-api-access-42555\") pod \"machine-config-controller-84d6567774-4dbfk\" (UID: \"3e3b7a64-8ec0-4b5e-86da-0c9d6d0428f3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4dbfk" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.412506 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/e6fc2573-9480-47d2-89b0-36b4501ef6e7-serving-cert\") pod \"service-ca-operator-777779d784-wbfmm\" (UID: \"e6fc2573-9480-47d2-89b0-36b4501ef6e7\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wbfmm" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.412545 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wz6m2\" (UniqueName: \"kubernetes.io/projected/4059535c-148b-4694-8c6f-ee8aae8ddc18-kube-api-access-wz6m2\") pod \"downloads-7954f5f757-c6kf9\" (UID: \"4059535c-148b-4694-8c6f-ee8aae8ddc18\") " pod="openshift-console/downloads-7954f5f757-c6kf9" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.412684 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7swq7\" (UniqueName: \"kubernetes.io/projected/f9adc895-cdb2-4bd5-87ff-aba173a1e6da-kube-api-access-7swq7\") pod \"catalog-operator-68c6474976-4j6z7\" (UID: \"f9adc895-cdb2-4bd5-87ff-aba173a1e6da\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4j6z7" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.412722 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cf2cdf59-237b-432e-9e41-c37078755275-bound-sa-token\") pod \"image-registry-697d97f7c8-8nrfn\" (UID: \"cf2cdf59-237b-432e-9e41-c37078755275\") " pod="openshift-image-registry/image-registry-697d97f7c8-8nrfn" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.413603 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d639491c-0fbd-44a6-b273-37dcc1e5681d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-qg27s\" (UID: \"d639491c-0fbd-44a6-b273-37dcc1e5681d\") " pod="openshift-authentication/oauth-openshift-558db77b4-qg27s" Nov 29 
07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.414210 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8nrfn\" (UID: \"cf2cdf59-237b-432e-9e41-c37078755275\") " pod="openshift-image-registry/image-registry-697d97f7c8-8nrfn" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.414252 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1235c0c3-c8ad-4cff-80f0-56bfdfa8dabe-config\") pod \"apiserver-76f77b778f-m7s4c\" (UID: \"1235c0c3-c8ad-4cff-80f0-56bfdfa8dabe\") " pod="openshift-apiserver/apiserver-76f77b778f-m7s4c" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.414279 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3e3b7a64-8ec0-4b5e-86da-0c9d6d0428f3-proxy-tls\") pod \"machine-config-controller-84d6567774-4dbfk\" (UID: \"3e3b7a64-8ec0-4b5e-86da-0c9d6d0428f3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4dbfk" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.414315 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7efbdd7b-0ed3-493a-ad73-530648c5ce6e-srv-cert\") pod \"olm-operator-6b444d44fb-4t7gr\" (UID: \"7efbdd7b-0ed3-493a-ad73-530648c5ce6e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4t7gr" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.414341 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d639491c-0fbd-44a6-b273-37dcc1e5681d-audit-policies\") pod \"oauth-openshift-558db77b4-qg27s\" (UID: 
\"d639491c-0fbd-44a6-b273-37dcc1e5681d\") " pod="openshift-authentication/oauth-openshift-558db77b4-qg27s" Nov 29 07:08:37 crc kubenswrapper[4731]: E1129 07:08:37.415213 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:08:37.915196995 +0000 UTC m=+156.805558098 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8nrfn" (UID: "cf2cdf59-237b-432e-9e41-c37078755275") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.415326 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1235c0c3-c8ad-4cff-80f0-56bfdfa8dabe-config\") pod \"apiserver-76f77b778f-m7s4c\" (UID: \"1235c0c3-c8ad-4cff-80f0-56bfdfa8dabe\") " pod="openshift-apiserver/apiserver-76f77b778f-m7s4c" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.415925 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d639491c-0fbd-44a6-b273-37dcc1e5681d-audit-policies\") pod \"oauth-openshift-558db77b4-qg27s\" (UID: \"d639491c-0fbd-44a6-b273-37dcc1e5681d\") " pod="openshift-authentication/oauth-openshift-558db77b4-qg27s" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.415973 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/d639491c-0fbd-44a6-b273-37dcc1e5681d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-qg27s\" (UID: \"d639491c-0fbd-44a6-b273-37dcc1e5681d\") " pod="openshift-authentication/oauth-openshift-558db77b4-qg27s" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.416019 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1520d3c0-4377-4a22-b7a2-025b6a9ac171-trusted-ca\") pod \"ingress-operator-5b745b69d9-ph5pw\" (UID: \"1520d3c0-4377-4a22-b7a2-025b6a9ac171\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ph5pw" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.416078 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1235c0c3-c8ad-4cff-80f0-56bfdfa8dabe-audit\") pod \"apiserver-76f77b778f-m7s4c\" (UID: \"1235c0c3-c8ad-4cff-80f0-56bfdfa8dabe\") " pod="openshift-apiserver/apiserver-76f77b778f-m7s4c" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.416103 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5vtj\" (UniqueName: \"kubernetes.io/projected/2694d49e-eb78-4db3-b047-2854125b8b26-kube-api-access-t5vtj\") pod \"migrator-59844c95c7-4zskc\" (UID: \"2694d49e-eb78-4db3-b047-2854125b8b26\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4zskc" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.416130 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f2736b5d-2f13-4ef1-8bed-eadb88be8573-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-wcj4m\" (UID: \"f2736b5d-2f13-4ef1-8bed-eadb88be8573\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wcj4m" Nov 29 
07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.416157 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2736b5d-2f13-4ef1-8bed-eadb88be8573-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-wcj4m\" (UID: \"f2736b5d-2f13-4ef1-8bed-eadb88be8573\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wcj4m" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.416189 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d639491c-0fbd-44a6-b273-37dcc1e5681d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-qg27s\" (UID: \"d639491c-0fbd-44a6-b273-37dcc1e5681d\") " pod="openshift-authentication/oauth-openshift-558db77b4-qg27s" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.416213 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sm5sz\" (UniqueName: \"kubernetes.io/projected/e6fc2573-9480-47d2-89b0-36b4501ef6e7-kube-api-access-sm5sz\") pod \"service-ca-operator-777779d784-wbfmm\" (UID: \"e6fc2573-9480-47d2-89b0-36b4501ef6e7\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wbfmm" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.416259 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzdck\" (UniqueName: \"kubernetes.io/projected/ec651e57-2be1-4076-93f5-bcfa036b4624-kube-api-access-tzdck\") pod \"machine-api-operator-5694c8668f-xstx4\" (UID: \"ec651e57-2be1-4076-93f5-bcfa036b4624\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xstx4" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.416284 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-sxrv5\" (UniqueName: \"kubernetes.io/projected/d7a41747-97b6-4431-ab85-a990220f34e7-kube-api-access-sxrv5\") pod \"package-server-manager-789f6589d5-dzwrr\" (UID: \"d7a41747-97b6-4431-ab85-a990220f34e7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dzwrr" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.416311 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7efbdd7b-0ed3-493a-ad73-530648c5ce6e-profile-collector-cert\") pod \"olm-operator-6b444d44fb-4t7gr\" (UID: \"7efbdd7b-0ed3-493a-ad73-530648c5ce6e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4t7gr" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.416335 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgzwp\" (UniqueName: \"kubernetes.io/projected/8d48db02-9081-4e36-a6db-caa659b1eeb9-kube-api-access-zgzwp\") pod \"control-plane-machine-set-operator-78cbb6b69f-dm9bz\" (UID: \"8d48db02-9081-4e36-a6db-caa659b1eeb9\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dm9bz" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.416377 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f9adc895-cdb2-4bd5-87ff-aba173a1e6da-profile-collector-cert\") pod \"catalog-operator-68c6474976-4j6z7\" (UID: \"f9adc895-cdb2-4bd5-87ff-aba173a1e6da\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4j6z7" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.418744 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctvdn\" (UniqueName: \"kubernetes.io/projected/1235c0c3-c8ad-4cff-80f0-56bfdfa8dabe-kube-api-access-ctvdn\") pod 
\"apiserver-76f77b778f-m7s4c\" (UID: \"1235c0c3-c8ad-4cff-80f0-56bfdfa8dabe\") " pod="openshift-apiserver/apiserver-76f77b778f-m7s4c" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.418791 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/d7a41747-97b6-4431-ab85-a990220f34e7-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-dzwrr\" (UID: \"d7a41747-97b6-4431-ab85-a990220f34e7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dzwrr" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.418823 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b9c89890-1965-4fd0-875b-aed6485d9075-socket-dir\") pod \"csi-hostpathplugin-nn8lp\" (UID: \"b9c89890-1965-4fd0-875b-aed6485d9075\") " pod="hostpath-provisioner/csi-hostpathplugin-nn8lp" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.418934 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2736b5d-2f13-4ef1-8bed-eadb88be8573-config\") pod \"kube-controller-manager-operator-78b949d7b-wcj4m\" (UID: \"f2736b5d-2f13-4ef1-8bed-eadb88be8573\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wcj4m" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.419059 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d639491c-0fbd-44a6-b273-37dcc1e5681d-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-qg27s\" (UID: \"d639491c-0fbd-44a6-b273-37dcc1e5681d\") " pod="openshift-authentication/oauth-openshift-558db77b4-qg27s" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.419130 4731 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1235c0c3-c8ad-4cff-80f0-56bfdfa8dabe-audit-dir\") pod \"apiserver-76f77b778f-m7s4c\" (UID: \"1235c0c3-c8ad-4cff-80f0-56bfdfa8dabe\") " pod="openshift-apiserver/apiserver-76f77b778f-m7s4c" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.419448 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f435c3d-3db2-44dc-8a50-ea8f9475daa0-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-2qgzh\" (UID: \"8f435c3d-3db2-44dc-8a50-ea8f9475daa0\") " pod="openshift-marketplace/marketplace-operator-79b997595-2qgzh" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.419538 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/cf2cdf59-237b-432e-9e41-c37078755275-installation-pull-secrets\") pod \"image-registry-697d97f7c8-8nrfn\" (UID: \"cf2cdf59-237b-432e-9e41-c37078755275\") " pod="openshift-image-registry/image-registry-697d97f7c8-8nrfn" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.419583 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8f435c3d-3db2-44dc-8a50-ea8f9475daa0-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-2qgzh\" (UID: \"8f435c3d-3db2-44dc-8a50-ea8f9475daa0\") " pod="openshift-marketplace/marketplace-operator-79b997595-2qgzh" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.419629 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gk2jc\" (UniqueName: \"kubernetes.io/projected/8f435c3d-3db2-44dc-8a50-ea8f9475daa0-kube-api-access-gk2jc\") pod \"marketplace-operator-79b997595-2qgzh\" (UID: 
\"8f435c3d-3db2-44dc-8a50-ea8f9475daa0\") " pod="openshift-marketplace/marketplace-operator-79b997595-2qgzh" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.419681 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1235c0c3-c8ad-4cff-80f0-56bfdfa8dabe-etcd-client\") pod \"apiserver-76f77b778f-m7s4c\" (UID: \"1235c0c3-c8ad-4cff-80f0-56bfdfa8dabe\") " pod="openshift-apiserver/apiserver-76f77b778f-m7s4c" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.419707 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1235c0c3-c8ad-4cff-80f0-56bfdfa8dabe-encryption-config\") pod \"apiserver-76f77b778f-m7s4c\" (UID: \"1235c0c3-c8ad-4cff-80f0-56bfdfa8dabe\") " pod="openshift-apiserver/apiserver-76f77b778f-m7s4c" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.419740 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-968qq\" (UniqueName: \"kubernetes.io/projected/c47b7935-c3e7-4f98-b361-87ee3b481c3d-kube-api-access-968qq\") pod \"collect-profiles-29406660-2pc6s\" (UID: \"c47b7935-c3e7-4f98-b361-87ee3b481c3d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406660-2pc6s" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.419788 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d639491c-0fbd-44a6-b273-37dcc1e5681d-audit-dir\") pod \"oauth-openshift-558db77b4-qg27s\" (UID: \"d639491c-0fbd-44a6-b273-37dcc1e5681d\") " pod="openshift-authentication/oauth-openshift-558db77b4-qg27s" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.419840 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: 
\"kubernetes.io/host-path/1235c0c3-c8ad-4cff-80f0-56bfdfa8dabe-node-pullsecrets\") pod \"apiserver-76f77b778f-m7s4c\" (UID: \"1235c0c3-c8ad-4cff-80f0-56bfdfa8dabe\") " pod="openshift-apiserver/apiserver-76f77b778f-m7s4c" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.419933 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1235c0c3-c8ad-4cff-80f0-56bfdfa8dabe-audit-dir\") pod \"apiserver-76f77b778f-m7s4c\" (UID: \"1235c0c3-c8ad-4cff-80f0-56bfdfa8dabe\") " pod="openshift-apiserver/apiserver-76f77b778f-m7s4c" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.419971 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1235c0c3-c8ad-4cff-80f0-56bfdfa8dabe-node-pullsecrets\") pod \"apiserver-76f77b778f-m7s4c\" (UID: \"1235c0c3-c8ad-4cff-80f0-56bfdfa8dabe\") " pod="openshift-apiserver/apiserver-76f77b778f-m7s4c" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.420002 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d639491c-0fbd-44a6-b273-37dcc1e5681d-audit-dir\") pod \"oauth-openshift-558db77b4-qg27s\" (UID: \"d639491c-0fbd-44a6-b273-37dcc1e5681d\") " pod="openshift-authentication/oauth-openshift-558db77b4-qg27s" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.421004 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.421118 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1235c0c3-c8ad-4cff-80f0-56bfdfa8dabe-audit\") pod \"apiserver-76f77b778f-m7s4c\" (UID: \"1235c0c3-c8ad-4cff-80f0-56bfdfa8dabe\") " pod="openshift-apiserver/apiserver-76f77b778f-m7s4c" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 
07:08:37.421132 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjbw6\" (UniqueName: \"kubernetes.io/projected/cf2cdf59-237b-432e-9e41-c37078755275-kube-api-access-qjbw6\") pod \"image-registry-697d97f7c8-8nrfn\" (UID: \"cf2cdf59-237b-432e-9e41-c37078755275\") " pod="openshift-image-registry/image-registry-697d97f7c8-8nrfn" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.424111 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d639491c-0fbd-44a6-b273-37dcc1e5681d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-qg27s\" (UID: \"d639491c-0fbd-44a6-b273-37dcc1e5681d\") " pod="openshift-authentication/oauth-openshift-558db77b4-qg27s" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.429180 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dcvjn" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.431759 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/cf2cdf59-237b-432e-9e41-c37078755275-installation-pull-secrets\") pod \"image-registry-697d97f7c8-8nrfn\" (UID: \"cf2cdf59-237b-432e-9e41-c37078755275\") " pod="openshift-image-registry/image-registry-697d97f7c8-8nrfn" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.432087 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1235c0c3-c8ad-4cff-80f0-56bfdfa8dabe-encryption-config\") pod \"apiserver-76f77b778f-m7s4c\" (UID: \"1235c0c3-c8ad-4cff-80f0-56bfdfa8dabe\") " pod="openshift-apiserver/apiserver-76f77b778f-m7s4c" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.434590 4731 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d639491c-0fbd-44a6-b273-37dcc1e5681d-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-qg27s\" (UID: \"d639491c-0fbd-44a6-b273-37dcc1e5681d\") " pod="openshift-authentication/oauth-openshift-558db77b4-qg27s" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.435385 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.439725 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1235c0c3-c8ad-4cff-80f0-56bfdfa8dabe-etcd-client\") pod \"apiserver-76f77b778f-m7s4c\" (UID: \"1235c0c3-c8ad-4cff-80f0-56bfdfa8dabe\") " pod="openshift-apiserver/apiserver-76f77b778f-m7s4c" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.454813 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d639491c-0fbd-44a6-b273-37dcc1e5681d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-qg27s\" (UID: \"d639491c-0fbd-44a6-b273-37dcc1e5681d\") " pod="openshift-authentication/oauth-openshift-558db77b4-qg27s" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.458018 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.479125 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.495282 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Nov 29 07:08:37 crc 
kubenswrapper[4731]: I1129 07:08:37.516674 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.523545 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.523836 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3630baf-f7fa-49f0-ae2c-63c28c98c2a8-config\") pod \"kube-apiserver-operator-766d6c64bb-k9k58\" (UID: \"c3630baf-f7fa-49f0-ae2c-63c28c98c2a8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k9k58" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.523875 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5ngp\" (UniqueName: \"kubernetes.io/projected/adc9b8a0-7f08-4fbb-ab52-aea81a845c05-kube-api-access-r5ngp\") pod \"service-ca-9c57cc56f-t7s8k\" (UID: \"adc9b8a0-7f08-4fbb-ab52-aea81a845c05\") " pod="openshift-service-ca/service-ca-9c57cc56f-t7s8k" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.523894 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d4b50f02-42a0-4d00-a8f1-e37bc8af1ef4-images\") pod \"machine-config-operator-74547568cd-d88tr\" (UID: \"d4b50f02-42a0-4d00-a8f1-e37bc8af1ef4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-d88tr" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.523963 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-qtbn4\" (UniqueName: \"kubernetes.io/projected/7efbdd7b-0ed3-493a-ad73-530648c5ce6e-kube-api-access-qtbn4\") pod \"olm-operator-6b444d44fb-4t7gr\" (UID: \"7efbdd7b-0ed3-493a-ad73-530648c5ce6e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4t7gr" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.523986 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/b9c89890-1965-4fd0-875b-aed6485d9075-plugins-dir\") pod \"csi-hostpathplugin-nn8lp\" (UID: \"b9c89890-1965-4fd0-875b-aed6485d9075\") " pod="hostpath-provisioner/csi-hostpathplugin-nn8lp" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.524013 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd4d1f56-2467-4d46-80f3-23dd16cd6707-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-g9mjr\" (UID: \"dd4d1f56-2467-4d46-80f3-23dd16cd6707\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-g9mjr" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.524033 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd4d1f56-2467-4d46-80f3-23dd16cd6707-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-g9mjr\" (UID: \"dd4d1f56-2467-4d46-80f3-23dd16cd6707\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-g9mjr" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.524052 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/b9c89890-1965-4fd0-875b-aed6485d9075-csi-data-dir\") pod \"csi-hostpathplugin-nn8lp\" (UID: \"b9c89890-1965-4fd0-875b-aed6485d9075\") " pod="hostpath-provisioner/csi-hostpathplugin-nn8lp" 
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.524073 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/07261c84-a163-4863-ae02-1fba80ec0b8f-cert\") pod \"ingress-canary-s2twr\" (UID: \"07261c84-a163-4863-ae02-1fba80ec0b8f\") " pod="openshift-ingress-canary/ingress-canary-s2twr" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.524092 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crwmt\" (UniqueName: \"kubernetes.io/projected/a49785b1-8138-4597-91ad-8d6fd4787286-kube-api-access-crwmt\") pod \"multus-admission-controller-857f4d67dd-wrp7q\" (UID: \"a49785b1-8138-4597-91ad-8d6fd4787286\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-wrp7q" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.524109 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a49785b1-8138-4597-91ad-8d6fd4787286-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-wrp7q\" (UID: \"a49785b1-8138-4597-91ad-8d6fd4787286\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-wrp7q" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.524133 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/adc9b8a0-7f08-4fbb-ab52-aea81a845c05-signing-cabundle\") pod \"service-ca-9c57cc56f-t7s8k\" (UID: \"adc9b8a0-7f08-4fbb-ab52-aea81a845c05\") " pod="openshift-service-ca/service-ca-9c57cc56f-t7s8k" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.524148 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f9adc895-cdb2-4bd5-87ff-aba173a1e6da-srv-cert\") pod \"catalog-operator-68c6474976-4j6z7\" (UID: \"f9adc895-cdb2-4bd5-87ff-aba173a1e6da\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4j6z7" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.524171 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8khpr\" (UniqueName: \"kubernetes.io/projected/ca1d17af-b945-4ed5-8e57-e8145d3692b4-kube-api-access-8khpr\") pod \"packageserver-d55dfcdfc-sxxn4\" (UID: \"ca1d17af-b945-4ed5-8e57-e8145d3692b4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sxxn4" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.524186 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/68682703-479c-473f-8833-7210bc2597c1-node-bootstrap-token\") pod \"machine-config-server-5tpdm\" (UID: \"68682703-479c-473f-8833-7210bc2597c1\") " pod="openshift-machine-config-operator/machine-config-server-5tpdm" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.524204 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ca1d17af-b945-4ed5-8e57-e8145d3692b4-webhook-cert\") pod \"packageserver-d55dfcdfc-sxxn4\" (UID: \"ca1d17af-b945-4ed5-8e57-e8145d3692b4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sxxn4" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.524224 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1520d3c0-4377-4a22-b7a2-025b6a9ac171-metrics-tls\") pod \"ingress-operator-5b745b69d9-ph5pw\" (UID: \"1520d3c0-4377-4a22-b7a2-025b6a9ac171\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ph5pw" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.524240 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/328a2fcf-7e85-49ad-849c-f32818b5cd87-service-ca-bundle\") pod \"router-default-5444994796-2qd7z\" (UID: \"328a2fcf-7e85-49ad-849c-f32818b5cd87\") " pod="openshift-ingress/router-default-5444994796-2qd7z" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.524267 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/68682703-479c-473f-8833-7210bc2597c1-certs\") pod \"machine-config-server-5tpdm\" (UID: \"68682703-479c-473f-8833-7210bc2597c1\") " pod="openshift-machine-config-operator/machine-config-server-5tpdm" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.524286 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/443884e3-cad9-4f39-944c-af34d6485520-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8w2nm\" (UID: \"443884e3-cad9-4f39-944c-af34d6485520\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8w2nm" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.524328 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1520d3c0-4377-4a22-b7a2-025b6a9ac171-bound-sa-token\") pod \"ingress-operator-5b745b69d9-ph5pw\" (UID: \"1520d3c0-4377-4a22-b7a2-025b6a9ac171\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ph5pw" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.524345 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c47b7935-c3e7-4f98-b361-87ee3b481c3d-config-volume\") pod \"collect-profiles-29406660-2pc6s\" (UID: \"c47b7935-c3e7-4f98-b361-87ee3b481c3d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406660-2pc6s" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.524376 4731 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/ca1d17af-b945-4ed5-8e57-e8145d3692b4-tmpfs\") pod \"packageserver-d55dfcdfc-sxxn4\" (UID: \"ca1d17af-b945-4ed5-8e57-e8145d3692b4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sxxn4" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.524394 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhf2k\" (UniqueName: \"kubernetes.io/projected/328a2fcf-7e85-49ad-849c-f32818b5cd87-kube-api-access-vhf2k\") pod \"router-default-5444994796-2qd7z\" (UID: \"328a2fcf-7e85-49ad-849c-f32818b5cd87\") " pod="openshift-ingress/router-default-5444994796-2qd7z" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.524410 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d4b50f02-42a0-4d00-a8f1-e37bc8af1ef4-proxy-tls\") pod \"machine-config-operator-74547568cd-d88tr\" (UID: \"d4b50f02-42a0-4d00-a8f1-e37bc8af1ef4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-d88tr" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.524426 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5gpv\" (UniqueName: \"kubernetes.io/projected/68682703-479c-473f-8833-7210bc2597c1-kube-api-access-f5gpv\") pod \"machine-config-server-5tpdm\" (UID: \"68682703-479c-473f-8833-7210bc2597c1\") " pod="openshift-machine-config-operator/machine-config-server-5tpdm" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.524442 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c3630baf-f7fa-49f0-ae2c-63c28c98c2a8-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-k9k58\" (UID: \"c3630baf-f7fa-49f0-ae2c-63c28c98c2a8\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k9k58" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.524471 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnsgb\" (UniqueName: \"kubernetes.io/projected/dd4d1f56-2467-4d46-80f3-23dd16cd6707-kube-api-access-vnsgb\") pod \"kube-storage-version-migrator-operator-b67b599dd-g9mjr\" (UID: \"dd4d1f56-2467-4d46-80f3-23dd16cd6707\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-g9mjr" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.524488 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c3630baf-f7fa-49f0-ae2c-63c28c98c2a8-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-k9k58\" (UID: \"c3630baf-f7fa-49f0-ae2c-63c28c98c2a8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k9k58" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.524524 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b92s7\" (UniqueName: \"kubernetes.io/projected/d4b50f02-42a0-4d00-a8f1-e37bc8af1ef4-kube-api-access-b92s7\") pod \"machine-config-operator-74547568cd-d88tr\" (UID: \"d4b50f02-42a0-4d00-a8f1-e37bc8af1ef4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-d88tr" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.524541 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4f528\" (UniqueName: \"kubernetes.io/projected/1520d3c0-4377-4a22-b7a2-025b6a9ac171-kube-api-access-4f528\") pod \"ingress-operator-5b745b69d9-ph5pw\" (UID: \"1520d3c0-4377-4a22-b7a2-025b6a9ac171\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ph5pw" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.524557 4731 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-42555\" (UniqueName: \"kubernetes.io/projected/3e3b7a64-8ec0-4b5e-86da-0c9d6d0428f3-kube-api-access-42555\") pod \"machine-config-controller-84d6567774-4dbfk\" (UID: \"3e3b7a64-8ec0-4b5e-86da-0c9d6d0428f3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4dbfk" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.525061 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e6fc2573-9480-47d2-89b0-36b4501ef6e7-serving-cert\") pod \"service-ca-operator-777779d784-wbfmm\" (UID: \"e6fc2573-9480-47d2-89b0-36b4501ef6e7\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wbfmm" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.525081 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7swq7\" (UniqueName: \"kubernetes.io/projected/f9adc895-cdb2-4bd5-87ff-aba173a1e6da-kube-api-access-7swq7\") pod \"catalog-operator-68c6474976-4j6z7\" (UID: \"f9adc895-cdb2-4bd5-87ff-aba173a1e6da\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4j6z7" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.525137 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3e3b7a64-8ec0-4b5e-86da-0c9d6d0428f3-proxy-tls\") pod \"machine-config-controller-84d6567774-4dbfk\" (UID: \"3e3b7a64-8ec0-4b5e-86da-0c9d6d0428f3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4dbfk" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.525165 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1520d3c0-4377-4a22-b7a2-025b6a9ac171-trusted-ca\") pod \"ingress-operator-5b745b69d9-ph5pw\" (UID: \"1520d3c0-4377-4a22-b7a2-025b6a9ac171\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ph5pw" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.525186 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7efbdd7b-0ed3-493a-ad73-530648c5ce6e-srv-cert\") pod \"olm-operator-6b444d44fb-4t7gr\" (UID: \"7efbdd7b-0ed3-493a-ad73-530648c5ce6e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4t7gr" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.525207 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5vtj\" (UniqueName: \"kubernetes.io/projected/2694d49e-eb78-4db3-b047-2854125b8b26-kube-api-access-t5vtj\") pod \"migrator-59844c95c7-4zskc\" (UID: \"2694d49e-eb78-4db3-b047-2854125b8b26\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4zskc" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.525226 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f2736b5d-2f13-4ef1-8bed-eadb88be8573-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-wcj4m\" (UID: \"f2736b5d-2f13-4ef1-8bed-eadb88be8573\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wcj4m" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.525245 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sm5sz\" (UniqueName: \"kubernetes.io/projected/e6fc2573-9480-47d2-89b0-36b4501ef6e7-kube-api-access-sm5sz\") pod \"service-ca-operator-777779d784-wbfmm\" (UID: \"e6fc2573-9480-47d2-89b0-36b4501ef6e7\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wbfmm" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.525263 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/f2736b5d-2f13-4ef1-8bed-eadb88be8573-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-wcj4m\" (UID: \"f2736b5d-2f13-4ef1-8bed-eadb88be8573\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wcj4m" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.525294 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxrv5\" (UniqueName: \"kubernetes.io/projected/d7a41747-97b6-4431-ab85-a990220f34e7-kube-api-access-sxrv5\") pod \"package-server-manager-789f6589d5-dzwrr\" (UID: \"d7a41747-97b6-4431-ab85-a990220f34e7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dzwrr" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.525316 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f9adc895-cdb2-4bd5-87ff-aba173a1e6da-profile-collector-cert\") pod \"catalog-operator-68c6474976-4j6z7\" (UID: \"f9adc895-cdb2-4bd5-87ff-aba173a1e6da\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4j6z7" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.525334 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7efbdd7b-0ed3-493a-ad73-530648c5ce6e-profile-collector-cert\") pod \"olm-operator-6b444d44fb-4t7gr\" (UID: \"7efbdd7b-0ed3-493a-ad73-530648c5ce6e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4t7gr" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.525356 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgzwp\" (UniqueName: \"kubernetes.io/projected/8d48db02-9081-4e36-a6db-caa659b1eeb9-kube-api-access-zgzwp\") pod \"control-plane-machine-set-operator-78cbb6b69f-dm9bz\" (UID: \"8d48db02-9081-4e36-a6db-caa659b1eeb9\") " 
pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dm9bz" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.525378 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/d7a41747-97b6-4431-ab85-a990220f34e7-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-dzwrr\" (UID: \"d7a41747-97b6-4431-ab85-a990220f34e7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dzwrr" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.525407 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b9c89890-1965-4fd0-875b-aed6485d9075-socket-dir\") pod \"csi-hostpathplugin-nn8lp\" (UID: \"b9c89890-1965-4fd0-875b-aed6485d9075\") " pod="hostpath-provisioner/csi-hostpathplugin-nn8lp" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.525429 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2736b5d-2f13-4ef1-8bed-eadb88be8573-config\") pod \"kube-controller-manager-operator-78b949d7b-wcj4m\" (UID: \"f2736b5d-2f13-4ef1-8bed-eadb88be8573\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wcj4m" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.525642 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f435c3d-3db2-44dc-8a50-ea8f9475daa0-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-2qgzh\" (UID: \"8f435c3d-3db2-44dc-8a50-ea8f9475daa0\") " pod="openshift-marketplace/marketplace-operator-79b997595-2qgzh" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.527448 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" 
(UniqueName: \"kubernetes.io/secret/8f435c3d-3db2-44dc-8a50-ea8f9475daa0-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-2qgzh\" (UID: \"8f435c3d-3db2-44dc-8a50-ea8f9475daa0\") " pod="openshift-marketplace/marketplace-operator-79b997595-2qgzh" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.527497 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gk2jc\" (UniqueName: \"kubernetes.io/projected/8f435c3d-3db2-44dc-8a50-ea8f9475daa0-kube-api-access-gk2jc\") pod \"marketplace-operator-79b997595-2qgzh\" (UID: \"8f435c3d-3db2-44dc-8a50-ea8f9475daa0\") " pod="openshift-marketplace/marketplace-operator-79b997595-2qgzh" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.527526 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-968qq\" (UniqueName: \"kubernetes.io/projected/c47b7935-c3e7-4f98-b361-87ee3b481c3d-kube-api-access-968qq\") pod \"collect-profiles-29406660-2pc6s\" (UID: \"c47b7935-c3e7-4f98-b361-87ee3b481c3d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406660-2pc6s" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.527529 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/b9c89890-1965-4fd0-875b-aed6485d9075-plugins-dir\") pod \"csi-hostpathplugin-nn8lp\" (UID: \"b9c89890-1965-4fd0-875b-aed6485d9075\") " pod="hostpath-provisioner/csi-hostpathplugin-nn8lp" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.527585 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxqbj\" (UniqueName: \"kubernetes.io/projected/07261c84-a163-4863-ae02-1fba80ec0b8f-kube-api-access-mxqbj\") pod \"ingress-canary-s2twr\" (UID: \"07261c84-a163-4863-ae02-1fba80ec0b8f\") " pod="openshift-ingress-canary/ingress-canary-s2twr" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.527611 4731 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d4b50f02-42a0-4d00-a8f1-e37bc8af1ef4-auth-proxy-config\") pod \"machine-config-operator-74547568cd-d88tr\" (UID: \"d4b50f02-42a0-4d00-a8f1-e37bc8af1ef4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-d88tr" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.527636 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcx8n\" (UniqueName: \"kubernetes.io/projected/b9c89890-1965-4fd0-875b-aed6485d9075-kube-api-access-dcx8n\") pod \"csi-hostpathplugin-nn8lp\" (UID: \"b9c89890-1965-4fd0-875b-aed6485d9075\") " pod="hostpath-provisioner/csi-hostpathplugin-nn8lp" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.527659 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c47b7935-c3e7-4f98-b361-87ee3b481c3d-secret-volume\") pod \"collect-profiles-29406660-2pc6s\" (UID: \"c47b7935-c3e7-4f98-b361-87ee3b481c3d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406660-2pc6s" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.527694 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/443884e3-cad9-4f39-944c-af34d6485520-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8w2nm\" (UID: \"443884e3-cad9-4f39-944c-af34d6485520\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8w2nm" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.527762 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/328a2fcf-7e85-49ad-849c-f32818b5cd87-default-certificate\") pod \"router-default-5444994796-2qd7z\" (UID: \"328a2fcf-7e85-49ad-849c-f32818b5cd87\") 
" pod="openshift-ingress/router-default-5444994796-2qd7z" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.527808 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/443884e3-cad9-4f39-944c-af34d6485520-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8w2nm\" (UID: \"443884e3-cad9-4f39-944c-af34d6485520\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8w2nm" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.527833 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6fc2573-9480-47d2-89b0-36b4501ef6e7-config\") pod \"service-ca-operator-777779d784-wbfmm\" (UID: \"e6fc2573-9480-47d2-89b0-36b4501ef6e7\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wbfmm" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.527852 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/328a2fcf-7e85-49ad-849c-f32818b5cd87-metrics-certs\") pod \"router-default-5444994796-2qd7z\" (UID: \"328a2fcf-7e85-49ad-849c-f32818b5cd87\") " pod="openshift-ingress/router-default-5444994796-2qd7z" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.527886 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3e3b7a64-8ec0-4b5e-86da-0c9d6d0428f3-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-4dbfk\" (UID: \"3e3b7a64-8ec0-4b5e-86da-0c9d6d0428f3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4dbfk" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.527906 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: 
\"kubernetes.io/host-path/b9c89890-1965-4fd0-875b-aed6485d9075-mountpoint-dir\") pod \"csi-hostpathplugin-nn8lp\" (UID: \"b9c89890-1965-4fd0-875b-aed6485d9075\") " pod="hostpath-provisioner/csi-hostpathplugin-nn8lp" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.527940 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b9c89890-1965-4fd0-875b-aed6485d9075-registration-dir\") pod \"csi-hostpathplugin-nn8lp\" (UID: \"b9c89890-1965-4fd0-875b-aed6485d9075\") " pod="hostpath-provisioner/csi-hostpathplugin-nn8lp" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.528011 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/328a2fcf-7e85-49ad-849c-f32818b5cd87-stats-auth\") pod \"router-default-5444994796-2qd7z\" (UID: \"328a2fcf-7e85-49ad-849c-f32818b5cd87\") " pod="openshift-ingress/router-default-5444994796-2qd7z" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.528032 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3630baf-f7fa-49f0-ae2c-63c28c98c2a8-config\") pod \"kube-apiserver-operator-766d6c64bb-k9k58\" (UID: \"c3630baf-f7fa-49f0-ae2c-63c28c98c2a8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k9k58" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.528059 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/8d48db02-9081-4e36-a6db-caa659b1eeb9-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-dm9bz\" (UID: \"8d48db02-9081-4e36-a6db-caa659b1eeb9\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dm9bz" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.528113 4731 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ca1d17af-b945-4ed5-8e57-e8145d3692b4-apiservice-cert\") pod \"packageserver-d55dfcdfc-sxxn4\" (UID: \"ca1d17af-b945-4ed5-8e57-e8145d3692b4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sxxn4" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.528160 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/adc9b8a0-7f08-4fbb-ab52-aea81a845c05-signing-key\") pod \"service-ca-9c57cc56f-t7s8k\" (UID: \"adc9b8a0-7f08-4fbb-ab52-aea81a845c05\") " pod="openshift-service-ca/service-ca-9c57cc56f-t7s8k" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.528865 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/ca1d17af-b945-4ed5-8e57-e8145d3692b4-tmpfs\") pod \"packageserver-d55dfcdfc-sxxn4\" (UID: \"ca1d17af-b945-4ed5-8e57-e8145d3692b4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sxxn4" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.528892 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d4b50f02-42a0-4d00-a8f1-e37bc8af1ef4-images\") pod \"machine-config-operator-74547568cd-d88tr\" (UID: \"d4b50f02-42a0-4d00-a8f1-e37bc8af1ef4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-d88tr" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.530652 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd4d1f56-2467-4d46-80f3-23dd16cd6707-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-g9mjr\" (UID: \"dd4d1f56-2467-4d46-80f3-23dd16cd6707\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-g9mjr" 
Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.530791 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/b9c89890-1965-4fd0-875b-aed6485d9075-csi-data-dir\") pod \"csi-hostpathplugin-nn8lp\" (UID: \"b9c89890-1965-4fd0-875b-aed6485d9075\") " pod="hostpath-provisioner/csi-hostpathplugin-nn8lp" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.531422 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/328a2fcf-7e85-49ad-849c-f32818b5cd87-service-ca-bundle\") pod \"router-default-5444994796-2qd7z\" (UID: \"328a2fcf-7e85-49ad-849c-f32818b5cd87\") " pod="openshift-ingress/router-default-5444994796-2qd7z" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.531495 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a49785b1-8138-4597-91ad-8d6fd4787286-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-wrp7q\" (UID: \"a49785b1-8138-4597-91ad-8d6fd4787286\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-wrp7q" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.531946 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d4b50f02-42a0-4d00-a8f1-e37bc8af1ef4-proxy-tls\") pod \"machine-config-operator-74547568cd-d88tr\" (UID: \"d4b50f02-42a0-4d00-a8f1-e37bc8af1ef4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-d88tr" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.531958 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-kwjhb"] Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.528022 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: 
\"kubernetes.io/host-path/b9c89890-1965-4fd0-875b-aed6485d9075-registration-dir\") pod \"csi-hostpathplugin-nn8lp\" (UID: \"b9c89890-1965-4fd0-875b-aed6485d9075\") " pod="hostpath-provisioner/csi-hostpathplugin-nn8lp" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.532341 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ca1d17af-b945-4ed5-8e57-e8145d3692b4-webhook-cert\") pod \"packageserver-d55dfcdfc-sxxn4\" (UID: \"ca1d17af-b945-4ed5-8e57-e8145d3692b4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sxxn4" Nov 29 07:08:37 crc kubenswrapper[4731]: E1129 07:08:37.532512 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:08:38.032484489 +0000 UTC m=+156.922845792 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.540737 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/b9c89890-1965-4fd0-875b-aed6485d9075-mountpoint-dir\") pod \"csi-hostpathplugin-nn8lp\" (UID: \"b9c89890-1965-4fd0-875b-aed6485d9075\") " pod="hostpath-provisioner/csi-hostpathplugin-nn8lp" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.540727 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.540880 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b9c89890-1965-4fd0-875b-aed6485d9075-socket-dir\") pod \"csi-hostpathplugin-nn8lp\" (UID: \"b9c89890-1965-4fd0-875b-aed6485d9075\") " pod="hostpath-provisioner/csi-hostpathplugin-nn8lp" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.552598 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bfdjj"] Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.552723 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f9adc895-cdb2-4bd5-87ff-aba173a1e6da-srv-cert\") pod \"catalog-operator-68c6474976-4j6z7\" (UID: \"f9adc895-cdb2-4bd5-87ff-aba173a1e6da\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4j6z7" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.555247 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2736b5d-2f13-4ef1-8bed-eadb88be8573-config\") pod \"kube-controller-manager-operator-78b949d7b-wcj4m\" (UID: \"f2736b5d-2f13-4ef1-8bed-eadb88be8573\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wcj4m" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.555867 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.559502 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/443884e3-cad9-4f39-944c-af34d6485520-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8w2nm\" (UID: \"443884e3-cad9-4f39-944c-af34d6485520\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8w2nm" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.560407 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d4b50f02-42a0-4d00-a8f1-e37bc8af1ef4-auth-proxy-config\") pod \"machine-config-operator-74547568cd-d88tr\" (UID: \"d4b50f02-42a0-4d00-a8f1-e37bc8af1ef4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-d88tr" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.563079 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/443884e3-cad9-4f39-944c-af34d6485520-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8w2nm\" (UID: \"443884e3-cad9-4f39-944c-af34d6485520\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8w2nm" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.563248 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ca1d17af-b945-4ed5-8e57-e8145d3692b4-apiservice-cert\") pod \"packageserver-d55dfcdfc-sxxn4\" (UID: \"ca1d17af-b945-4ed5-8e57-e8145d3692b4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sxxn4" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.563762 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c47b7935-c3e7-4f98-b361-87ee3b481c3d-secret-volume\") pod \"collect-profiles-29406660-2pc6s\" (UID: \"c47b7935-c3e7-4f98-b361-87ee3b481c3d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406660-2pc6s" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.565640 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/328a2fcf-7e85-49ad-849c-f32818b5cd87-stats-auth\") pod \"router-default-5444994796-2qd7z\" (UID: \"328a2fcf-7e85-49ad-849c-f32818b5cd87\") " pod="openshift-ingress/router-default-5444994796-2qd7z" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.565872 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c3630baf-f7fa-49f0-ae2c-63c28c98c2a8-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-k9k58\" (UID: \"c3630baf-f7fa-49f0-ae2c-63c28c98c2a8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k9k58" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.566746 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1520d3c0-4377-4a22-b7a2-025b6a9ac171-metrics-tls\") pod \"ingress-operator-5b745b69d9-ph5pw\" 
(UID: \"1520d3c0-4377-4a22-b7a2-025b6a9ac171\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ph5pw" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.568997 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2736b5d-2f13-4ef1-8bed-eadb88be8573-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-wcj4m\" (UID: \"f2736b5d-2f13-4ef1-8bed-eadb88be8573\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wcj4m" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.570893 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7efbdd7b-0ed3-493a-ad73-530648c5ce6e-profile-collector-cert\") pod \"olm-operator-6b444d44fb-4t7gr\" (UID: \"7efbdd7b-0ed3-493a-ad73-530648c5ce6e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4t7gr" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.573233 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.577011 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/328a2fcf-7e85-49ad-849c-f32818b5cd87-metrics-certs\") pod \"router-default-5444994796-2qd7z\" (UID: \"328a2fcf-7e85-49ad-849c-f32818b5cd87\") " pod="openshift-ingress/router-default-5444994796-2qd7z" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.577718 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f9adc895-cdb2-4bd5-87ff-aba173a1e6da-profile-collector-cert\") pod \"catalog-operator-68c6474976-4j6z7\" (UID: \"f9adc895-cdb2-4bd5-87ff-aba173a1e6da\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4j6z7" Nov 29 07:08:37 crc 
kubenswrapper[4731]: I1129 07:08:37.578297 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/328a2fcf-7e85-49ad-849c-f32818b5cd87-default-certificate\") pod \"router-default-5444994796-2qd7z\" (UID: \"328a2fcf-7e85-49ad-849c-f32818b5cd87\") " pod="openshift-ingress/router-default-5444994796-2qd7z" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.580655 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-p7p7h"] Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.581230 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3e3b7a64-8ec0-4b5e-86da-0c9d6d0428f3-proxy-tls\") pod \"machine-config-controller-84d6567774-4dbfk\" (UID: \"3e3b7a64-8ec0-4b5e-86da-0c9d6d0428f3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4dbfk" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.581704 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd4d1f56-2467-4d46-80f3-23dd16cd6707-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-g9mjr\" (UID: \"dd4d1f56-2467-4d46-80f3-23dd16cd6707\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-g9mjr" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.582034 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1520d3c0-4377-4a22-b7a2-025b6a9ac171-trusted-ca\") pod \"ingress-operator-5b745b69d9-ph5pw\" (UID: \"1520d3c0-4377-4a22-b7a2-025b6a9ac171\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ph5pw" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.589625 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3e3b7a64-8ec0-4b5e-86da-0c9d6d0428f3-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-4dbfk\" (UID: \"3e3b7a64-8ec0-4b5e-86da-0c9d6d0428f3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4dbfk" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.601077 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/adc9b8a0-7f08-4fbb-ab52-aea81a845c05-signing-key\") pod \"service-ca-9c57cc56f-t7s8k\" (UID: \"adc9b8a0-7f08-4fbb-ab52-aea81a845c05\") " pod="openshift-service-ca/service-ca-9c57cc56f-t7s8k" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.601913 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7efbdd7b-0ed3-493a-ad73-530648c5ce6e-srv-cert\") pod \"olm-operator-6b444d44fb-4t7gr\" (UID: \"7efbdd7b-0ed3-493a-ad73-530648c5ce6e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4t7gr" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.605381 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.614995 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.630280 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8nrfn\" (UID: \"cf2cdf59-237b-432e-9e41-c37078755275\") " pod="openshift-image-registry/image-registry-697d97f7c8-8nrfn" Nov 29 07:08:37 crc kubenswrapper[4731]: E1129 07:08:37.630737 4731 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:08:38.13071675 +0000 UTC m=+157.021077913 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8nrfn" (UID: "cf2cdf59-237b-432e-9e41-c37078755275") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.636069 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.639346 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/adc9b8a0-7f08-4fbb-ab52-aea81a845c05-signing-cabundle\") pod \"service-ca-9c57cc56f-t7s8k\" (UID: \"adc9b8a0-7f08-4fbb-ab52-aea81a845c05\") " pod="openshift-service-ca/service-ca-9c57cc56f-t7s8k" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.644890 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-htrhs"] Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.654156 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.665768 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/d7a41747-97b6-4431-ab85-a990220f34e7-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-dzwrr\" (UID: 
\"d7a41747-97b6-4431-ab85-a990220f34e7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dzwrr" Nov 29 07:08:37 crc kubenswrapper[4731]: W1129 07:08:37.669285 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod55949699_24bb_4705_8bf0_db1dd651d387.slice/crio-69af07861251c1200e6c7545db480802cadd7c2bf98b48e16e4d149de52f526b WatchSource:0}: Error finding container 69af07861251c1200e6c7545db480802cadd7c2bf98b48e16e4d149de52f526b: Status 404 returned error can't find the container with id 69af07861251c1200e6c7545db480802cadd7c2bf98b48e16e4d149de52f526b Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.672774 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.687785 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/68682703-479c-473f-8833-7210bc2597c1-certs\") pod \"machine-config-server-5tpdm\" (UID: \"68682703-479c-473f-8833-7210bc2597c1\") " pod="openshift-machine-config-operator/machine-config-server-5tpdm" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.693077 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.695702 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-hs7k4"] Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.715062 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.722130 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: 
\"kubernetes.io/secret/68682703-479c-473f-8833-7210bc2597c1-node-bootstrap-token\") pod \"machine-config-server-5tpdm\" (UID: \"68682703-479c-473f-8833-7210bc2597c1\") " pod="openshift-machine-config-operator/machine-config-server-5tpdm" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.731640 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:08:37 crc kubenswrapper[4731]: E1129 07:08:37.732459 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:08:38.232433107 +0000 UTC m=+157.122794210 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.736313 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.754528 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.758285 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dcvjn"] Nov 29 07:08:37 crc kubenswrapper[4731]: W1129 07:08:37.773468 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4fa61345_b935_4924_a05b_58a9ec104f07.slice/crio-6d03e5f7c95a490bc7ab4727e31d876b35a47658bcf0d84cd8714c0473da1d6d WatchSource:0}: Error finding container 6d03e5f7c95a490bc7ab4727e31d876b35a47658bcf0d84cd8714c0473da1d6d: Status 404 returned error can't find the container with id 6d03e5f7c95a490bc7ab4727e31d876b35a47658bcf0d84cd8714c0473da1d6d Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.773685 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.781021 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/07261c84-a163-4863-ae02-1fba80ec0b8f-cert\") pod 
\"ingress-canary-s2twr\" (UID: \"07261c84-a163-4863-ae02-1fba80ec0b8f\") " pod="openshift-ingress-canary/ingress-canary-s2twr" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.792631 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.813496 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.819029 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c47b7935-c3e7-4f98-b361-87ee3b481c3d-config-volume\") pod \"collect-profiles-29406660-2pc6s\" (UID: \"c47b7935-c3e7-4f98-b361-87ee3b481c3d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406660-2pc6s" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.833645 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8nrfn\" (UID: \"cf2cdf59-237b-432e-9e41-c37078755275\") " pod="openshift-image-registry/image-registry-697d97f7c8-8nrfn" Nov 29 07:08:37 crc kubenswrapper[4731]: E1129 07:08:37.834222 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:08:38.334205275 +0000 UTC m=+157.224566378 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8nrfn" (UID: "cf2cdf59-237b-432e-9e41-c37078755275") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.834340 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.854154 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.869172 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8f435c3d-3db2-44dc-8a50-ea8f9475daa0-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-2qgzh\" (UID: \"8f435c3d-3db2-44dc-8a50-ea8f9475daa0\") " pod="openshift-marketplace/marketplace-operator-79b997595-2qgzh" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.882380 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.884216 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f435c3d-3db2-44dc-8a50-ea8f9475daa0-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-2qgzh\" (UID: \"8f435c3d-3db2-44dc-8a50-ea8f9475daa0\") " pod="openshift-marketplace/marketplace-operator-79b997595-2qgzh" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.893311 4731 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.913309 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.934509 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.934976 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:08:37 crc kubenswrapper[4731]: E1129 07:08:37.935111 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:08:38.435079499 +0000 UTC m=+157.325440602 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.935391 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8nrfn\" (UID: \"cf2cdf59-237b-432e-9e41-c37078755275\") " pod="openshift-image-registry/image-registry-697d97f7c8-8nrfn" Nov 29 07:08:37 crc kubenswrapper[4731]: E1129 07:08:37.936316 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:08:38.436290522 +0000 UTC m=+157.326651625 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8nrfn" (UID: "cf2cdf59-237b-432e-9e41-c37078755275") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.954351 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Nov 29 07:08:37 crc kubenswrapper[4731]: I1129 07:08:37.974837 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.013690 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.025373 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6fc2573-9480-47d2-89b0-36b4501ef6e7-config\") pod \"service-ca-operator-777779d784-wbfmm\" (UID: \"e6fc2573-9480-47d2-89b0-36b4501ef6e7\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wbfmm" Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.033909 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.036858 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:08:38 crc kubenswrapper[4731]: E1129 07:08:38.037066 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:08:38.537035212 +0000 UTC m=+157.427396325 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.037376 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8nrfn\" (UID: \"cf2cdf59-237b-432e-9e41-c37078755275\") " pod="openshift-image-registry/image-registry-697d97f7c8-8nrfn" Nov 29 07:08:38 crc kubenswrapper[4731]: E1129 07:08:38.037833 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:08:38.537821804 +0000 UTC m=+157.428182907 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8nrfn" (UID: "cf2cdf59-237b-432e-9e41-c37078755275") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.053052 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.061249 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e6fc2573-9480-47d2-89b0-36b4501ef6e7-serving-cert\") pod \"service-ca-operator-777779d784-wbfmm\" (UID: \"e6fc2573-9480-47d2-89b0-36b4501ef6e7\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wbfmm" Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.073508 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.093427 4731 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.094671 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bc855" event={"ID":"85473ad4-f055-4531-a19e-30697cd51568","Type":"ContainerStarted","Data":"34af1b4fed6c4623a18a4ebaed0dce32a48ece56a86bfd830de321bb9798ab92"} Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.094725 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bc855" 
event={"ID":"85473ad4-f055-4531-a19e-30697cd51568","Type":"ContainerStarted","Data":"6c331808ecb9f56f853d2a721ac232f073b91f7599c613da70cf7dfbb98c9a09"} Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.095843 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-hs7k4" event={"ID":"e80651be-fbdb-464e-876a-c090e2fa0475","Type":"ContainerStarted","Data":"2794d4642e08bc6a95ac93b2412fd0b6abfd6d9cfe33b7216a85c53d3e023bba"} Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.095876 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-hs7k4" event={"ID":"e80651be-fbdb-464e-876a-c090e2fa0475","Type":"ContainerStarted","Data":"e1895327a5159a76cc9ed9d2bd628b9bbf05725426a75ba9af9f6b06eb919c9a"} Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.096914 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-hs7k4" Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.100657 4731 patch_prober.go:28] interesting pod/console-operator-58897d9998-hs7k4 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/readyz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.100702 4731 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-hs7k4" podUID="e80651be-fbdb-464e-876a-c090e2fa0475" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.7:8443/readyz\": dial tcp 10.217.0.7:8443: connect: connection refused" Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.101778 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-htrhs" 
event={"ID":"55949699-24bb-4705-8bf0-db1dd651d387","Type":"ContainerStarted","Data":"abdd64ce7fc79e33848fd59c44c41d124dfce45bd7876efa6e2dc1db8861dd54"} Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.101818 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-htrhs" event={"ID":"55949699-24bb-4705-8bf0-db1dd651d387","Type":"ContainerStarted","Data":"69af07861251c1200e6c7545db480802cadd7c2bf98b48e16e4d149de52f526b"} Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.103409 4731 generic.go:334] "Generic (PLEG): container finished" podID="0123bcb6-853a-4329-bceb-87a77cd34b27" containerID="eb8cc8f35bfa0e9fc55aec54497fca749b3f8797f473f164ba40abff3ead0d57" exitCode=0 Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.103460 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p7p7h" event={"ID":"0123bcb6-853a-4329-bceb-87a77cd34b27","Type":"ContainerDied","Data":"eb8cc8f35bfa0e9fc55aec54497fca749b3f8797f473f164ba40abff3ead0d57"} Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.103478 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p7p7h" event={"ID":"0123bcb6-853a-4329-bceb-87a77cd34b27","Type":"ContainerStarted","Data":"c5d7ed3eacb08fb4c563f3d9f716f3c0ceed7ec1f9eacc34eba60509773e7946"} Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.105972 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dcvjn" event={"ID":"4fa61345-b935-4924-a05b-58a9ec104f07","Type":"ContainerStarted","Data":"ee5cb39c0d390a97d1218943328ca738d1b58ec75ebf19a9cdfa0e8a928a2300"} Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.106010 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dcvjn" 
event={"ID":"4fa61345-b935-4924-a05b-58a9ec104f07","Type":"ContainerStarted","Data":"6d03e5f7c95a490bc7ab4727e31d876b35a47658bcf0d84cd8714c0473da1d6d"} Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.107746 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bfdjj" event={"ID":"6e474597-96ba-424e-967d-48c16424ef23","Type":"ContainerStarted","Data":"afbfd4f75bc81d2ef6f7296f1d1c2df20f08e61ddbc515a2049765b196398b9c"} Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.107781 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bfdjj" event={"ID":"6e474597-96ba-424e-967d-48c16424ef23","Type":"ContainerStarted","Data":"ec90b3cecb6016605971332668563d6bd581d99f26211ea54826547d47a3cc91"} Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.112490 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.113141 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"65438d2437fc7b0d6a22fb955f40e59b552f71c7dc1246041b491232bb0907ac"} Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.113978 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.116209 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kwjhb" event={"ID":"aa040abb-6524-4abd-834f-18b72a623d16","Type":"ContainerStarted","Data":"5192bb263495f00e5732a6dd207ca5b69f514e7f8b1dfd6944c5329abe43852e"} Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.116245 4731 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kwjhb" event={"ID":"aa040abb-6524-4abd-834f-18b72a623d16","Type":"ContainerStarted","Data":"5c51c84780b2c9bc32f72cc9ebed940fe4e18fc0eec03c9c033a278bca948789"} Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.116411 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kwjhb" Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.119738 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"aa573a5a046be91338e049fa6ec1468d0190a9a41605ca91d0992b67335c25d1"} Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.121120 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"58a00eda27e42d3310e3fde8fdbc1803040e422becf3df8f2bab35863355d120"} Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.133792 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.139029 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:08:38 crc kubenswrapper[4731]: E1129 07:08:38.139211 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:08:38.639175501 +0000 UTC m=+157.529536604 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.139611 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8nrfn\" (UID: \"cf2cdf59-237b-432e-9e41-c37078755275\") " pod="openshift-image-registry/image-registry-697d97f7c8-8nrfn" Nov 29 07:08:38 crc kubenswrapper[4731]: E1129 07:08:38.140041 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:08:38.640031824 +0000 UTC m=+157.530392927 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8nrfn" (UID: "cf2cdf59-237b-432e-9e41-c37078755275") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.146706 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/8d48db02-9081-4e36-a6db-caa659b1eeb9-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-dm9bz\" (UID: \"8d48db02-9081-4e36-a6db-caa659b1eeb9\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dm9bz" Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.153745 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.190467 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b99f2\" (UniqueName: \"kubernetes.io/projected/5f3c7091-33a8-4be0-bb55-63300514c205-kube-api-access-b99f2\") pod \"authentication-operator-69f744f599-s72t6\" (UID: \"5f3c7091-33a8-4be0-bb55-63300514c205\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-s72t6" Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.216472 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmkvx\" (UniqueName: \"kubernetes.io/projected/40778a89-0bd9-4b5d-a024-f2fec55bfa8f-kube-api-access-hmkvx\") pod \"openshift-config-operator-7777fb866f-gw6c8\" (UID: \"40778a89-0bd9-4b5d-a024-f2fec55bfa8f\") " 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-gw6c8" Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.236240 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/144f3608-d338-4452-8bd9-a5fa47914090-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-sjt7j\" (UID: \"144f3608-d338-4452-8bd9-a5fa47914090\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-sjt7j" Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.249696 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:08:38 crc kubenswrapper[4731]: E1129 07:08:38.250631 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:08:38.750586323 +0000 UTC m=+157.640947426 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.256510 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ltrf\" (UniqueName: \"kubernetes.io/projected/5da2fff0-6264-4369-9c21-d322fa65c6b0-kube-api-access-2ltrf\") pod \"etcd-operator-b45778765-l8mm6\" (UID: \"5da2fff0-6264-4369-9c21-d322fa65c6b0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-l8mm6" Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.257058 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gw6c8" Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.271312 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ssp5\" (UniqueName: \"kubernetes.io/projected/144f3608-d338-4452-8bd9-a5fa47914090-kube-api-access-5ssp5\") pod \"cluster-image-registry-operator-dc59b4c8b-sjt7j\" (UID: \"144f3608-d338-4452-8bd9-a5fa47914090\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-sjt7j" Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.290966 4731 request.go:700] Waited for 1.004622034s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/serviceaccounts/openshift-controller-manager-sa/token Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.301278 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pv9sj\" 
(UniqueName: \"kubernetes.io/projected/0ca53b84-140e-4fbf-b822-03a1c73d04aa-kube-api-access-pv9sj\") pod \"dns-default-cn2xp\" (UID: \"0ca53b84-140e-4fbf-b822-03a1c73d04aa\") " pod="openshift-dns/dns-default-cn2xp" Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.313629 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vm98c\" (UniqueName: \"kubernetes.io/projected/914f7ecc-b403-4f7e-9a14-3f56a5a256a9-kube-api-access-vm98c\") pod \"controller-manager-879f6c89f-4scbk\" (UID: \"914f7ecc-b403-4f7e-9a14-3f56a5a256a9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4scbk" Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.330101 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6x8wg\" (UniqueName: \"kubernetes.io/projected/d639491c-0fbd-44a6-b273-37dcc1e5681d-kube-api-access-6x8wg\") pod \"oauth-openshift-558db77b4-qg27s\" (UID: \"d639491c-0fbd-44a6-b273-37dcc1e5681d\") " pod="openshift-authentication/oauth-openshift-558db77b4-qg27s" Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.331835 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-l8mm6" Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.352403 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8nrfn\" (UID: \"cf2cdf59-237b-432e-9e41-c37078755275\") " pod="openshift-image-registry/image-registry-697d97f7c8-8nrfn" Nov 29 07:08:38 crc kubenswrapper[4731]: E1129 07:08:38.353093 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-11-29 07:08:38.853076841 +0000 UTC m=+157.743437944 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8nrfn" (UID: "cf2cdf59-237b-432e-9e41-c37078755275") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.358799 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-s72t6" Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.362271 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lf5w7\" (UniqueName: \"kubernetes.io/projected/5e911d5c-fa21-47e2-9ab8-12f919978585-kube-api-access-lf5w7\") pod \"cluster-samples-operator-665b6dd947-qdch2\" (UID: \"5e911d5c-fa21-47e2-9ab8-12f919978585\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qdch2" Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.375405 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ths8\" (UniqueName: \"kubernetes.io/projected/62d5acae-8dde-41c0-bbf4-66d294b8b64b-kube-api-access-9ths8\") pod \"dns-operator-744455d44c-n5tn2\" (UID: \"62d5acae-8dde-41c0-bbf4-66d294b8b64b\") " pod="openshift-dns-operator/dns-operator-744455d44c-n5tn2" Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.379351 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-cn2xp" Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.403514 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wz6m2\" (UniqueName: \"kubernetes.io/projected/4059535c-148b-4694-8c6f-ee8aae8ddc18-kube-api-access-wz6m2\") pod \"downloads-7954f5f757-c6kf9\" (UID: \"4059535c-148b-4694-8c6f-ee8aae8ddc18\") " pod="openshift-console/downloads-7954f5f757-c6kf9" Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.423462 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cf2cdf59-237b-432e-9e41-c37078755275-bound-sa-token\") pod \"image-registry-697d97f7c8-8nrfn\" (UID: \"cf2cdf59-237b-432e-9e41-c37078755275\") " pod="openshift-image-registry/image-registry-697d97f7c8-8nrfn" Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.452332 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzdck\" (UniqueName: \"kubernetes.io/projected/ec651e57-2be1-4076-93f5-bcfa036b4624-kube-api-access-tzdck\") pod \"machine-api-operator-5694c8668f-xstx4\" (UID: \"ec651e57-2be1-4076-93f5-bcfa036b4624\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xstx4" Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.453521 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:08:38 crc kubenswrapper[4731]: E1129 07:08:38.454248 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-29 07:08:38.954225772 +0000 UTC m=+157.844586875 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.459524 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctvdn\" (UniqueName: \"kubernetes.io/projected/1235c0c3-c8ad-4cff-80f0-56bfdfa8dabe-kube-api-access-ctvdn\") pod \"apiserver-76f77b778f-m7s4c\" (UID: \"1235c0c3-c8ad-4cff-80f0-56bfdfa8dabe\") " pod="openshift-apiserver/apiserver-76f77b778f-m7s4c" Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.474088 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-4scbk" Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.479551 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjbw6\" (UniqueName: \"kubernetes.io/projected/cf2cdf59-237b-432e-9e41-c37078755275-kube-api-access-qjbw6\") pod \"image-registry-697d97f7c8-8nrfn\" (UID: \"cf2cdf59-237b-432e-9e41-c37078755275\") " pod="openshift-image-registry/image-registry-697d97f7c8-8nrfn" Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.498588 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-sjt7j" Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.512603 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8khpr\" (UniqueName: \"kubernetes.io/projected/ca1d17af-b945-4ed5-8e57-e8145d3692b4-kube-api-access-8khpr\") pod \"packageserver-d55dfcdfc-sxxn4\" (UID: \"ca1d17af-b945-4ed5-8e57-e8145d3692b4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sxxn4" Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.512925 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qdch2" Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.533030 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-qg27s" Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.557990 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8nrfn\" (UID: \"cf2cdf59-237b-432e-9e41-c37078755275\") " pod="openshift-image-registry/image-registry-697d97f7c8-8nrfn" Nov 29 07:08:38 crc kubenswrapper[4731]: E1129 07:08:38.558602 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:08:39.058584761 +0000 UTC m=+157.948945884 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8nrfn" (UID: "cf2cdf59-237b-432e-9e41-c37078755275") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.559242 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtbn4\" (UniqueName: \"kubernetes.io/projected/7efbdd7b-0ed3-493a-ad73-530648c5ce6e-kube-api-access-qtbn4\") pod \"olm-operator-6b444d44fb-4t7gr\" (UID: \"7efbdd7b-0ed3-493a-ad73-530648c5ce6e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4t7gr" Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.571517 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5ngp\" (UniqueName: \"kubernetes.io/projected/adc9b8a0-7f08-4fbb-ab52-aea81a845c05-kube-api-access-r5ngp\") pod \"service-ca-9c57cc56f-t7s8k\" (UID: \"adc9b8a0-7f08-4fbb-ab52-aea81a845c05\") " pod="openshift-service-ca/service-ca-9c57cc56f-t7s8k" Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.580850 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-c6kf9" Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.589442 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhf2k\" (UniqueName: \"kubernetes.io/projected/328a2fcf-7e85-49ad-849c-f32818b5cd87-kube-api-access-vhf2k\") pod \"router-default-5444994796-2qd7z\" (UID: \"328a2fcf-7e85-49ad-849c-f32818b5cd87\") " pod="openshift-ingress/router-default-5444994796-2qd7z" Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.598142 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1520d3c0-4377-4a22-b7a2-025b6a9ac171-bound-sa-token\") pod \"ingress-operator-5b745b69d9-ph5pw\" (UID: \"1520d3c0-4377-4a22-b7a2-025b6a9ac171\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ph5pw" Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.600178 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kwjhb" Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.602379 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-m7s4c" Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.606060 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-xstx4" Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.635226 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5gpv\" (UniqueName: \"kubernetes.io/projected/68682703-479c-473f-8833-7210bc2597c1-kube-api-access-f5gpv\") pod \"machine-config-server-5tpdm\" (UID: \"68682703-479c-473f-8833-7210bc2597c1\") " pod="openshift-machine-config-operator/machine-config-server-5tpdm" Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.640196 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnsgb\" (UniqueName: \"kubernetes.io/projected/dd4d1f56-2467-4d46-80f3-23dd16cd6707-kube-api-access-vnsgb\") pod \"kube-storage-version-migrator-operator-b67b599dd-g9mjr\" (UID: \"dd4d1f56-2467-4d46-80f3-23dd16cd6707\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-g9mjr" Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.660498 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:08:38 crc kubenswrapper[4731]: E1129 07:08:38.660793 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:08:39.160762281 +0000 UTC m=+158.051123384 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.660955 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8nrfn\" (UID: \"cf2cdf59-237b-432e-9e41-c37078755275\") " pod="openshift-image-registry/image-registry-697d97f7c8-8nrfn" Nov 29 07:08:38 crc kubenswrapper[4731]: E1129 07:08:38.661458 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:08:39.161442049 +0000 UTC m=+158.051803162 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8nrfn" (UID: "cf2cdf59-237b-432e-9e41-c37078755275") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.661755 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-l8mm6"] Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.662181 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-n5tn2" Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.668044 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c3630baf-f7fa-49f0-ae2c-63c28c98c2a8-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-k9k58\" (UID: \"c3630baf-f7fa-49f0-ae2c-63c28c98c2a8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k9k58" Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.684114 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b92s7\" (UniqueName: \"kubernetes.io/projected/d4b50f02-42a0-4d00-a8f1-e37bc8af1ef4-kube-api-access-b92s7\") pod \"machine-config-operator-74547568cd-d88tr\" (UID: \"d4b50f02-42a0-4d00-a8f1-e37bc8af1ef4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-d88tr" Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.730288 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sxxn4" Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.732036 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gk2jc\" (UniqueName: \"kubernetes.io/projected/8f435c3d-3db2-44dc-8a50-ea8f9475daa0-kube-api-access-gk2jc\") pod \"marketplace-operator-79b997595-2qgzh\" (UID: \"8f435c3d-3db2-44dc-8a50-ea8f9475daa0\") " pod="openshift-marketplace/marketplace-operator-79b997595-2qgzh" Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.740286 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5vtj\" (UniqueName: \"kubernetes.io/projected/2694d49e-eb78-4db3-b047-2854125b8b26-kube-api-access-t5vtj\") pod \"migrator-59844c95c7-4zskc\" (UID: \"2694d49e-eb78-4db3-b047-2854125b8b26\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4zskc" Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.745619 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-gw6c8"] Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.753205 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-g9mjr" Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.763101 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:08:38 crc kubenswrapper[4731]: E1129 07:08:38.764622 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:08:39.264591005 +0000 UTC m=+158.154952108 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.764665 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8nrfn\" (UID: \"cf2cdf59-237b-432e-9e41-c37078755275\") " pod="openshift-image-registry/image-registry-697d97f7c8-8nrfn" Nov 29 07:08:38 crc kubenswrapper[4731]: E1129 07:08:38.765134 4731 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:08:39.26512636 +0000 UTC m=+158.155487463 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8nrfn" (UID: "cf2cdf59-237b-432e-9e41-c37078755275") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.766321 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxqbj\" (UniqueName: \"kubernetes.io/projected/07261c84-a163-4863-ae02-1fba80ec0b8f-kube-api-access-mxqbj\") pod \"ingress-canary-s2twr\" (UID: \"07261c84-a163-4863-ae02-1fba80ec0b8f\") " pod="openshift-ingress-canary/ingress-canary-s2twr" Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.767906 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-968qq\" (UniqueName: \"kubernetes.io/projected/c47b7935-c3e7-4f98-b361-87ee3b481c3d-kube-api-access-968qq\") pod \"collect-profiles-29406660-2pc6s\" (UID: \"c47b7935-c3e7-4f98-b361-87ee3b481c3d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406660-2pc6s" Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.786682 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k9k58" Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.797045 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42555\" (UniqueName: \"kubernetes.io/projected/3e3b7a64-8ec0-4b5e-86da-0c9d6d0428f3-kube-api-access-42555\") pod \"machine-config-controller-84d6567774-4dbfk\" (UID: \"3e3b7a64-8ec0-4b5e-86da-0c9d6d0428f3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4dbfk" Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.799883 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4f528\" (UniqueName: \"kubernetes.io/projected/1520d3c0-4377-4a22-b7a2-025b6a9ac171-kube-api-access-4f528\") pod \"ingress-operator-5b745b69d9-ph5pw\" (UID: \"1520d3c0-4377-4a22-b7a2-025b6a9ac171\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ph5pw" Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.809839 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406660-2pc6s" Nov 29 07:08:38 crc kubenswrapper[4731]: W1129 07:08:38.830801 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod40778a89_0bd9_4b5d_a024_f2fec55bfa8f.slice/crio-93501ab01503f09fea2011a32d7ee2643143d265e0096e03b1174c4562b44e9b WatchSource:0}: Error finding container 93501ab01503f09fea2011a32d7ee2643143d265e0096e03b1174c4562b44e9b: Status 404 returned error can't find the container with id 93501ab01503f09fea2011a32d7ee2643143d265e0096e03b1174c4562b44e9b Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.831048 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-2qgzh" Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.832608 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4t7gr" Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.837389 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7swq7\" (UniqueName: \"kubernetes.io/projected/f9adc895-cdb2-4bd5-87ff-aba173a1e6da-kube-api-access-7swq7\") pod \"catalog-operator-68c6474976-4j6z7\" (UID: \"f9adc895-cdb2-4bd5-87ff-aba173a1e6da\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4j6z7" Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.837957 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-2qd7z" Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.845884 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-t7s8k" Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.854274 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-cn2xp"] Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.855700 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crwmt\" (UniqueName: \"kubernetes.io/projected/a49785b1-8138-4597-91ad-8d6fd4787286-kube-api-access-crwmt\") pod \"multus-admission-controller-857f4d67dd-wrp7q\" (UID: \"a49785b1-8138-4597-91ad-8d6fd4787286\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-wrp7q" Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.856119 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-5tpdm" Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.863832 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/443884e3-cad9-4f39-944c-af34d6485520-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8w2nm\" (UID: \"443884e3-cad9-4f39-944c-af34d6485520\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8w2nm" Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.868232 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.887628 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-s2twr" Nov 29 07:08:38 crc kubenswrapper[4731]: E1129 07:08:38.887928 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:08:39.387893653 +0000 UTC m=+158.278254756 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.889353 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgzwp\" (UniqueName: \"kubernetes.io/projected/8d48db02-9081-4e36-a6db-caa659b1eeb9-kube-api-access-zgzwp\") pod \"control-plane-machine-set-operator-78cbb6b69f-dm9bz\" (UID: \"8d48db02-9081-4e36-a6db-caa659b1eeb9\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dm9bz" Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.900477 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8w2nm" Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.908724 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sm5sz\" (UniqueName: \"kubernetes.io/projected/e6fc2573-9480-47d2-89b0-36b4501ef6e7-kube-api-access-sm5sz\") pod \"service-ca-operator-777779d784-wbfmm\" (UID: \"e6fc2573-9480-47d2-89b0-36b4501ef6e7\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wbfmm" Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.938305 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcx8n\" (UniqueName: \"kubernetes.io/projected/b9c89890-1965-4fd0-875b-aed6485d9075-kube-api-access-dcx8n\") pod \"csi-hostpathplugin-nn8lp\" (UID: \"b9c89890-1965-4fd0-875b-aed6485d9075\") " pod="hostpath-provisioner/csi-hostpathplugin-nn8lp" Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.947986 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4zskc" Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.964380 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxrv5\" (UniqueName: \"kubernetes.io/projected/d7a41747-97b6-4431-ab85-a990220f34e7-kube-api-access-sxrv5\") pod \"package-server-manager-789f6589d5-dzwrr\" (UID: \"d7a41747-97b6-4431-ab85-a990220f34e7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dzwrr" Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.964813 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4j6z7" Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.976014 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8nrfn\" (UID: \"cf2cdf59-237b-432e-9e41-c37078755275\") " pod="openshift-image-registry/image-registry-697d97f7c8-8nrfn" Nov 29 07:08:38 crc kubenswrapper[4731]: E1129 07:08:38.976463 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:08:39.476447649 +0000 UTC m=+158.366808752 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8nrfn" (UID: "cf2cdf59-237b-432e-9e41-c37078755275") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.979592 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-d88tr" Nov 29 07:08:38 crc kubenswrapper[4731]: I1129 07:08:38.991855 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f2736b5d-2f13-4ef1-8bed-eadb88be8573-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-wcj4m\" (UID: \"f2736b5d-2f13-4ef1-8bed-eadb88be8573\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wcj4m" Nov 29 07:08:39 crc kubenswrapper[4731]: I1129 07:08:39.026071 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4dbfk" Nov 29 07:08:39 crc kubenswrapper[4731]: I1129 07:08:39.054586 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-s72t6"] Nov 29 07:08:39 crc kubenswrapper[4731]: I1129 07:08:39.077340 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ph5pw" Nov 29 07:08:39 crc kubenswrapper[4731]: I1129 07:08:39.078329 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:08:39 crc kubenswrapper[4731]: E1129 07:08:39.080105 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:08:39.580074629 +0000 UTC m=+158.470435912 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:39 crc kubenswrapper[4731]: I1129 07:08:39.091352 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wcj4m" Nov 29 07:08:39 crc kubenswrapper[4731]: I1129 07:08:39.104376 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-wrp7q" Nov 29 07:08:39 crc kubenswrapper[4731]: I1129 07:08:39.154174 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dzwrr" Nov 29 07:08:39 crc kubenswrapper[4731]: I1129 07:08:39.155595 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-wbfmm" Nov 29 07:08:39 crc kubenswrapper[4731]: I1129 07:08:39.155964 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-nn8lp" Nov 29 07:08:39 crc kubenswrapper[4731]: I1129 07:08:39.167592 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dm9bz" Nov 29 07:08:39 crc kubenswrapper[4731]: I1129 07:08:39.181760 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8nrfn\" (UID: \"cf2cdf59-237b-432e-9e41-c37078755275\") " pod="openshift-image-registry/image-registry-697d97f7c8-8nrfn" Nov 29 07:08:39 crc kubenswrapper[4731]: I1129 07:08:39.186679 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-cn2xp" event={"ID":"0ca53b84-140e-4fbf-b822-03a1c73d04aa","Type":"ContainerStarted","Data":"63a45481aff28ce306a375ec612e602bb951e5664ca2b6364f6698a27195c7c2"} Nov 29 07:08:39 crc kubenswrapper[4731]: E1129 07:08:39.189371 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:08:39.689347582 +0000 UTC m=+158.579708685 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8nrfn" (UID: "cf2cdf59-237b-432e-9e41-c37078755275") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:39 crc kubenswrapper[4731]: I1129 07:08:39.194906 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-c6kf9"] Nov 29 07:08:39 crc kubenswrapper[4731]: I1129 07:08:39.197429 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-2qd7z" event={"ID":"328a2fcf-7e85-49ad-849c-f32818b5cd87","Type":"ContainerStarted","Data":"8b1e8d57513968283a3b02bca5ee2f11678b946f1a462872bb4f9454255dec74"} Nov 29 07:08:39 crc kubenswrapper[4731]: I1129 07:08:39.234492 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p7p7h" event={"ID":"0123bcb6-853a-4329-bceb-87a77cd34b27","Type":"ContainerStarted","Data":"708f30de698b2b3b1e213ce340b55e163296d4635a97feb6089c1cc0c29af4a1"} Nov 29 07:08:39 crc kubenswrapper[4731]: I1129 07:08:39.264533 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gw6c8" event={"ID":"40778a89-0bd9-4b5d-a024-f2fec55bfa8f","Type":"ContainerStarted","Data":"93501ab01503f09fea2011a32d7ee2643143d265e0096e03b1174c4562b44e9b"} Nov 29 07:08:39 crc kubenswrapper[4731]: I1129 07:08:39.282530 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:08:39 crc kubenswrapper[4731]: E1129 07:08:39.283485 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:08:39.783463671 +0000 UTC m=+158.673824774 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:39 crc kubenswrapper[4731]: I1129 07:08:39.283925 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-l8mm6" event={"ID":"5da2fff0-6264-4369-9c21-d322fa65c6b0","Type":"ContainerStarted","Data":"4b31937e1a091b7f7b7b42f477874c4b9fb4e76008f0912068fc46e7c2541a1b"} Nov 29 07:08:39 crc kubenswrapper[4731]: I1129 07:08:39.311964 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bc855" event={"ID":"85473ad4-f055-4531-a19e-30697cd51568","Type":"ContainerStarted","Data":"0cb076cacd038c95e50099bc1db4f81532b13c5928843c7fb2919afeaafb6f47"} Nov 29 07:08:39 crc kubenswrapper[4731]: I1129 07:08:39.391673 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8nrfn\" (UID: \"cf2cdf59-237b-432e-9e41-c37078755275\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-8nrfn" Nov 29 07:08:39 crc kubenswrapper[4731]: E1129 07:08:39.394320 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:08:39.894292526 +0000 UTC m=+158.784653629 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8nrfn" (UID: "cf2cdf59-237b-432e-9e41-c37078755275") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:39 crc kubenswrapper[4731]: I1129 07:08:39.401386 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-sjt7j"] Nov 29 07:08:39 crc kubenswrapper[4731]: I1129 07:08:39.438422 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bfdjj" podStartSLOduration=129.438365354 podStartE2EDuration="2m9.438365354s" podCreationTimestamp="2025-11-29 07:06:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:08:39.437560152 +0000 UTC m=+158.327921255" watchObservedRunningTime="2025-11-29 07:08:39.438365354 +0000 UTC m=+158.328726457" Nov 29 07:08:39 crc kubenswrapper[4731]: I1129 07:08:39.493113 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:08:39 crc kubenswrapper[4731]: E1129 07:08:39.493305 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:08:39.993270888 +0000 UTC m=+158.883631991 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:39 crc kubenswrapper[4731]: I1129 07:08:39.493472 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8nrfn\" (UID: \"cf2cdf59-237b-432e-9e41-c37078755275\") " pod="openshift-image-registry/image-registry-697d97f7c8-8nrfn" Nov 29 07:08:39 crc kubenswrapper[4731]: E1129 07:08:39.495246 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:08:39.995232532 +0000 UTC m=+158.885593705 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8nrfn" (UID: "cf2cdf59-237b-432e-9e41-c37078755275") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:39 crc kubenswrapper[4731]: I1129 07:08:39.594272 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:08:39 crc kubenswrapper[4731]: E1129 07:08:39.595198 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:08:40.09518254 +0000 UTC m=+158.985543643 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:39 crc kubenswrapper[4731]: I1129 07:08:39.701620 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8nrfn\" (UID: \"cf2cdf59-237b-432e-9e41-c37078755275\") " pod="openshift-image-registry/image-registry-697d97f7c8-8nrfn" Nov 29 07:08:39 crc kubenswrapper[4731]: E1129 07:08:39.701950 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:08:40.201936415 +0000 UTC m=+159.092297518 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8nrfn" (UID: "cf2cdf59-237b-432e-9e41-c37078755275") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:39 crc kubenswrapper[4731]: I1129 07:08:39.725374 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qdch2"] Nov 29 07:08:39 crc kubenswrapper[4731]: I1129 07:08:39.789837 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kwjhb" podStartSLOduration=127.789811422 podStartE2EDuration="2m7.789811422s" podCreationTimestamp="2025-11-29 07:06:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:08:39.788554658 +0000 UTC m=+158.678915761" watchObservedRunningTime="2025-11-29 07:08:39.789811422 +0000 UTC m=+158.680172525" Nov 29 07:08:39 crc kubenswrapper[4731]: I1129 07:08:39.790583 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-htrhs" podStartSLOduration=129.790576383 podStartE2EDuration="2m9.790576383s" podCreationTimestamp="2025-11-29 07:06:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:08:39.757170078 +0000 UTC m=+158.647531191" watchObservedRunningTime="2025-11-29 07:08:39.790576383 +0000 UTC m=+158.680937486" Nov 29 07:08:39 crc kubenswrapper[4731]: I1129 07:08:39.803539 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:08:39 crc kubenswrapper[4731]: E1129 07:08:39.804262 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:08:40.304184966 +0000 UTC m=+159.194546089 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:39 crc kubenswrapper[4731]: I1129 07:08:39.888441 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-xstx4"] Nov 29 07:08:39 crc kubenswrapper[4731]: I1129 07:08:39.888792 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-m7s4c"] Nov 29 07:08:39 crc kubenswrapper[4731]: I1129 07:08:39.891976 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-qg27s"] Nov 29 07:08:39 crc kubenswrapper[4731]: I1129 07:08:39.892078 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-4scbk"] Nov 29 07:08:39 crc kubenswrapper[4731]: I1129 07:08:39.903606 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-console-operator/console-operator-58897d9998-hs7k4" Nov 29 07:08:39 crc kubenswrapper[4731]: I1129 07:08:39.905057 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8nrfn\" (UID: \"cf2cdf59-237b-432e-9e41-c37078755275\") " pod="openshift-image-registry/image-registry-697d97f7c8-8nrfn" Nov 29 07:08:39 crc kubenswrapper[4731]: E1129 07:08:39.905471 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:08:40.405457031 +0000 UTC m=+159.295818134 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8nrfn" (UID: "cf2cdf59-237b-432e-9e41-c37078755275") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:39 crc kubenswrapper[4731]: I1129 07:08:39.983255 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sxxn4"] Nov 29 07:08:39 crc kubenswrapper[4731]: I1129 07:08:39.992319 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-hs7k4" podStartSLOduration=128.99230118 podStartE2EDuration="2m8.99230118s" podCreationTimestamp="2025-11-29 07:06:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 
07:08:39.990217923 +0000 UTC m=+158.880579046" watchObservedRunningTime="2025-11-29 07:08:39.99230118 +0000 UTC m=+158.882662283" Nov 29 07:08:40 crc kubenswrapper[4731]: I1129 07:08:40.006722 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:08:40 crc kubenswrapper[4731]: E1129 07:08:40.006932 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:08:40.50690772 +0000 UTC m=+159.397268823 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:40 crc kubenswrapper[4731]: I1129 07:08:40.009083 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8nrfn\" (UID: \"cf2cdf59-237b-432e-9e41-c37078755275\") " pod="openshift-image-registry/image-registry-697d97f7c8-8nrfn" Nov 29 07:08:40 crc kubenswrapper[4731]: E1129 07:08:40.009415 4731 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:08:40.509407139 +0000 UTC m=+159.399768242 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8nrfn" (UID: "cf2cdf59-237b-432e-9e41-c37078755275") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:40 crc kubenswrapper[4731]: I1129 07:08:40.066075 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dcvjn" podStartSLOduration=129.066054361 podStartE2EDuration="2m9.066054361s" podCreationTimestamp="2025-11-29 07:06:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:08:40.029324874 +0000 UTC m=+158.919685977" watchObservedRunningTime="2025-11-29 07:08:40.066054361 +0000 UTC m=+158.956415464" Nov 29 07:08:40 crc kubenswrapper[4731]: I1129 07:08:40.111205 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:08:40 crc kubenswrapper[4731]: E1129 07:08:40.111603 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b 
nodeName:}" failed. No retries permitted until 2025-11-29 07:08:40.611523216 +0000 UTC m=+159.501884309 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:40 crc kubenswrapper[4731]: I1129 07:08:40.112370 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8nrfn\" (UID: \"cf2cdf59-237b-432e-9e41-c37078755275\") " pod="openshift-image-registry/image-registry-697d97f7c8-8nrfn" Nov 29 07:08:40 crc kubenswrapper[4731]: E1129 07:08:40.123055 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:08:40.623023401 +0000 UTC m=+159.513384494 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8nrfn" (UID: "cf2cdf59-237b-432e-9e41-c37078755275") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:40 crc kubenswrapper[4731]: I1129 07:08:40.216053 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:08:40 crc kubenswrapper[4731]: E1129 07:08:40.217245 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:08:40.717210322 +0000 UTC m=+159.607571435 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:40 crc kubenswrapper[4731]: I1129 07:08:40.489066 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8nrfn\" (UID: \"cf2cdf59-237b-432e-9e41-c37078755275\") " pod="openshift-image-registry/image-registry-697d97f7c8-8nrfn" Nov 29 07:08:40 crc kubenswrapper[4731]: E1129 07:08:40.489643 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:08:40.989623085 +0000 UTC m=+159.879984188 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8nrfn" (UID: "cf2cdf59-237b-432e-9e41-c37078755275") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:40 crc kubenswrapper[4731]: I1129 07:08:40.497799 4731 generic.go:334] "Generic (PLEG): container finished" podID="40778a89-0bd9-4b5d-a024-f2fec55bfa8f" containerID="9efd88487898ed4ca84684e26d053102462f132238bb2ae5ab8cdfd1fc126e33" exitCode=0 Nov 29 07:08:40 crc kubenswrapper[4731]: I1129 07:08:40.497898 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gw6c8" event={"ID":"40778a89-0bd9-4b5d-a024-f2fec55bfa8f","Type":"ContainerDied","Data":"9efd88487898ed4ca84684e26d053102462f132238bb2ae5ab8cdfd1fc126e33"} Nov 29 07:08:40 crc kubenswrapper[4731]: I1129 07:08:40.499290 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-c6kf9" event={"ID":"4059535c-148b-4694-8c6f-ee8aae8ddc18","Type":"ContainerStarted","Data":"beb090f280c03196fd6db23431bf1a56064ce347833b4e8ad1bffac7228e6e34"} Nov 29 07:08:40 crc kubenswrapper[4731]: I1129 07:08:40.500254 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sxxn4" event={"ID":"ca1d17af-b945-4ed5-8e57-e8145d3692b4","Type":"ContainerStarted","Data":"121522a8a4eaedc4a010a46746ef92d3ce4abc5d161bded77098c6cecd6c3735"} Nov 29 07:08:40 crc kubenswrapper[4731]: I1129 07:08:40.501728 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-xstx4" 
event={"ID":"ec651e57-2be1-4076-93f5-bcfa036b4624","Type":"ContainerStarted","Data":"356ce23c69600f8180998c68a1a9d6331c2641607168f7c4f14e1553f390c857"} Nov 29 07:08:40 crc kubenswrapper[4731]: I1129 07:08:40.503074 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-4scbk" event={"ID":"914f7ecc-b403-4f7e-9a14-3f56a5a256a9","Type":"ContainerStarted","Data":"e757111360666d4c2f70a18dd10910fa38622008b72dc13feacb763888a3f809"} Nov 29 07:08:40 crc kubenswrapper[4731]: I1129 07:08:40.512808 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-l8mm6" event={"ID":"5da2fff0-6264-4369-9c21-d322fa65c6b0","Type":"ContainerStarted","Data":"6ff0ec17b79b95bb1d2abdbce3414a989b409144de532ad9a840ab3179e5a05a"} Nov 29 07:08:40 crc kubenswrapper[4731]: I1129 07:08:40.514277 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-m7s4c" event={"ID":"1235c0c3-c8ad-4cff-80f0-56bfdfa8dabe","Type":"ContainerStarted","Data":"420ae01d19311ba8fa5f6ba68cb190ea306038a01ab8586b5129e9641b56d3e2"} Nov 29 07:08:40 crc kubenswrapper[4731]: I1129 07:08:40.515129 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-qg27s" event={"ID":"d639491c-0fbd-44a6-b273-37dcc1e5681d","Type":"ContainerStarted","Data":"7201e6afc963536bcfa89bae37c7d3b2c0f4c5fe77a042b6470f16bf62fc67d1"} Nov 29 07:08:40 crc kubenswrapper[4731]: I1129 07:08:40.516468 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-5tpdm" event={"ID":"68682703-479c-473f-8833-7210bc2597c1","Type":"ContainerStarted","Data":"e58fcfa2c56728bf1ba9eb3110f13f60b1cb4f83d4286cfcb4ed69b234b458c5"} Nov 29 07:08:40 crc kubenswrapper[4731]: I1129 07:08:40.517714 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-authentication-operator/authentication-operator-69f744f599-s72t6" event={"ID":"5f3c7091-33a8-4be0-bb55-63300514c205","Type":"ContainerStarted","Data":"8c1cbd2d8ae6b6014b7c6196e08e3977532388c655e94decb0bc33bda2e31df4"} Nov 29 07:08:40 crc kubenswrapper[4731]: I1129 07:08:40.523523 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-sjt7j" event={"ID":"144f3608-d338-4452-8bd9-a5fa47914090","Type":"ContainerStarted","Data":"6688160d53726a9d7957ff3ad5b6433fdcde38a7d6b6ec90f2b44a104995437f"} Nov 29 07:08:40 crc kubenswrapper[4731]: I1129 07:08:40.590917 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:08:40 crc kubenswrapper[4731]: E1129 07:08:40.591136 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:08:41.091108006 +0000 UTC m=+159.981469109 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:40 crc kubenswrapper[4731]: I1129 07:08:40.591275 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8nrfn\" (UID: \"cf2cdf59-237b-432e-9e41-c37078755275\") " pod="openshift-image-registry/image-registry-697d97f7c8-8nrfn" Nov 29 07:08:40 crc kubenswrapper[4731]: E1129 07:08:40.591747 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:08:41.091720082 +0000 UTC m=+159.982081185 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8nrfn" (UID: "cf2cdf59-237b-432e-9e41-c37078755275") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:40 crc kubenswrapper[4731]: I1129 07:08:40.696998 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:08:40 crc kubenswrapper[4731]: E1129 07:08:40.697199 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:08:41.197156441 +0000 UTC m=+160.087517544 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:40 crc kubenswrapper[4731]: I1129 07:08:40.697708 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8nrfn\" (UID: \"cf2cdf59-237b-432e-9e41-c37078755275\") " pod="openshift-image-registry/image-registry-697d97f7c8-8nrfn" Nov 29 07:08:40 crc kubenswrapper[4731]: E1129 07:08:40.699466 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:08:41.198183149 +0000 UTC m=+160.088544252 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8nrfn" (UID: "cf2cdf59-237b-432e-9e41-c37078755275") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:40 crc kubenswrapper[4731]: I1129 07:08:40.799455 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:08:40 crc kubenswrapper[4731]: E1129 07:08:40.799859 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:08:41.299834374 +0000 UTC m=+160.190195477 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:40 crc kubenswrapper[4731]: I1129 07:08:40.840000 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-l8mm6" podStartSLOduration=129.838909215 podStartE2EDuration="2m9.838909215s" podCreationTimestamp="2025-11-29 07:06:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:08:40.837076434 +0000 UTC m=+159.727437537" watchObservedRunningTime="2025-11-29 07:08:40.838909215 +0000 UTC m=+159.729270318" Nov 29 07:08:40 crc kubenswrapper[4731]: I1129 07:08:40.884116 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p7p7h" podStartSLOduration=128.884089352 podStartE2EDuration="2m8.884089352s" podCreationTimestamp="2025-11-29 07:06:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:08:40.8748812 +0000 UTC m=+159.765242303" watchObservedRunningTime="2025-11-29 07:08:40.884089352 +0000 UTC m=+159.774450455" Nov 29 07:08:40 crc kubenswrapper[4731]: I1129 07:08:40.901717 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8nrfn\" (UID: 
\"cf2cdf59-237b-432e-9e41-c37078755275\") " pod="openshift-image-registry/image-registry-697d97f7c8-8nrfn" Nov 29 07:08:40 crc kubenswrapper[4731]: E1129 07:08:40.902126 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:08:41.402113156 +0000 UTC m=+160.292474259 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8nrfn" (UID: "cf2cdf59-237b-432e-9e41-c37078755275") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:40 crc kubenswrapper[4731]: I1129 07:08:40.991439 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bc855" podStartSLOduration=130.991413293 podStartE2EDuration="2m10.991413293s" podCreationTimestamp="2025-11-29 07:06:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:08:40.989004647 +0000 UTC m=+159.879365750" watchObservedRunningTime="2025-11-29 07:08:40.991413293 +0000 UTC m=+159.881774396" Nov 29 07:08:41 crc kubenswrapper[4731]: I1129 07:08:41.009761 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:08:41 crc kubenswrapper[4731]: E1129 07:08:41.010287 4731 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:08:41.510266789 +0000 UTC m=+160.400627892 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:41 crc kubenswrapper[4731]: I1129 07:08:41.112099 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8nrfn\" (UID: \"cf2cdf59-237b-432e-9e41-c37078755275\") " pod="openshift-image-registry/image-registry-697d97f7c8-8nrfn" Nov 29 07:08:41 crc kubenswrapper[4731]: E1129 07:08:41.113475 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:08:41.613460517 +0000 UTC m=+160.503821610 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8nrfn" (UID: "cf2cdf59-237b-432e-9e41-c37078755275") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:41 crc kubenswrapper[4731]: I1129 07:08:41.129148 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-g9mjr"] Nov 29 07:08:41 crc kubenswrapper[4731]: I1129 07:08:41.145050 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k9k58"] Nov 29 07:08:41 crc kubenswrapper[4731]: I1129 07:08:41.162094 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-4zskc"] Nov 29 07:08:41 crc kubenswrapper[4731]: I1129 07:08:41.173460 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-n5tn2"] Nov 29 07:08:41 crc kubenswrapper[4731]: I1129 07:08:41.214467 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:08:41 crc kubenswrapper[4731]: E1129 07:08:41.215116 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-29 07:08:41.715074681 +0000 UTC m=+160.605435784 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:41 crc kubenswrapper[4731]: I1129 07:08:41.222132 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8nrfn\" (UID: \"cf2cdf59-237b-432e-9e41-c37078755275\") " pod="openshift-image-registry/image-registry-697d97f7c8-8nrfn" Nov 29 07:08:41 crc kubenswrapper[4731]: E1129 07:08:41.222613 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:08:41.722593946 +0000 UTC m=+160.612955049 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8nrfn" (UID: "cf2cdf59-237b-432e-9e41-c37078755275") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:41 crc kubenswrapper[4731]: W1129 07:08:41.271881 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2694d49e_eb78_4db3_b047_2854125b8b26.slice/crio-58dc007314b7052be5f88ebb3799938e4ae8057f30f57b256f0548b8e0c3d99a WatchSource:0}: Error finding container 58dc007314b7052be5f88ebb3799938e4ae8057f30f57b256f0548b8e0c3d99a: Status 404 returned error can't find the container with id 58dc007314b7052be5f88ebb3799938e4ae8057f30f57b256f0548b8e0c3d99a Nov 29 07:08:41 crc kubenswrapper[4731]: I1129 07:08:41.326678 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:08:41 crc kubenswrapper[4731]: E1129 07:08:41.327170 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:08:41.827142951 +0000 UTC m=+160.717504054 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:41 crc kubenswrapper[4731]: I1129 07:08:41.428852 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8nrfn\" (UID: \"cf2cdf59-237b-432e-9e41-c37078755275\") " pod="openshift-image-registry/image-registry-697d97f7c8-8nrfn" Nov 29 07:08:41 crc kubenswrapper[4731]: E1129 07:08:41.429714 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:08:41.929696801 +0000 UTC m=+160.820057904 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8nrfn" (UID: "cf2cdf59-237b-432e-9e41-c37078755275") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:41 crc kubenswrapper[4731]: I1129 07:08:41.466859 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-d88tr"] Nov 29 07:08:41 crc kubenswrapper[4731]: I1129 07:08:41.467049 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4t7gr"] Nov 29 07:08:41 crc kubenswrapper[4731]: I1129 07:08:41.472164 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-wrp7q"] Nov 29 07:08:41 crc kubenswrapper[4731]: I1129 07:08:41.490691 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8w2nm"] Nov 29 07:08:41 crc kubenswrapper[4731]: I1129 07:08:41.507759 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4j6z7"] Nov 29 07:08:41 crc kubenswrapper[4731]: I1129 07:08:41.512999 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-t7s8k"] Nov 29 07:08:41 crc kubenswrapper[4731]: I1129 07:08:41.513882 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-ph5pw"] Nov 29 07:08:41 crc kubenswrapper[4731]: I1129 07:08:41.516812 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406660-2pc6s"] Nov 
29 07:08:41 crc kubenswrapper[4731]: I1129 07:08:41.538229 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:08:41 crc kubenswrapper[4731]: E1129 07:08:41.538415 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:08:42.038383078 +0000 UTC m=+160.928744181 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:41 crc kubenswrapper[4731]: I1129 07:08:41.538699 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8nrfn\" (UID: \"cf2cdf59-237b-432e-9e41-c37078755275\") " pod="openshift-image-registry/image-registry-697d97f7c8-8nrfn" Nov 29 07:08:41 crc kubenswrapper[4731]: E1129 07:08:41.539105 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-11-29 07:08:42.039088928 +0000 UTC m=+160.929450031 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8nrfn" (UID: "cf2cdf59-237b-432e-9e41-c37078755275") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:41 crc kubenswrapper[4731]: I1129 07:08:41.557245 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-sjt7j" event={"ID":"144f3608-d338-4452-8bd9-a5fa47914090","Type":"ContainerStarted","Data":"50642a78536a7bc32c37438377391377c900c16ca7f1c5466de4ba90a75c88de"} Nov 29 07:08:41 crc kubenswrapper[4731]: I1129 07:08:41.571069 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2qgzh"] Nov 29 07:08:41 crc kubenswrapper[4731]: I1129 07:08:41.576272 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-wbfmm"] Nov 29 07:08:41 crc kubenswrapper[4731]: I1129 07:08:41.582709 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wcj4m"] Nov 29 07:08:41 crc kubenswrapper[4731]: I1129 07:08:41.587782 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-4dbfk"] Nov 29 07:08:41 crc kubenswrapper[4731]: I1129 07:08:41.599641 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-s2twr"] Nov 29 07:08:41 crc kubenswrapper[4731]: I1129 07:08:41.603204 4731 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gw6c8" event={"ID":"40778a89-0bd9-4b5d-a024-f2fec55bfa8f","Type":"ContainerStarted","Data":"7fb88a90d3d956b29682df5adb9cc02da77f032059b7a008ea65a66255d7534a"} Nov 29 07:08:41 crc kubenswrapper[4731]: I1129 07:08:41.603257 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gw6c8" Nov 29 07:08:41 crc kubenswrapper[4731]: I1129 07:08:41.604213 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-sjt7j" podStartSLOduration=130.604182091 podStartE2EDuration="2m10.604182091s" podCreationTimestamp="2025-11-29 07:06:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:08:41.591895064 +0000 UTC m=+160.482256167" watchObservedRunningTime="2025-11-29 07:08:41.604182091 +0000 UTC m=+160.494543204" Nov 29 07:08:41 crc kubenswrapper[4731]: I1129 07:08:41.612915 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-nn8lp"] Nov 29 07:08:41 crc kubenswrapper[4731]: I1129 07:08:41.622448 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sxxn4" event={"ID":"ca1d17af-b945-4ed5-8e57-e8145d3692b4","Type":"ContainerStarted","Data":"ab35f463cb00ddbd3501cbd7de7c75ec708bc0c6cfaa8883bc705fa593f37893"} Nov 29 07:08:41 crc kubenswrapper[4731]: I1129 07:08:41.622682 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sxxn4" Nov 29 07:08:41 crc kubenswrapper[4731]: I1129 07:08:41.633736 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dzwrr"] Nov 29 07:08:41 crc 
kubenswrapper[4731]: I1129 07:08:41.637237 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gw6c8" podStartSLOduration=131.637216946 podStartE2EDuration="2m11.637216946s" podCreationTimestamp="2025-11-29 07:06:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:08:41.631438038 +0000 UTC m=+160.521799141" watchObservedRunningTime="2025-11-29 07:08:41.637216946 +0000 UTC m=+160.527578049" Nov 29 07:08:41 crc kubenswrapper[4731]: I1129 07:08:41.638477 4731 generic.go:334] "Generic (PLEG): container finished" podID="1235c0c3-c8ad-4cff-80f0-56bfdfa8dabe" containerID="d65e3466b794cb09d853a3e0f577d2df1c5d7208fd29faadeb46e0254cbc4b90" exitCode=0 Nov 29 07:08:41 crc kubenswrapper[4731]: I1129 07:08:41.639200 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-m7s4c" event={"ID":"1235c0c3-c8ad-4cff-80f0-56bfdfa8dabe","Type":"ContainerDied","Data":"d65e3466b794cb09d853a3e0f577d2df1c5d7208fd29faadeb46e0254cbc4b90"} Nov 29 07:08:41 crc kubenswrapper[4731]: I1129 07:08:41.639593 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:08:41 crc kubenswrapper[4731]: E1129 07:08:41.640679 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:08:42.14066336 +0000 UTC m=+161.031024463 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:41 crc kubenswrapper[4731]: I1129 07:08:41.644333 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k9k58" event={"ID":"c3630baf-f7fa-49f0-ae2c-63c28c98c2a8","Type":"ContainerStarted","Data":"efeddffa15f19c350a0c790fecd518ff212c25d3b84aafa8ab0c69a0c56d4c07"} Nov 29 07:08:41 crc kubenswrapper[4731]: I1129 07:08:41.648263 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-qg27s" event={"ID":"d639491c-0fbd-44a6-b273-37dcc1e5681d","Type":"ContainerStarted","Data":"0ca7fae142a04892114f7bdf9ffb8a35c1f6f8ccb4f5a3fc3a570fa28b2b25c6"} Nov 29 07:08:41 crc kubenswrapper[4731]: I1129 07:08:41.649429 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-qg27s" Nov 29 07:08:41 crc kubenswrapper[4731]: I1129 07:08:41.652605 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dm9bz"] Nov 29 07:08:41 crc kubenswrapper[4731]: I1129 07:08:41.654291 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sxxn4" podStartSLOduration=129.654274023 podStartE2EDuration="2m9.654274023s" podCreationTimestamp="2025-11-29 07:06:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 
07:08:41.651869147 +0000 UTC m=+160.542230260" watchObservedRunningTime="2025-11-29 07:08:41.654274023 +0000 UTC m=+160.544635116" Nov 29 07:08:41 crc kubenswrapper[4731]: I1129 07:08:41.668794 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qdch2" event={"ID":"5e911d5c-fa21-47e2-9ab8-12f919978585","Type":"ContainerStarted","Data":"f2055d90b014bcbd6d368663ccc11b361a19a46f77f77eed16fb62ba926401ca"} Nov 29 07:08:41 crc kubenswrapper[4731]: I1129 07:08:41.668857 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qdch2" event={"ID":"5e911d5c-fa21-47e2-9ab8-12f919978585","Type":"ContainerStarted","Data":"bd4955963c82bb9e5cf8ab7c01bd90157c95816438517e037100ca8c6a9535bc"} Nov 29 07:08:41 crc kubenswrapper[4731]: W1129 07:08:41.669704 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod443884e3_cad9_4f39_944c_af34d6485520.slice/crio-b315f46dd11cb721898d5a2e2abf3736df774ee11b4014c51870b608e4755ac1 WatchSource:0}: Error finding container b315f46dd11cb721898d5a2e2abf3736df774ee11b4014c51870b608e4755ac1: Status 404 returned error can't find the container with id b315f46dd11cb721898d5a2e2abf3736df774ee11b4014c51870b608e4755ac1 Nov 29 07:08:41 crc kubenswrapper[4731]: I1129 07:08:41.671295 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-n5tn2" event={"ID":"62d5acae-8dde-41c0-bbf4-66d294b8b64b","Type":"ContainerStarted","Data":"92d343c280fe8b9e7db0fc5b2319fe8ba31ea38f234d72b2e8fa1691dfaee2e9"} Nov 29 07:08:41 crc kubenswrapper[4731]: I1129 07:08:41.672812 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4zskc" 
event={"ID":"2694d49e-eb78-4db3-b047-2854125b8b26","Type":"ContainerStarted","Data":"58dc007314b7052be5f88ebb3799938e4ae8057f30f57b256f0548b8e0c3d99a"} Nov 29 07:08:41 crc kubenswrapper[4731]: I1129 07:08:41.676901 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-xstx4" event={"ID":"ec651e57-2be1-4076-93f5-bcfa036b4624","Type":"ContainerStarted","Data":"cea86bbd7f9beb8a712f512eb523917990afc7375a0e357b4ab5430af263a16d"} Nov 29 07:08:41 crc kubenswrapper[4731]: I1129 07:08:41.679211 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-qg27s" podStartSLOduration=131.679187816 podStartE2EDuration="2m11.679187816s" podCreationTimestamp="2025-11-29 07:06:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:08:41.676228475 +0000 UTC m=+160.566589578" watchObservedRunningTime="2025-11-29 07:08:41.679187816 +0000 UTC m=+160.569548919" Nov 29 07:08:41 crc kubenswrapper[4731]: I1129 07:08:41.690843 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-s72t6" event={"ID":"5f3c7091-33a8-4be0-bb55-63300514c205","Type":"ContainerStarted","Data":"f6a3ca2cf7f79c9aec94cc4ea6806a1da5d17e924b0546c266826e1e25cc2ae8"} Nov 29 07:08:41 crc kubenswrapper[4731]: I1129 07:08:41.693113 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-cn2xp" event={"ID":"0ca53b84-140e-4fbf-b822-03a1c73d04aa","Type":"ContainerStarted","Data":"a4e159c187787654cc35cbd3990b81af78952a072fc2cc3f6b9bd9b786a1861c"} Nov 29 07:08:41 crc kubenswrapper[4731]: I1129 07:08:41.695653 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-5tpdm" 
event={"ID":"68682703-479c-473f-8833-7210bc2597c1","Type":"ContainerStarted","Data":"f2b991e310a785d886e3264835f9d6931bb268a5f1ff30a6d1e7ebae8f160ccd"} Nov 29 07:08:41 crc kubenswrapper[4731]: I1129 07:08:41.718747 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-2qd7z" event={"ID":"328a2fcf-7e85-49ad-849c-f32818b5cd87","Type":"ContainerStarted","Data":"704d1779db7f602adee22ec18fcbcb3e3b8e7d18bb4d63da5352b831aa3c70b1"} Nov 29 07:08:41 crc kubenswrapper[4731]: I1129 07:08:41.720944 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-5tpdm" podStartSLOduration=6.720922459 podStartE2EDuration="6.720922459s" podCreationTimestamp="2025-11-29 07:08:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:08:41.720249541 +0000 UTC m=+160.610610644" watchObservedRunningTime="2025-11-29 07:08:41.720922459 +0000 UTC m=+160.611283562" Nov 29 07:08:41 crc kubenswrapper[4731]: W1129 07:08:41.743004 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc47b7935_c3e7_4f98_b361_87ee3b481c3d.slice/crio-20d0d52e77404128409c3d217c1a6f884068e85d83ed36a887e39de2dba5188e WatchSource:0}: Error finding container 20d0d52e77404128409c3d217c1a6f884068e85d83ed36a887e39de2dba5188e: Status 404 returned error can't find the container with id 20d0d52e77404128409c3d217c1a6f884068e85d83ed36a887e39de2dba5188e Nov 29 07:08:41 crc kubenswrapper[4731]: I1129 07:08:41.743300 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-g9mjr" event={"ID":"dd4d1f56-2467-4d46-80f3-23dd16cd6707","Type":"ContainerStarted","Data":"456495f711b374c97a6812877f06c8999103f0271cb69f2bf40fa7b97d2ad9e9"} Nov 29 
07:08:41 crc kubenswrapper[4731]: I1129 07:08:41.744516 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8nrfn\" (UID: \"cf2cdf59-237b-432e-9e41-c37078755275\") " pod="openshift-image-registry/image-registry-697d97f7c8-8nrfn" Nov 29 07:08:41 crc kubenswrapper[4731]: E1129 07:08:41.749639 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:08:42.249617376 +0000 UTC m=+161.139978479 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8nrfn" (UID: "cf2cdf59-237b-432e-9e41-c37078755275") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:41 crc kubenswrapper[4731]: I1129 07:08:41.758107 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-c6kf9" event={"ID":"4059535c-148b-4694-8c6f-ee8aae8ddc18","Type":"ContainerStarted","Data":"b9123fd63598ffd73ae289b14f46dd6c2fc8e9a2a7292ae029bd157c5e120d95"} Nov 29 07:08:41 crc kubenswrapper[4731]: I1129 07:08:41.758291 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-c6kf9" Nov 29 07:08:41 crc kubenswrapper[4731]: I1129 07:08:41.846390 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-s72t6" 
podStartSLOduration=131.846367956 podStartE2EDuration="2m11.846367956s" podCreationTimestamp="2025-11-29 07:06:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:08:41.755508147 +0000 UTC m=+160.645869250" watchObservedRunningTime="2025-11-29 07:08:41.846367956 +0000 UTC m=+160.736729059" Nov 29 07:08:41 crc kubenswrapper[4731]: I1129 07:08:41.850319 4731 patch_prober.go:28] interesting pod/downloads-7954f5f757-c6kf9 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 29 07:08:41 crc kubenswrapper[4731]: I1129 07:08:41.850372 4731 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-c6kf9" podUID="4059535c-148b-4694-8c6f-ee8aae8ddc18" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 29 07:08:41 crc kubenswrapper[4731]: I1129 07:08:41.851290 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:08:41 crc kubenswrapper[4731]: E1129 07:08:41.852100 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:08:42.352076523 +0000 UTC m=+161.242437636 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:41 crc kubenswrapper[4731]: I1129 07:08:41.854708 4731 patch_prober.go:28] interesting pod/router-default-5444994796-2qd7z container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 29 07:08:41 crc kubenswrapper[4731]: [-]has-synced failed: reason withheld Nov 29 07:08:41 crc kubenswrapper[4731]: [+]process-running ok Nov 29 07:08:41 crc kubenswrapper[4731]: healthz check failed Nov 29 07:08:41 crc kubenswrapper[4731]: I1129 07:08:41.854742 4731 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2qd7z" podUID="328a2fcf-7e85-49ad-849c-f32818b5cd87" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 29 07:08:41 crc kubenswrapper[4731]: I1129 07:08:41.872086 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-2qd7z" podStartSLOduration=130.87207245 podStartE2EDuration="2m10.87207245s" podCreationTimestamp="2025-11-29 07:06:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:08:41.868828692 +0000 UTC m=+160.759189795" watchObservedRunningTime="2025-11-29 07:08:41.87207245 +0000 UTC m=+160.762433553" Nov 29 07:08:41 crc kubenswrapper[4731]: I1129 07:08:41.873559 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-ingress/router-default-5444994796-2qd7z" Nov 29 07:08:41 crc kubenswrapper[4731]: I1129 07:08:41.873603 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-4scbk" event={"ID":"914f7ecc-b403-4f7e-9a14-3f56a5a256a9","Type":"ContainerStarted","Data":"50b5f117a4792262b555f6a404bea2ac5bf8be1c611bd576da57941d9f65ddc2"} Nov 29 07:08:41 crc kubenswrapper[4731]: I1129 07:08:41.873635 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-4scbk" Nov 29 07:08:41 crc kubenswrapper[4731]: I1129 07:08:41.879192 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-4scbk" Nov 29 07:08:41 crc kubenswrapper[4731]: I1129 07:08:41.888923 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-c6kf9" podStartSLOduration=130.888903032 podStartE2EDuration="2m10.888903032s" podCreationTimestamp="2025-11-29 07:06:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:08:41.886198377 +0000 UTC m=+160.776559470" watchObservedRunningTime="2025-11-29 07:08:41.888903032 +0000 UTC m=+160.779264135" Nov 29 07:08:41 crc kubenswrapper[4731]: I1129 07:08:41.910049 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-4scbk" podStartSLOduration=130.91001853 podStartE2EDuration="2m10.91001853s" podCreationTimestamp="2025-11-29 07:06:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:08:41.904063407 +0000 UTC m=+160.794424510" watchObservedRunningTime="2025-11-29 07:08:41.91001853 +0000 UTC m=+160.800379633" Nov 29 07:08:41 crc 
kubenswrapper[4731]: I1129 07:08:41.955911 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8nrfn\" (UID: \"cf2cdf59-237b-432e-9e41-c37078755275\") " pod="openshift-image-registry/image-registry-697d97f7c8-8nrfn" Nov 29 07:08:41 crc kubenswrapper[4731]: E1129 07:08:41.956359 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:08:42.456340359 +0000 UTC m=+161.346701462 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8nrfn" (UID: "cf2cdf59-237b-432e-9e41-c37078755275") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:42 crc kubenswrapper[4731]: I1129 07:08:42.064640 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:08:42 crc kubenswrapper[4731]: E1129 07:08:42.065224 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-29 07:08:42.565202902 +0000 UTC m=+161.455564005 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:42 crc kubenswrapper[4731]: I1129 07:08:42.166473 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8nrfn\" (UID: \"cf2cdf59-237b-432e-9e41-c37078755275\") " pod="openshift-image-registry/image-registry-697d97f7c8-8nrfn" Nov 29 07:08:42 crc kubenswrapper[4731]: E1129 07:08:42.166989 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:08:42.66697173 +0000 UTC m=+161.557332833 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8nrfn" (UID: "cf2cdf59-237b-432e-9e41-c37078755275") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:42 crc kubenswrapper[4731]: I1129 07:08:42.273801 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:08:42 crc kubenswrapper[4731]: E1129 07:08:42.274371 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:08:42.774352892 +0000 UTC m=+161.664713995 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:42 crc kubenswrapper[4731]: I1129 07:08:42.289712 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p7p7h" Nov 29 07:08:42 crc kubenswrapper[4731]: I1129 07:08:42.290507 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p7p7h" Nov 29 07:08:42 crc kubenswrapper[4731]: I1129 07:08:42.306984 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p7p7h" Nov 29 07:08:42 crc kubenswrapper[4731]: I1129 07:08:42.376086 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8nrfn\" (UID: \"cf2cdf59-237b-432e-9e41-c37078755275\") " pod="openshift-image-registry/image-registry-697d97f7c8-8nrfn" Nov 29 07:08:42 crc kubenswrapper[4731]: E1129 07:08:42.379033 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:08:42.879010769 +0000 UTC m=+161.769371872 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8nrfn" (UID: "cf2cdf59-237b-432e-9e41-c37078755275") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:42 crc kubenswrapper[4731]: I1129 07:08:42.443798 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-qg27s" Nov 29 07:08:42 crc kubenswrapper[4731]: I1129 07:08:42.480599 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:08:42 crc kubenswrapper[4731]: E1129 07:08:42.481111 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:08:42.981089076 +0000 UTC m=+161.871450179 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:42 crc kubenswrapper[4731]: I1129 07:08:42.582284 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8nrfn\" (UID: \"cf2cdf59-237b-432e-9e41-c37078755275\") " pod="openshift-image-registry/image-registry-697d97f7c8-8nrfn" Nov 29 07:08:42 crc kubenswrapper[4731]: E1129 07:08:42.582849 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:08:43.082829563 +0000 UTC m=+161.973190666 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8nrfn" (UID: "cf2cdf59-237b-432e-9e41-c37078755275") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:42 crc kubenswrapper[4731]: I1129 07:08:42.626626 4731 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-sxxn4 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.29:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 29 07:08:42 crc kubenswrapper[4731]: I1129 07:08:42.626788 4731 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sxxn4" podUID="ca1d17af-b945-4ed5-8e57-e8145d3692b4" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.29:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 29 07:08:42 crc kubenswrapper[4731]: I1129 07:08:42.691223 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:08:42 crc kubenswrapper[4731]: E1129 07:08:42.691845 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-29 07:08:43.191824119 +0000 UTC m=+162.082185232 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:42 crc kubenswrapper[4731]: I1129 07:08:42.798471 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8nrfn\" (UID: \"cf2cdf59-237b-432e-9e41-c37078755275\") " pod="openshift-image-registry/image-registry-697d97f7c8-8nrfn" Nov 29 07:08:42 crc kubenswrapper[4731]: E1129 07:08:42.799202 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:08:43.299186921 +0000 UTC m=+162.189548024 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8nrfn" (UID: "cf2cdf59-237b-432e-9e41-c37078755275") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:42 crc kubenswrapper[4731]: I1129 07:08:42.849269 4731 patch_prober.go:28] interesting pod/router-default-5444994796-2qd7z container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 29 07:08:42 crc kubenswrapper[4731]: [-]has-synced failed: reason withheld Nov 29 07:08:42 crc kubenswrapper[4731]: [+]process-running ok Nov 29 07:08:42 crc kubenswrapper[4731]: healthz check failed Nov 29 07:08:42 crc kubenswrapper[4731]: I1129 07:08:42.849369 4731 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2qd7z" podUID="328a2fcf-7e85-49ad-849c-f32818b5cd87" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 29 07:08:42 crc kubenswrapper[4731]: I1129 07:08:42.913155 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:08:42 crc kubenswrapper[4731]: E1129 07:08:42.913633 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-29 07:08:43.413611266 +0000 UTC m=+162.303972369 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:42 crc kubenswrapper[4731]: I1129 07:08:42.924718 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4zskc" event={"ID":"2694d49e-eb78-4db3-b047-2854125b8b26","Type":"ContainerStarted","Data":"6d6c1e36d970c6999c021336427ec4533cf22a1629a82229b69337ade9c13e14"} Nov 29 07:08:42 crc kubenswrapper[4731]: I1129 07:08:42.942496 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8w2nm" event={"ID":"443884e3-cad9-4f39-944c-af34d6485520","Type":"ContainerStarted","Data":"b315f46dd11cb721898d5a2e2abf3736df774ee11b4014c51870b608e4755ac1"} Nov 29 07:08:42 crc kubenswrapper[4731]: I1129 07:08:42.989339 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8w2nm" podStartSLOduration=131.989319349 podStartE2EDuration="2m11.989319349s" podCreationTimestamp="2025-11-29 07:06:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:08:42.988793034 +0000 UTC m=+161.879154137" watchObservedRunningTime="2025-11-29 07:08:42.989319349 +0000 UTC m=+161.879680452" Nov 29 07:08:43 crc kubenswrapper[4731]: I1129 07:08:43.015948 4731 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8nrfn\" (UID: \"cf2cdf59-237b-432e-9e41-c37078755275\") " pod="openshift-image-registry/image-registry-697d97f7c8-8nrfn" Nov 29 07:08:43 crc kubenswrapper[4731]: E1129 07:08:43.016345 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:08:43.516327169 +0000 UTC m=+162.406688272 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8nrfn" (UID: "cf2cdf59-237b-432e-9e41-c37078755275") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:43 crc kubenswrapper[4731]: I1129 07:08:43.022363 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dm9bz" event={"ID":"8d48db02-9081-4e36-a6db-caa659b1eeb9","Type":"ContainerStarted","Data":"623e211e3fb8b564ebfba438d2bd8eddc82f59b521afdc0d892183edfcfdf907"} Nov 29 07:08:43 crc kubenswrapper[4731]: I1129 07:08:43.081236 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4dbfk" event={"ID":"3e3b7a64-8ec0-4b5e-86da-0c9d6d0428f3","Type":"ContainerStarted","Data":"2b37223267ebb40961ca69756a17cb48302d11a2d7bf87be79b26255af29ea70"} Nov 29 07:08:43 crc kubenswrapper[4731]: I1129 07:08:43.081631 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4dbfk" event={"ID":"3e3b7a64-8ec0-4b5e-86da-0c9d6d0428f3","Type":"ContainerStarted","Data":"03d3b49c3a6fd29100e1bbb9c5382fbd0cce1cd916c198505e1a5129511edabe"} Nov 29 07:08:43 crc kubenswrapper[4731]: I1129 07:08:43.096644 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dzwrr" event={"ID":"d7a41747-97b6-4431-ab85-a990220f34e7","Type":"ContainerStarted","Data":"8bf5203d46d2e2be45a12e8fabb23d9fa27020a85533b076c9a0ab94700bf0f7"} Nov 29 07:08:43 crc kubenswrapper[4731]: I1129 07:08:43.110047 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4t7gr" event={"ID":"7efbdd7b-0ed3-493a-ad73-530648c5ce6e","Type":"ContainerStarted","Data":"22f32194aa5c85e14a32d12e89555a54d45266c4b05ec3e3c49571dda1276d00"} Nov 29 07:08:43 crc kubenswrapper[4731]: I1129 07:08:43.163555 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:08:43 crc kubenswrapper[4731]: E1129 07:08:43.163984 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:08:43.663965464 +0000 UTC m=+162.554326567 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:43 crc kubenswrapper[4731]: I1129 07:08:43.172735 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-cn2xp" event={"ID":"0ca53b84-140e-4fbf-b822-03a1c73d04aa","Type":"ContainerStarted","Data":"c9d7982be4ec9802bcddff4e0a07e6c2899d1d5a3ff98e585198b121df56c060"} Nov 29 07:08:43 crc kubenswrapper[4731]: I1129 07:08:43.173249 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-cn2xp" Nov 29 07:08:43 crc kubenswrapper[4731]: I1129 07:08:43.198949 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-d88tr" event={"ID":"d4b50f02-42a0-4d00-a8f1-e37bc8af1ef4","Type":"ContainerStarted","Data":"bd5b71f7aa08909a3fdbd969a5aad5bf9446a73825559d6f27891017cf74c81f"} Nov 29 07:08:43 crc kubenswrapper[4731]: I1129 07:08:43.199015 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-d88tr" event={"ID":"d4b50f02-42a0-4d00-a8f1-e37bc8af1ef4","Type":"ContainerStarted","Data":"cd5bf20d63be9e58fdc5e65c4d8f8c845f157a947ee036c31ca5c4adf5b5aec8"} Nov 29 07:08:43 crc kubenswrapper[4731]: I1129 07:08:43.221435 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-t7s8k" event={"ID":"adc9b8a0-7f08-4fbb-ab52-aea81a845c05","Type":"ContainerStarted","Data":"f74a717204ca5a693ea434c2c3b49c7495d8e02021d0499f5efa015d68735ccd"} Nov 29 07:08:43 crc kubenswrapper[4731]: I1129 
07:08:43.229080 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-cn2xp" podStartSLOduration=8.229049687 podStartE2EDuration="8.229049687s" podCreationTimestamp="2025-11-29 07:08:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:08:43.226186438 +0000 UTC m=+162.116547561" watchObservedRunningTime="2025-11-29 07:08:43.229049687 +0000 UTC m=+162.119410790" Nov 29 07:08:43 crc kubenswrapper[4731]: I1129 07:08:43.269388 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-xstx4" event={"ID":"ec651e57-2be1-4076-93f5-bcfa036b4624","Type":"ContainerStarted","Data":"8d609293f9dc57425b7821889c0dafd1afdc8961da62d9ce13289a0badc4c3e1"} Nov 29 07:08:43 crc kubenswrapper[4731]: I1129 07:08:43.270677 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8nrfn\" (UID: \"cf2cdf59-237b-432e-9e41-c37078755275\") " pod="openshift-image-registry/image-registry-697d97f7c8-8nrfn" Nov 29 07:08:43 crc kubenswrapper[4731]: E1129 07:08:43.272347 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:08:43.772326003 +0000 UTC m=+162.662687106 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8nrfn" (UID: "cf2cdf59-237b-432e-9e41-c37078755275") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:43 crc kubenswrapper[4731]: I1129 07:08:43.290215 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4j6z7" event={"ID":"f9adc895-cdb2-4bd5-87ff-aba173a1e6da","Type":"ContainerStarted","Data":"e96a37640effd81313fff37326aecb0b807bb12df90507e79c31ab023c0c57f9"} Nov 29 07:08:43 crc kubenswrapper[4731]: I1129 07:08:43.359000 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-wbfmm" event={"ID":"e6fc2573-9480-47d2-89b0-36b4501ef6e7","Type":"ContainerStarted","Data":"4ff34a92fbab10d56cda17f1b2bd528fe23a37f62aaa1aae190134ffddc02754"} Nov 29 07:08:43 crc kubenswrapper[4731]: I1129 07:08:43.372894 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:08:43 crc kubenswrapper[4731]: E1129 07:08:43.376088 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:08:43.876049294 +0000 UTC m=+162.766410397 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:43 crc kubenswrapper[4731]: I1129 07:08:43.376184 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-xstx4" podStartSLOduration=131.376160477 podStartE2EDuration="2m11.376160477s" podCreationTimestamp="2025-11-29 07:06:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:08:43.319946737 +0000 UTC m=+162.210307840" watchObservedRunningTime="2025-11-29 07:08:43.376160477 +0000 UTC m=+162.266521590" Nov 29 07:08:43 crc kubenswrapper[4731]: I1129 07:08:43.383526 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-g9mjr" event={"ID":"dd4d1f56-2467-4d46-80f3-23dd16cd6707","Type":"ContainerStarted","Data":"d4df603e9f1b9b0432c6231f7378866798bba827260b353becead5e14c65da7d"} Nov 29 07:08:43 crc kubenswrapper[4731]: I1129 07:08:43.431372 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-g9mjr" podStartSLOduration=131.431348799 podStartE2EDuration="2m11.431348799s" podCreationTimestamp="2025-11-29 07:06:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:08:43.427443592 +0000 UTC m=+162.317804695" 
watchObservedRunningTime="2025-11-29 07:08:43.431348799 +0000 UTC m=+162.321709902" Nov 29 07:08:43 crc kubenswrapper[4731]: I1129 07:08:43.463678 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-n5tn2" event={"ID":"62d5acae-8dde-41c0-bbf4-66d294b8b64b","Type":"ContainerStarted","Data":"0eba4e16170cd3530ff4479197525fed77a107f3b0eac9240c361a0ebeb22c44"} Nov 29 07:08:43 crc kubenswrapper[4731]: I1129 07:08:43.475000 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8nrfn\" (UID: \"cf2cdf59-237b-432e-9e41-c37078755275\") " pod="openshift-image-registry/image-registry-697d97f7c8-8nrfn" Nov 29 07:08:43 crc kubenswrapper[4731]: E1129 07:08:43.475524 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:08:43.975506099 +0000 UTC m=+162.865867202 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8nrfn" (UID: "cf2cdf59-237b-432e-9e41-c37078755275") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:43 crc kubenswrapper[4731]: I1129 07:08:43.534919 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-s2twr" event={"ID":"07261c84-a163-4863-ae02-1fba80ec0b8f","Type":"ContainerStarted","Data":"beff2cd3697390401b1a6af3f3cc5a33e1d20d8eb4cf1743affa0a8e462f09c8"} Nov 29 07:08:43 crc kubenswrapper[4731]: I1129 07:08:43.560015 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k9k58" event={"ID":"c3630baf-f7fa-49f0-ae2c-63c28c98c2a8","Type":"ContainerStarted","Data":"856e41185ad57d29c49b5aa38fcae4d8d09650e6f23ef1231c8b5ff155c4e909"} Nov 29 07:08:43 crc kubenswrapper[4731]: I1129 07:08:43.576137 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:08:43 crc kubenswrapper[4731]: E1129 07:08:43.576640 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:08:44.076618849 +0000 UTC m=+162.966979952 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:43 crc kubenswrapper[4731]: I1129 07:08:43.579336 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-nn8lp" event={"ID":"b9c89890-1965-4fd0-875b-aed6485d9075","Type":"ContainerStarted","Data":"8283f4be28c0997b1eb21910c163d98dedb23ae807dd60a9a8bdc064c4fddee5"} Nov 29 07:08:43 crc kubenswrapper[4731]: I1129 07:08:43.586813 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ph5pw" event={"ID":"1520d3c0-4377-4a22-b7a2-025b6a9ac171","Type":"ContainerStarted","Data":"14cbce27f19577cd59532e1296582239ac3fb11c3d133a907aa34d7462e324dd"} Nov 29 07:08:43 crc kubenswrapper[4731]: I1129 07:08:43.595713 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-s2twr" podStartSLOduration=8.595528967 podStartE2EDuration="8.595528967s" podCreationTimestamp="2025-11-29 07:08:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:08:43.587550119 +0000 UTC m=+162.477911222" watchObservedRunningTime="2025-11-29 07:08:43.595528967 +0000 UTC m=+162.485890070" Nov 29 07:08:43 crc kubenswrapper[4731]: I1129 07:08:43.615870 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-wrp7q" 
event={"ID":"a49785b1-8138-4597-91ad-8d6fd4787286","Type":"ContainerStarted","Data":"c3dabfcfa5586c7e21663718bf67e8fe76854032d316bd9ea3a7f23bb2f3dcb0"} Nov 29 07:08:43 crc kubenswrapper[4731]: I1129 07:08:43.620851 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406660-2pc6s" event={"ID":"c47b7935-c3e7-4f98-b361-87ee3b481c3d","Type":"ContainerStarted","Data":"20d0d52e77404128409c3d217c1a6f884068e85d83ed36a887e39de2dba5188e"} Nov 29 07:08:43 crc kubenswrapper[4731]: I1129 07:08:43.627940 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wcj4m" event={"ID":"f2736b5d-2f13-4ef1-8bed-eadb88be8573","Type":"ContainerStarted","Data":"cb1383b80ca0d4cea0f00f56ef706f4c37100e36764e13a266963eb11722a85f"} Nov 29 07:08:43 crc kubenswrapper[4731]: I1129 07:08:43.642188 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k9k58" podStartSLOduration=132.642157125 podStartE2EDuration="2m12.642157125s" podCreationTimestamp="2025-11-29 07:06:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:08:43.635858912 +0000 UTC m=+162.526220025" watchObservedRunningTime="2025-11-29 07:08:43.642157125 +0000 UTC m=+162.532518228" Nov 29 07:08:43 crc kubenswrapper[4731]: I1129 07:08:43.665593 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29406660-2pc6s" podStartSLOduration=133.665549636 podStartE2EDuration="2m13.665549636s" podCreationTimestamp="2025-11-29 07:06:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:08:43.661302499 +0000 UTC m=+162.551663602" 
watchObservedRunningTime="2025-11-29 07:08:43.665549636 +0000 UTC m=+162.555910739" Nov 29 07:08:43 crc kubenswrapper[4731]: I1129 07:08:43.678163 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8nrfn\" (UID: \"cf2cdf59-237b-432e-9e41-c37078755275\") " pod="openshift-image-registry/image-registry-697d97f7c8-8nrfn" Nov 29 07:08:43 crc kubenswrapper[4731]: E1129 07:08:43.680433 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:08:44.180416323 +0000 UTC m=+163.070777426 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8nrfn" (UID: "cf2cdf59-237b-432e-9e41-c37078755275") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:43 crc kubenswrapper[4731]: I1129 07:08:43.716413 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qdch2" event={"ID":"5e911d5c-fa21-47e2-9ab8-12f919978585","Type":"ContainerStarted","Data":"649692c2a14d34fac2e5cf0ebdd8471fa24a40e98d91bc0b5b17ee028431324c"} Nov 29 07:08:43 crc kubenswrapper[4731]: I1129 07:08:43.738778 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-2qgzh" 
event={"ID":"8f435c3d-3db2-44dc-8a50-ea8f9475daa0","Type":"ContainerStarted","Data":"cf6a41e232ff2e8802393b7aa4d13aa723658fe49f1b01ed0c49aad89c7a342c"} Nov 29 07:08:43 crc kubenswrapper[4731]: I1129 07:08:43.738848 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-2qgzh" Nov 29 07:08:43 crc kubenswrapper[4731]: I1129 07:08:43.742553 4731 patch_prober.go:28] interesting pod/downloads-7954f5f757-c6kf9 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 29 07:08:43 crc kubenswrapper[4731]: I1129 07:08:43.742669 4731 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-c6kf9" podUID="4059535c-148b-4694-8c6f-ee8aae8ddc18" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 29 07:08:43 crc kubenswrapper[4731]: I1129 07:08:43.746414 4731 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-2qgzh container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.40:8080/healthz\": dial tcp 10.217.0.40:8080: connect: connection refused" start-of-body= Nov 29 07:08:43 crc kubenswrapper[4731]: I1129 07:08:43.746467 4731 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-2qgzh" podUID="8f435c3d-3db2-44dc-8a50-ea8f9475daa0" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.40:8080/healthz\": dial tcp 10.217.0.40:8080: connect: connection refused" Nov 29 07:08:43 crc kubenswrapper[4731]: I1129 07:08:43.750596 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qdch2" podStartSLOduration=133.750543814 podStartE2EDuration="2m13.750543814s" podCreationTimestamp="2025-11-29 07:06:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:08:43.739868812 +0000 UTC m=+162.630229915" watchObservedRunningTime="2025-11-29 07:08:43.750543814 +0000 UTC m=+162.640904917" Nov 29 07:08:43 crc kubenswrapper[4731]: I1129 07:08:43.756300 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sxxn4" Nov 29 07:08:43 crc kubenswrapper[4731]: I1129 07:08:43.767952 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p7p7h" Nov 29 07:08:43 crc kubenswrapper[4731]: I1129 07:08:43.781617 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-2qgzh" podStartSLOduration=131.781592075 podStartE2EDuration="2m11.781592075s" podCreationTimestamp="2025-11-29 07:06:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:08:43.781579595 +0000 UTC m=+162.671940698" watchObservedRunningTime="2025-11-29 07:08:43.781592075 +0000 UTC m=+162.671953178" Nov 29 07:08:43 crc kubenswrapper[4731]: I1129 07:08:43.782622 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:08:43 crc kubenswrapper[4731]: E1129 07:08:43.783491 4731 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:08:44.283424505 +0000 UTC m=+163.173785608 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:43 crc kubenswrapper[4731]: I1129 07:08:43.878723 4731 patch_prober.go:28] interesting pod/router-default-5444994796-2qd7z container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 29 07:08:43 crc kubenswrapper[4731]: [-]has-synced failed: reason withheld Nov 29 07:08:43 crc kubenswrapper[4731]: [+]process-running ok Nov 29 07:08:43 crc kubenswrapper[4731]: healthz check failed Nov 29 07:08:43 crc kubenswrapper[4731]: I1129 07:08:43.878777 4731 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2qd7z" podUID="328a2fcf-7e85-49ad-849c-f32818b5cd87" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 29 07:08:43 crc kubenswrapper[4731]: I1129 07:08:43.884944 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8nrfn\" (UID: \"cf2cdf59-237b-432e-9e41-c37078755275\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-8nrfn" Nov 29 07:08:43 crc kubenswrapper[4731]: E1129 07:08:43.893205 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:08:44.393054319 +0000 UTC m=+163.283415422 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8nrfn" (UID: "cf2cdf59-237b-432e-9e41-c37078755275") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:44 crc kubenswrapper[4731]: I1129 07:08:44.006980 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:08:44 crc kubenswrapper[4731]: E1129 07:08:44.008896 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:08:44.508849511 +0000 UTC m=+163.399210614 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:44 crc kubenswrapper[4731]: I1129 07:08:44.114858 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8nrfn\" (UID: \"cf2cdf59-237b-432e-9e41-c37078755275\") " pod="openshift-image-registry/image-registry-697d97f7c8-8nrfn" Nov 29 07:08:44 crc kubenswrapper[4731]: E1129 07:08:44.115321 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:08:44.615304308 +0000 UTC m=+163.505665411 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8nrfn" (UID: "cf2cdf59-237b-432e-9e41-c37078755275") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:44 crc kubenswrapper[4731]: I1129 07:08:44.227641 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:08:44 crc kubenswrapper[4731]: E1129 07:08:44.228617 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:08:44.728593312 +0000 UTC m=+163.618954415 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:44 crc kubenswrapper[4731]: I1129 07:08:44.302146 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gw6c8" Nov 29 07:08:44 crc kubenswrapper[4731]: I1129 07:08:44.329619 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8nrfn\" (UID: \"cf2cdf59-237b-432e-9e41-c37078755275\") " pod="openshift-image-registry/image-registry-697d97f7c8-8nrfn" Nov 29 07:08:44 crc kubenswrapper[4731]: E1129 07:08:44.330085 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:08:44.830066802 +0000 UTC m=+163.720427915 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8nrfn" (UID: "cf2cdf59-237b-432e-9e41-c37078755275") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:44 crc kubenswrapper[4731]: I1129 07:08:44.431253 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:08:44 crc kubenswrapper[4731]: E1129 07:08:44.431797 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:08:44.931773058 +0000 UTC m=+163.822134161 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:44 crc kubenswrapper[4731]: I1129 07:08:44.536522 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8nrfn\" (UID: \"cf2cdf59-237b-432e-9e41-c37078755275\") " pod="openshift-image-registry/image-registry-697d97f7c8-8nrfn" Nov 29 07:08:44 crc kubenswrapper[4731]: E1129 07:08:44.536984 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:08:45.03696495 +0000 UTC m=+163.927326043 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8nrfn" (UID: "cf2cdf59-237b-432e-9e41-c37078755275") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:44 crc kubenswrapper[4731]: I1129 07:08:44.638704 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:08:44 crc kubenswrapper[4731]: E1129 07:08:44.639259 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:08:45.139231122 +0000 UTC m=+164.029592225 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:44 crc kubenswrapper[4731]: I1129 07:08:44.740903 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8nrfn\" (UID: \"cf2cdf59-237b-432e-9e41-c37078755275\") " pod="openshift-image-registry/image-registry-697d97f7c8-8nrfn" Nov 29 07:08:44 crc kubenswrapper[4731]: E1129 07:08:44.741329 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:08:45.241311439 +0000 UTC m=+164.131672542 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8nrfn" (UID: "cf2cdf59-237b-432e-9e41-c37078755275") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:44 crc kubenswrapper[4731]: I1129 07:08:44.753744 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ph5pw" event={"ID":"1520d3c0-4377-4a22-b7a2-025b6a9ac171","Type":"ContainerStarted","Data":"bfcc3f17cd006f9987027b2e216f633648c956120f54c36a8f17bb575e6b3848"} Nov 29 07:08:44 crc kubenswrapper[4731]: I1129 07:08:44.753844 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ph5pw" event={"ID":"1520d3c0-4377-4a22-b7a2-025b6a9ac171","Type":"ContainerStarted","Data":"bb0a01044278f4f1ebb656e696cf11e4088c7e9499e08b6b215f5aff8de52350"} Nov 29 07:08:44 crc kubenswrapper[4731]: I1129 07:08:44.756378 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4dbfk" event={"ID":"3e3b7a64-8ec0-4b5e-86da-0c9d6d0428f3","Type":"ContainerStarted","Data":"0c5e9621442b4c6517075c6acde3f9e235c64b4e3e12035dc816a56d6770336a"} Nov 29 07:08:44 crc kubenswrapper[4731]: I1129 07:08:44.760112 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4j6z7" event={"ID":"f9adc895-cdb2-4bd5-87ff-aba173a1e6da","Type":"ContainerStarted","Data":"c19779fed41e218718c5c7048ee9594d530a503ca73a8260bc62f379089f688d"} Nov 29 07:08:44 crc kubenswrapper[4731]: I1129 07:08:44.761227 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4j6z7" Nov 29 07:08:44 crc kubenswrapper[4731]: I1129 07:08:44.765611 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-s2twr" event={"ID":"07261c84-a163-4863-ae02-1fba80ec0b8f","Type":"ContainerStarted","Data":"592867c0dbad15b4efd4095139c0179b5137c7fc6a267c64a959d5d303a716ca"} Nov 29 07:08:44 crc kubenswrapper[4731]: I1129 07:08:44.770306 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4j6z7" Nov 29 07:08:44 crc kubenswrapper[4731]: I1129 07:08:44.784110 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-wrp7q" event={"ID":"a49785b1-8138-4597-91ad-8d6fd4787286","Type":"ContainerStarted","Data":"5041f7efaa497f3fbe90923536321772f4c1f8ce19944b3ba483df4455d09f45"} Nov 29 07:08:44 crc kubenswrapper[4731]: I1129 07:08:44.784192 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-wrp7q" event={"ID":"a49785b1-8138-4597-91ad-8d6fd4787286","Type":"ContainerStarted","Data":"44d12fcc8070f1e1ab7001f01ef4d8fef9e1596ce6ce3f4f0cb587295169d0f0"} Nov 29 07:08:44 crc kubenswrapper[4731]: I1129 07:08:44.800015 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dm9bz" event={"ID":"8d48db02-9081-4e36-a6db-caa659b1eeb9","Type":"ContainerStarted","Data":"3dd3d4110b38d0729283bb3b41a94c27e88abaac8f0923c5bf06ab0b9bdbabb0"} Nov 29 07:08:44 crc kubenswrapper[4731]: I1129 07:08:44.802587 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ph5pw" podStartSLOduration=133.802540846 podStartE2EDuration="2m13.802540846s" podCreationTimestamp="2025-11-29 07:06:31 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:08:44.798583998 +0000 UTC m=+163.688945101" watchObservedRunningTime="2025-11-29 07:08:44.802540846 +0000 UTC m=+163.692901949" Nov 29 07:08:44 crc kubenswrapper[4731]: I1129 07:08:44.812551 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-wbfmm" event={"ID":"e6fc2573-9480-47d2-89b0-36b4501ef6e7","Type":"ContainerStarted","Data":"39dc06d525a46fb9ec1d35bfdd9e25076ee32db9ca1c766db2408dfd3d7b16fd"} Nov 29 07:08:44 crc kubenswrapper[4731]: I1129 07:08:44.820201 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dzwrr" event={"ID":"d7a41747-97b6-4431-ab85-a990220f34e7","Type":"ContainerStarted","Data":"524329cd0c9e71d227251cdc530757ff9da11787cf202e599721d05277be4bf6"} Nov 29 07:08:44 crc kubenswrapper[4731]: I1129 07:08:44.820261 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dzwrr" event={"ID":"d7a41747-97b6-4431-ab85-a990220f34e7","Type":"ContainerStarted","Data":"d5d8762b5bd9532b0f1a1d2d5602b9a537fca1ffbf4de705dda258897d66c1fd"} Nov 29 07:08:44 crc kubenswrapper[4731]: I1129 07:08:44.821081 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dzwrr" Nov 29 07:08:44 crc kubenswrapper[4731]: I1129 07:08:44.831634 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-m7s4c" event={"ID":"1235c0c3-c8ad-4cff-80f0-56bfdfa8dabe","Type":"ContainerStarted","Data":"a759a726976ee20306ae3d9dc8191c9f938418293b902e560cdfc9721fa5fe01"} Nov 29 07:08:44 crc kubenswrapper[4731]: I1129 07:08:44.831691 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-m7s4c" 
event={"ID":"1235c0c3-c8ad-4cff-80f0-56bfdfa8dabe","Type":"ContainerStarted","Data":"1e9dea66c350b3cf4568bbdc2f5b1f05ab5ce0428edae284e27d44e7cd4f361f"} Nov 29 07:08:44 crc kubenswrapper[4731]: I1129 07:08:44.833931 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406660-2pc6s" event={"ID":"c47b7935-c3e7-4f98-b361-87ee3b481c3d","Type":"ContainerStarted","Data":"a4af8ae7a2f8e44ed74f30897484aae9bfb6907076b4bebcc5abca4e110996ce"} Nov 29 07:08:44 crc kubenswrapper[4731]: I1129 07:08:44.841649 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:08:44 crc kubenswrapper[4731]: E1129 07:08:44.842769 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:08:45.342739098 +0000 UTC m=+164.233100231 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:44 crc kubenswrapper[4731]: I1129 07:08:44.843308 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8w2nm" event={"ID":"443884e3-cad9-4f39-944c-af34d6485520","Type":"ContainerStarted","Data":"3adcccd7bdb12cd592a2dd5b7ec11a4d4c0df0af2456133d3c95fb53fe620e14"} Nov 29 07:08:44 crc kubenswrapper[4731]: I1129 07:08:44.843730 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8nrfn\" (UID: \"cf2cdf59-237b-432e-9e41-c37078755275\") " pod="openshift-image-registry/image-registry-697d97f7c8-8nrfn" Nov 29 07:08:44 crc kubenswrapper[4731]: I1129 07:08:44.846873 4731 patch_prober.go:28] interesting pod/router-default-5444994796-2qd7z container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 29 07:08:44 crc kubenswrapper[4731]: [-]has-synced failed: reason withheld Nov 29 07:08:44 crc kubenswrapper[4731]: [+]process-running ok Nov 29 07:08:44 crc kubenswrapper[4731]: healthz check failed Nov 29 07:08:44 crc kubenswrapper[4731]: I1129 07:08:44.846958 4731 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2qd7z" podUID="328a2fcf-7e85-49ad-849c-f32818b5cd87" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 29 07:08:44 crc kubenswrapper[4731]: E1129 07:08:44.847551 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:08:45.347532639 +0000 UTC m=+164.237893742 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8nrfn" (UID: "cf2cdf59-237b-432e-9e41-c37078755275") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:44 crc kubenswrapper[4731]: I1129 07:08:44.866588 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-wrp7q" podStartSLOduration=132.86653894 podStartE2EDuration="2m12.86653894s" podCreationTimestamp="2025-11-29 07:06:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:08:44.866121228 +0000 UTC m=+163.756482331" watchObservedRunningTime="2025-11-29 07:08:44.86653894 +0000 UTC m=+163.756900043" Nov 29 07:08:44 crc kubenswrapper[4731]: I1129 07:08:44.881938 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-n5tn2" event={"ID":"62d5acae-8dde-41c0-bbf4-66d294b8b64b","Type":"ContainerStarted","Data":"0cc79b819302509175070fd999922a399718510c2b0df7046d8ac70794f5b612"} Nov 29 07:08:44 crc kubenswrapper[4731]: I1129 07:08:44.923192 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wcj4m" event={"ID":"f2736b5d-2f13-4ef1-8bed-eadb88be8573","Type":"ContainerStarted","Data":"bc8e9911b60f5511da0f4206366ccf0f09bab3b0c84337ddd571407a5e94114b"} Nov 29 07:08:44 crc kubenswrapper[4731]: I1129 07:08:44.955242 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:08:44 crc kubenswrapper[4731]: I1129 07:08:44.956646 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4j6z7" podStartSLOduration=132.956615467 podStartE2EDuration="2m12.956615467s" podCreationTimestamp="2025-11-29 07:06:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:08:44.911396299 +0000 UTC m=+163.801757402" watchObservedRunningTime="2025-11-29 07:08:44.956615467 +0000 UTC m=+163.846976570" Nov 29 07:08:44 crc kubenswrapper[4731]: I1129 07:08:44.956907 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4dbfk" podStartSLOduration=132.956901935 podStartE2EDuration="2m12.956901935s" podCreationTimestamp="2025-11-29 07:06:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:08:44.955113126 +0000 UTC m=+163.845474229" watchObservedRunningTime="2025-11-29 07:08:44.956901935 +0000 UTC m=+163.847263038" Nov 29 07:08:44 crc kubenswrapper[4731]: E1129 07:08:44.957313 4731 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:08:45.457291776 +0000 UTC m=+164.347652879 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:44 crc kubenswrapper[4731]: I1129 07:08:44.963091 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4zskc" event={"ID":"2694d49e-eb78-4db3-b047-2854125b8b26","Type":"ContainerStarted","Data":"75fe712a2c1ea994bbfb2646fc215ecb47e86abe00d084ba8b9ca79b2310108f"} Nov 29 07:08:44 crc kubenswrapper[4731]: I1129 07:08:44.991953 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-nn8lp" event={"ID":"b9c89890-1965-4fd0-875b-aed6485d9075","Type":"ContainerStarted","Data":"92ed98e238520b559842037d6a5fe52d0be6557946a87eac39b0b0c0731b15fd"} Nov 29 07:08:45 crc kubenswrapper[4731]: I1129 07:08:45.007837 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-wbfmm" podStartSLOduration=133.00781301 podStartE2EDuration="2m13.00781301s" podCreationTimestamp="2025-11-29 07:06:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:08:45.006298379 +0000 UTC m=+163.896659482" watchObservedRunningTime="2025-11-29 07:08:45.00781301 +0000 UTC m=+163.898174123" Nov 
29 07:08:45 crc kubenswrapper[4731]: I1129 07:08:45.011765 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-2qgzh" event={"ID":"8f435c3d-3db2-44dc-8a50-ea8f9475daa0","Type":"ContainerStarted","Data":"554048ef46b8c551becfb76f96eecd2c7c6785e00a1739dac1dcc22fc89dd27d"} Nov 29 07:08:45 crc kubenswrapper[4731]: I1129 07:08:45.018871 4731 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-2qgzh container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.40:8080/healthz\": dial tcp 10.217.0.40:8080: connect: connection refused" start-of-body= Nov 29 07:08:45 crc kubenswrapper[4731]: I1129 07:08:45.018949 4731 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-2qgzh" podUID="8f435c3d-3db2-44dc-8a50-ea8f9475daa0" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.40:8080/healthz\": dial tcp 10.217.0.40:8080: connect: connection refused" Nov 29 07:08:45 crc kubenswrapper[4731]: I1129 07:08:45.030045 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-d88tr" event={"ID":"d4b50f02-42a0-4d00-a8f1-e37bc8af1ef4","Type":"ContainerStarted","Data":"d1abc6abba38525b32d4388e48b946699c1e931a4e08a354e52891930a65e83b"} Nov 29 07:08:45 crc kubenswrapper[4731]: I1129 07:08:45.059196 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8nrfn\" (UID: \"cf2cdf59-237b-432e-9e41-c37078755275\") " pod="openshift-image-registry/image-registry-697d97f7c8-8nrfn" Nov 29 07:08:45 crc kubenswrapper[4731]: E1129 07:08:45.059635 4731 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:08:45.559618129 +0000 UTC m=+164.449979242 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8nrfn" (UID: "cf2cdf59-237b-432e-9e41-c37078755275") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:45 crc kubenswrapper[4731]: I1129 07:08:45.060091 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-t7s8k" event={"ID":"adc9b8a0-7f08-4fbb-ab52-aea81a845c05","Type":"ContainerStarted","Data":"3b50faf46686470812ce343efd3811305912a4aa09b5fbaafca767379fc5b18d"} Nov 29 07:08:45 crc kubenswrapper[4731]: I1129 07:08:45.078493 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-n5tn2" podStartSLOduration=134.078466646 podStartE2EDuration="2m14.078466646s" podCreationTimestamp="2025-11-29 07:06:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:08:45.07498544 +0000 UTC m=+163.965346543" watchObservedRunningTime="2025-11-29 07:08:45.078466646 +0000 UTC m=+163.968827739" Nov 29 07:08:45 crc kubenswrapper[4731]: I1129 07:08:45.084501 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4t7gr" event={"ID":"7efbdd7b-0ed3-493a-ad73-530648c5ce6e","Type":"ContainerStarted","Data":"c757c1edaca81d737ef9c45f787f863ac3c32347a92ce1446ac518a75fb6476d"} Nov 29 07:08:45 crc kubenswrapper[4731]: 
I1129 07:08:45.084581 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4t7gr" Nov 29 07:08:45 crc kubenswrapper[4731]: I1129 07:08:45.128851 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4t7gr" Nov 29 07:08:45 crc kubenswrapper[4731]: I1129 07:08:45.160206 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:08:45 crc kubenswrapper[4731]: E1129 07:08:45.162378 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:08:45.662348114 +0000 UTC m=+164.552709217 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:45 crc kubenswrapper[4731]: I1129 07:08:45.260920 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dzwrr" podStartSLOduration=133.260877243 podStartE2EDuration="2m13.260877243s" podCreationTimestamp="2025-11-29 07:06:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:08:45.178069585 +0000 UTC m=+164.068430688" watchObservedRunningTime="2025-11-29 07:08:45.260877243 +0000 UTC m=+164.151238346" Nov 29 07:08:45 crc kubenswrapper[4731]: I1129 07:08:45.266043 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8nrfn\" (UID: \"cf2cdf59-237b-432e-9e41-c37078755275\") " pod="openshift-image-registry/image-registry-697d97f7c8-8nrfn" Nov 29 07:08:45 crc kubenswrapper[4731]: E1129 07:08:45.266651 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:08:45.766632591 +0000 UTC m=+164.656993694 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8nrfn" (UID: "cf2cdf59-237b-432e-9e41-c37078755275") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:45 crc kubenswrapper[4731]: I1129 07:08:45.281441 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wcj4m" podStartSLOduration=134.281411916 podStartE2EDuration="2m14.281411916s" podCreationTimestamp="2025-11-29 07:06:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:08:45.250251242 +0000 UTC m=+164.140612355" watchObservedRunningTime="2025-11-29 07:08:45.281411916 +0000 UTC m=+164.171773019" Nov 29 07:08:45 crc kubenswrapper[4731]: I1129 07:08:45.306666 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-m7s4c" podStartSLOduration=135.306642077 podStartE2EDuration="2m15.306642077s" podCreationTimestamp="2025-11-29 07:06:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:08:45.306244746 +0000 UTC m=+164.196605849" watchObservedRunningTime="2025-11-29 07:08:45.306642077 +0000 UTC m=+164.197003180" Nov 29 07:08:45 crc kubenswrapper[4731]: I1129 07:08:45.369631 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:08:45 crc kubenswrapper[4731]: E1129 07:08:45.370196 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:08:45.870172188 +0000 UTC m=+164.760533291 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:45 crc kubenswrapper[4731]: I1129 07:08:45.427701 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dm9bz" podStartSLOduration=133.427674563 podStartE2EDuration="2m13.427674563s" podCreationTimestamp="2025-11-29 07:06:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:08:45.36588193 +0000 UTC m=+164.256243033" watchObservedRunningTime="2025-11-29 07:08:45.427674563 +0000 UTC m=+164.318035666" Nov 29 07:08:45 crc kubenswrapper[4731]: I1129 07:08:45.473501 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8nrfn\" (UID: \"cf2cdf59-237b-432e-9e41-c37078755275\") " pod="openshift-image-registry/image-registry-697d97f7c8-8nrfn" Nov 
29 07:08:45 crc kubenswrapper[4731]: E1129 07:08:45.473986 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:08:45.973970862 +0000 UTC m=+164.864331965 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8nrfn" (UID: "cf2cdf59-237b-432e-9e41-c37078755275") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:45 crc kubenswrapper[4731]: I1129 07:08:45.489909 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4t7gr" podStartSLOduration=133.489888868 podStartE2EDuration="2m13.489888868s" podCreationTimestamp="2025-11-29 07:06:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:08:45.433874523 +0000 UTC m=+164.324235626" watchObservedRunningTime="2025-11-29 07:08:45.489888868 +0000 UTC m=+164.380249971" Nov 29 07:08:45 crc kubenswrapper[4731]: I1129 07:08:45.492154 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-t7s8k" podStartSLOduration=133.492141739 podStartE2EDuration="2m13.492141739s" podCreationTimestamp="2025-11-29 07:06:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:08:45.489084526 +0000 UTC m=+164.379445639" watchObservedRunningTime="2025-11-29 07:08:45.492141739 +0000 UTC m=+164.382502842" Nov 
29 07:08:45 crc kubenswrapper[4731]: I1129 07:08:45.557281 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4zskc" podStartSLOduration=133.557260444 podStartE2EDuration="2m13.557260444s" podCreationTimestamp="2025-11-29 07:06:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:08:45.55602145 +0000 UTC m=+164.446382553" watchObservedRunningTime="2025-11-29 07:08:45.557260444 +0000 UTC m=+164.447621547" Nov 29 07:08:45 crc kubenswrapper[4731]: I1129 07:08:45.576427 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:08:45 crc kubenswrapper[4731]: E1129 07:08:45.576924 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:08:46.076902282 +0000 UTC m=+164.967263385 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:45 crc kubenswrapper[4731]: I1129 07:08:45.641679 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-d88tr" podStartSLOduration=133.641642375 podStartE2EDuration="2m13.641642375s" podCreationTimestamp="2025-11-29 07:06:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:08:45.625633607 +0000 UTC m=+164.515994730" watchObservedRunningTime="2025-11-29 07:08:45.641642375 +0000 UTC m=+164.532003478" Nov 29 07:08:45 crc kubenswrapper[4731]: I1129 07:08:45.678710 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8nrfn\" (UID: \"cf2cdf59-237b-432e-9e41-c37078755275\") " pod="openshift-image-registry/image-registry-697d97f7c8-8nrfn" Nov 29 07:08:45 crc kubenswrapper[4731]: E1129 07:08:45.679197 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:08:46.179171653 +0000 UTC m=+165.069532766 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8nrfn" (UID: "cf2cdf59-237b-432e-9e41-c37078755275") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:45 crc kubenswrapper[4731]: I1129 07:08:45.780202 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:08:45 crc kubenswrapper[4731]: E1129 07:08:45.780864 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:08:46.280838109 +0000 UTC m=+165.171199212 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:45 crc kubenswrapper[4731]: I1129 07:08:45.849149 4731 patch_prober.go:28] interesting pod/router-default-5444994796-2qd7z container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 29 07:08:45 crc kubenswrapper[4731]: [-]has-synced failed: reason withheld Nov 29 07:08:45 crc kubenswrapper[4731]: [+]process-running ok Nov 29 07:08:45 crc kubenswrapper[4731]: healthz check failed Nov 29 07:08:45 crc kubenswrapper[4731]: I1129 07:08:45.849220 4731 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2qd7z" podUID="328a2fcf-7e85-49ad-849c-f32818b5cd87" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 29 07:08:45 crc kubenswrapper[4731]: I1129 07:08:45.884285 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8nrfn\" (UID: \"cf2cdf59-237b-432e-9e41-c37078755275\") " pod="openshift-image-registry/image-registry-697d97f7c8-8nrfn" Nov 29 07:08:45 crc kubenswrapper[4731]: E1129 07:08:45.884836 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-11-29 07:08:46.384817948 +0000 UTC m=+165.275179061 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8nrfn" (UID: "cf2cdf59-237b-432e-9e41-c37078755275") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:45 crc kubenswrapper[4731]: I1129 07:08:45.985222 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:08:45 crc kubenswrapper[4731]: E1129 07:08:45.985481 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:08:46.485434334 +0000 UTC m=+165.375795437 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:45 crc kubenswrapper[4731]: I1129 07:08:45.985588 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8nrfn\" (UID: \"cf2cdf59-237b-432e-9e41-c37078755275\") " pod="openshift-image-registry/image-registry-697d97f7c8-8nrfn" Nov 29 07:08:45 crc kubenswrapper[4731]: E1129 07:08:45.986002 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:08:46.485986159 +0000 UTC m=+165.376347262 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8nrfn" (UID: "cf2cdf59-237b-432e-9e41-c37078755275") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:45 crc kubenswrapper[4731]: I1129 07:08:45.995119 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-n9x6g"] Nov 29 07:08:45 crc kubenswrapper[4731]: I1129 07:08:45.996622 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n9x6g" Nov 29 07:08:46 crc kubenswrapper[4731]: I1129 07:08:46.011781 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 29 07:08:46 crc kubenswrapper[4731]: I1129 07:08:46.087319 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:08:46 crc kubenswrapper[4731]: I1129 07:08:46.087757 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8519b0da-9e0e-4c34-98b0-cbcb4030af39-catalog-content\") pod \"community-operators-n9x6g\" (UID: \"8519b0da-9e0e-4c34-98b0-cbcb4030af39\") " pod="openshift-marketplace/community-operators-n9x6g" Nov 29 07:08:46 crc kubenswrapper[4731]: I1129 07:08:46.087858 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/8519b0da-9e0e-4c34-98b0-cbcb4030af39-utilities\") pod \"community-operators-n9x6g\" (UID: \"8519b0da-9e0e-4c34-98b0-cbcb4030af39\") " pod="openshift-marketplace/community-operators-n9x6g" Nov 29 07:08:46 crc kubenswrapper[4731]: I1129 07:08:46.087888 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6lhf\" (UniqueName: \"kubernetes.io/projected/8519b0da-9e0e-4c34-98b0-cbcb4030af39-kube-api-access-s6lhf\") pod \"community-operators-n9x6g\" (UID: \"8519b0da-9e0e-4c34-98b0-cbcb4030af39\") " pod="openshift-marketplace/community-operators-n9x6g" Nov 29 07:08:46 crc kubenswrapper[4731]: E1129 07:08:46.088043 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:08:46.588017925 +0000 UTC m=+165.478379038 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:46 crc kubenswrapper[4731]: I1129 07:08:46.095052 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-nn8lp" event={"ID":"b9c89890-1965-4fd0-875b-aed6485d9075","Type":"ContainerStarted","Data":"266d5f32ae835a1b451ec1021a7ccb7e934458842956aaad3b94ffded3237ee9"} Nov 29 07:08:46 crc kubenswrapper[4731]: I1129 07:08:46.100900 4731 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-2qgzh container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.40:8080/healthz\": dial tcp 10.217.0.40:8080: connect: connection refused" start-of-body= Nov 29 07:08:46 crc kubenswrapper[4731]: I1129 07:08:46.100981 4731 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-2qgzh" podUID="8f435c3d-3db2-44dc-8a50-ea8f9475daa0" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.40:8080/healthz\": dial tcp 10.217.0.40:8080: connect: connection refused" Nov 29 07:08:46 crc kubenswrapper[4731]: I1129 07:08:46.112490 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-n9x6g"] Nov 29 07:08:46 crc kubenswrapper[4731]: I1129 07:08:46.165332 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-kp4gj"] Nov 29 07:08:46 crc kubenswrapper[4731]: I1129 07:08:46.166843 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kp4gj" Nov 29 07:08:46 crc kubenswrapper[4731]: I1129 07:08:46.174369 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 29 07:08:46 crc kubenswrapper[4731]: I1129 07:08:46.189242 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8519b0da-9e0e-4c34-98b0-cbcb4030af39-catalog-content\") pod \"community-operators-n9x6g\" (UID: \"8519b0da-9e0e-4c34-98b0-cbcb4030af39\") " pod="openshift-marketplace/community-operators-n9x6g" Nov 29 07:08:46 crc kubenswrapper[4731]: I1129 07:08:46.189843 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8519b0da-9e0e-4c34-98b0-cbcb4030af39-utilities\") pod \"community-operators-n9x6g\" (UID: \"8519b0da-9e0e-4c34-98b0-cbcb4030af39\") " pod="openshift-marketplace/community-operators-n9x6g" Nov 29 07:08:46 crc kubenswrapper[4731]: I1129 07:08:46.189906 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6lhf\" (UniqueName: \"kubernetes.io/projected/8519b0da-9e0e-4c34-98b0-cbcb4030af39-kube-api-access-s6lhf\") pod \"community-operators-n9x6g\" (UID: \"8519b0da-9e0e-4c34-98b0-cbcb4030af39\") " pod="openshift-marketplace/community-operators-n9x6g" Nov 29 07:08:46 crc kubenswrapper[4731]: I1129 07:08:46.190032 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8nrfn\" (UID: \"cf2cdf59-237b-432e-9e41-c37078755275\") " pod="openshift-image-registry/image-registry-697d97f7c8-8nrfn" Nov 29 07:08:46 crc kubenswrapper[4731]: I1129 07:08:46.193015 4731 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8519b0da-9e0e-4c34-98b0-cbcb4030af39-catalog-content\") pod \"community-operators-n9x6g\" (UID: \"8519b0da-9e0e-4c34-98b0-cbcb4030af39\") " pod="openshift-marketplace/community-operators-n9x6g" Nov 29 07:08:46 crc kubenswrapper[4731]: I1129 07:08:46.193439 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8519b0da-9e0e-4c34-98b0-cbcb4030af39-utilities\") pod \"community-operators-n9x6g\" (UID: \"8519b0da-9e0e-4c34-98b0-cbcb4030af39\") " pod="openshift-marketplace/community-operators-n9x6g" Nov 29 07:08:46 crc kubenswrapper[4731]: E1129 07:08:46.193892 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:08:46.693876465 +0000 UTC m=+165.584237568 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8nrfn" (UID: "cf2cdf59-237b-432e-9e41-c37078755275") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:46 crc kubenswrapper[4731]: I1129 07:08:46.263833 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6lhf\" (UniqueName: \"kubernetes.io/projected/8519b0da-9e0e-4c34-98b0-cbcb4030af39-kube-api-access-s6lhf\") pod \"community-operators-n9x6g\" (UID: \"8519b0da-9e0e-4c34-98b0-cbcb4030af39\") " pod="openshift-marketplace/community-operators-n9x6g" Nov 29 07:08:46 crc kubenswrapper[4731]: I1129 07:08:46.308234 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kp4gj"] Nov 29 07:08:46 crc kubenswrapper[4731]: I1129 07:08:46.308294 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:08:46 crc kubenswrapper[4731]: E1129 07:08:46.308448 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:08:46.808422643 +0000 UTC m=+165.698783756 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:46 crc kubenswrapper[4731]: I1129 07:08:46.308714 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90d637c3-be0e-49b6-ac5a-5cb721948345-catalog-content\") pod \"certified-operators-kp4gj\" (UID: \"90d637c3-be0e-49b6-ac5a-5cb721948345\") " pod="openshift-marketplace/certified-operators-kp4gj" Nov 29 07:08:46 crc kubenswrapper[4731]: I1129 07:08:46.308799 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rffjk\" (UniqueName: \"kubernetes.io/projected/90d637c3-be0e-49b6-ac5a-5cb721948345-kube-api-access-rffjk\") pod \"certified-operators-kp4gj\" (UID: \"90d637c3-be0e-49b6-ac5a-5cb721948345\") " pod="openshift-marketplace/certified-operators-kp4gj" Nov 29 07:08:46 crc kubenswrapper[4731]: I1129 07:08:46.308872 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8nrfn\" (UID: \"cf2cdf59-237b-432e-9e41-c37078755275\") " pod="openshift-image-registry/image-registry-697d97f7c8-8nrfn" Nov 29 07:08:46 crc kubenswrapper[4731]: I1129 07:08:46.308916 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/90d637c3-be0e-49b6-ac5a-5cb721948345-utilities\") pod \"certified-operators-kp4gj\" (UID: \"90d637c3-be0e-49b6-ac5a-5cb721948345\") " pod="openshift-marketplace/certified-operators-kp4gj" Nov 29 07:08:46 crc kubenswrapper[4731]: E1129 07:08:46.309409 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:08:46.80939014 +0000 UTC m=+165.699751253 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8nrfn" (UID: "cf2cdf59-237b-432e-9e41-c37078755275") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:46 crc kubenswrapper[4731]: I1129 07:08:46.310982 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n9x6g" Nov 29 07:08:46 crc kubenswrapper[4731]: I1129 07:08:46.379062 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-c9bpb"] Nov 29 07:08:46 crc kubenswrapper[4731]: I1129 07:08:46.380494 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-c9bpb" Nov 29 07:08:46 crc kubenswrapper[4731]: I1129 07:08:46.400009 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c9bpb"] Nov 29 07:08:46 crc kubenswrapper[4731]: I1129 07:08:46.414428 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:08:46 crc kubenswrapper[4731]: E1129 07:08:46.414589 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:08:46.914551031 +0000 UTC m=+165.804912134 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:46 crc kubenswrapper[4731]: I1129 07:08:46.417223 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rffjk\" (UniqueName: \"kubernetes.io/projected/90d637c3-be0e-49b6-ac5a-5cb721948345-kube-api-access-rffjk\") pod \"certified-operators-kp4gj\" (UID: \"90d637c3-be0e-49b6-ac5a-5cb721948345\") " pod="openshift-marketplace/certified-operators-kp4gj" Nov 29 07:08:46 crc kubenswrapper[4731]: I1129 07:08:46.417729 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8nrfn\" (UID: \"cf2cdf59-237b-432e-9e41-c37078755275\") " pod="openshift-image-registry/image-registry-697d97f7c8-8nrfn" Nov 29 07:08:46 crc kubenswrapper[4731]: I1129 07:08:46.417794 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90d637c3-be0e-49b6-ac5a-5cb721948345-utilities\") pod \"certified-operators-kp4gj\" (UID: \"90d637c3-be0e-49b6-ac5a-5cb721948345\") " pod="openshift-marketplace/certified-operators-kp4gj" Nov 29 07:08:46 crc kubenswrapper[4731]: I1129 07:08:46.417868 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90d637c3-be0e-49b6-ac5a-5cb721948345-catalog-content\") pod \"certified-operators-kp4gj\" (UID: 
\"90d637c3-be0e-49b6-ac5a-5cb721948345\") " pod="openshift-marketplace/certified-operators-kp4gj" Nov 29 07:08:46 crc kubenswrapper[4731]: I1129 07:08:46.418423 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90d637c3-be0e-49b6-ac5a-5cb721948345-catalog-content\") pod \"certified-operators-kp4gj\" (UID: \"90d637c3-be0e-49b6-ac5a-5cb721948345\") " pod="openshift-marketplace/certified-operators-kp4gj" Nov 29 07:08:46 crc kubenswrapper[4731]: E1129 07:08:46.418863 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:08:46.918851229 +0000 UTC m=+165.809212332 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8nrfn" (UID: "cf2cdf59-237b-432e-9e41-c37078755275") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:46 crc kubenswrapper[4731]: I1129 07:08:46.419066 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90d637c3-be0e-49b6-ac5a-5cb721948345-utilities\") pod \"certified-operators-kp4gj\" (UID: \"90d637c3-be0e-49b6-ac5a-5cb721948345\") " pod="openshift-marketplace/certified-operators-kp4gj" Nov 29 07:08:46 crc kubenswrapper[4731]: I1129 07:08:46.458850 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rffjk\" (UniqueName: \"kubernetes.io/projected/90d637c3-be0e-49b6-ac5a-5cb721948345-kube-api-access-rffjk\") pod \"certified-operators-kp4gj\" (UID: 
\"90d637c3-be0e-49b6-ac5a-5cb721948345\") " pod="openshift-marketplace/certified-operators-kp4gj" Nov 29 07:08:46 crc kubenswrapper[4731]: I1129 07:08:46.513306 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kp4gj" Nov 29 07:08:46 crc kubenswrapper[4731]: I1129 07:08:46.519693 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:08:46 crc kubenswrapper[4731]: I1129 07:08:46.519981 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84b54257-ab5f-4f89-8ff2-5f725c4b8662-catalog-content\") pod \"community-operators-c9bpb\" (UID: \"84b54257-ab5f-4f89-8ff2-5f725c4b8662\") " pod="openshift-marketplace/community-operators-c9bpb" Nov 29 07:08:46 crc kubenswrapper[4731]: I1129 07:08:46.520025 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84b54257-ab5f-4f89-8ff2-5f725c4b8662-utilities\") pod \"community-operators-c9bpb\" (UID: \"84b54257-ab5f-4f89-8ff2-5f725c4b8662\") " pod="openshift-marketplace/community-operators-c9bpb" Nov 29 07:08:46 crc kubenswrapper[4731]: I1129 07:08:46.520139 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssxhl\" (UniqueName: \"kubernetes.io/projected/84b54257-ab5f-4f89-8ff2-5f725c4b8662-kube-api-access-ssxhl\") pod \"community-operators-c9bpb\" (UID: \"84b54257-ab5f-4f89-8ff2-5f725c4b8662\") " pod="openshift-marketplace/community-operators-c9bpb" Nov 29 07:08:46 crc kubenswrapper[4731]: E1129 07:08:46.520289 
4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:08:47.020272257 +0000 UTC m=+165.910633360 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:46 crc kubenswrapper[4731]: I1129 07:08:46.580926 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-stkcw"] Nov 29 07:08:46 crc kubenswrapper[4731]: I1129 07:08:46.585974 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-stkcw" Nov 29 07:08:46 crc kubenswrapper[4731]: I1129 07:08:46.600789 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-stkcw"] Nov 29 07:08:46 crc kubenswrapper[4731]: I1129 07:08:46.622640 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ssxhl\" (UniqueName: \"kubernetes.io/projected/84b54257-ab5f-4f89-8ff2-5f725c4b8662-kube-api-access-ssxhl\") pod \"community-operators-c9bpb\" (UID: \"84b54257-ab5f-4f89-8ff2-5f725c4b8662\") " pod="openshift-marketplace/community-operators-c9bpb" Nov 29 07:08:46 crc kubenswrapper[4731]: I1129 07:08:46.622734 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84b54257-ab5f-4f89-8ff2-5f725c4b8662-catalog-content\") pod \"community-operators-c9bpb\" (UID: \"84b54257-ab5f-4f89-8ff2-5f725c4b8662\") " pod="openshift-marketplace/community-operators-c9bpb" Nov 29 07:08:46 crc kubenswrapper[4731]: I1129 07:08:46.622752 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84b54257-ab5f-4f89-8ff2-5f725c4b8662-utilities\") pod \"community-operators-c9bpb\" (UID: \"84b54257-ab5f-4f89-8ff2-5f725c4b8662\") " pod="openshift-marketplace/community-operators-c9bpb" Nov 29 07:08:46 crc kubenswrapper[4731]: I1129 07:08:46.622811 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8nrfn\" (UID: \"cf2cdf59-237b-432e-9e41-c37078755275\") " pod="openshift-image-registry/image-registry-697d97f7c8-8nrfn" Nov 29 07:08:46 crc kubenswrapper[4731]: E1129 07:08:46.623293 4731 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:08:47.123272518 +0000 UTC m=+166.013633621 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8nrfn" (UID: "cf2cdf59-237b-432e-9e41-c37078755275") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:46 crc kubenswrapper[4731]: I1129 07:08:46.624431 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84b54257-ab5f-4f89-8ff2-5f725c4b8662-catalog-content\") pod \"community-operators-c9bpb\" (UID: \"84b54257-ab5f-4f89-8ff2-5f725c4b8662\") " pod="openshift-marketplace/community-operators-c9bpb" Nov 29 07:08:46 crc kubenswrapper[4731]: I1129 07:08:46.624751 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84b54257-ab5f-4f89-8ff2-5f725c4b8662-utilities\") pod \"community-operators-c9bpb\" (UID: \"84b54257-ab5f-4f89-8ff2-5f725c4b8662\") " pod="openshift-marketplace/community-operators-c9bpb" Nov 29 07:08:46 crc kubenswrapper[4731]: I1129 07:08:46.693597 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssxhl\" (UniqueName: \"kubernetes.io/projected/84b54257-ab5f-4f89-8ff2-5f725c4b8662-kube-api-access-ssxhl\") pod \"community-operators-c9bpb\" (UID: \"84b54257-ab5f-4f89-8ff2-5f725c4b8662\") " pod="openshift-marketplace/community-operators-c9bpb" Nov 29 07:08:46 crc kubenswrapper[4731]: I1129 07:08:46.728516 4731 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:08:46 crc kubenswrapper[4731]: I1129 07:08:46.728903 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7cde9a9c-1d79-4400-8830-69f304229886-utilities\") pod \"certified-operators-stkcw\" (UID: \"7cde9a9c-1d79-4400-8830-69f304229886\") " pod="openshift-marketplace/certified-operators-stkcw" Nov 29 07:08:46 crc kubenswrapper[4731]: I1129 07:08:46.729005 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7cde9a9c-1d79-4400-8830-69f304229886-catalog-content\") pod \"certified-operators-stkcw\" (UID: \"7cde9a9c-1d79-4400-8830-69f304229886\") " pod="openshift-marketplace/certified-operators-stkcw" Nov 29 07:08:46 crc kubenswrapper[4731]: I1129 07:08:46.729044 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtcwv\" (UniqueName: \"kubernetes.io/projected/7cde9a9c-1d79-4400-8830-69f304229886-kube-api-access-gtcwv\") pod \"certified-operators-stkcw\" (UID: \"7cde9a9c-1d79-4400-8830-69f304229886\") " pod="openshift-marketplace/certified-operators-stkcw" Nov 29 07:08:46 crc kubenswrapper[4731]: E1129 07:08:46.729218 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:08:47.22918862 +0000 UTC m=+166.119549723 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:46 crc kubenswrapper[4731]: I1129 07:08:46.740081 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c9bpb" Nov 29 07:08:46 crc kubenswrapper[4731]: I1129 07:08:46.832144 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8nrfn\" (UID: \"cf2cdf59-237b-432e-9e41-c37078755275\") " pod="openshift-image-registry/image-registry-697d97f7c8-8nrfn" Nov 29 07:08:46 crc kubenswrapper[4731]: I1129 07:08:46.832803 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7cde9a9c-1d79-4400-8830-69f304229886-utilities\") pod \"certified-operators-stkcw\" (UID: \"7cde9a9c-1d79-4400-8830-69f304229886\") " pod="openshift-marketplace/certified-operators-stkcw" Nov 29 07:08:46 crc kubenswrapper[4731]: I1129 07:08:46.832924 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7cde9a9c-1d79-4400-8830-69f304229886-catalog-content\") pod \"certified-operators-stkcw\" (UID: \"7cde9a9c-1d79-4400-8830-69f304229886\") " pod="openshift-marketplace/certified-operators-stkcw" Nov 29 07:08:46 crc kubenswrapper[4731]: I1129 07:08:46.832949 4731 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-gtcwv\" (UniqueName: \"kubernetes.io/projected/7cde9a9c-1d79-4400-8830-69f304229886-kube-api-access-gtcwv\") pod \"certified-operators-stkcw\" (UID: \"7cde9a9c-1d79-4400-8830-69f304229886\") " pod="openshift-marketplace/certified-operators-stkcw" Nov 29 07:08:46 crc kubenswrapper[4731]: E1129 07:08:46.834245 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:08:47.334233438 +0000 UTC m=+166.224594541 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8nrfn" (UID: "cf2cdf59-237b-432e-9e41-c37078755275") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:46 crc kubenswrapper[4731]: I1129 07:08:46.834836 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7cde9a9c-1d79-4400-8830-69f304229886-utilities\") pod \"certified-operators-stkcw\" (UID: \"7cde9a9c-1d79-4400-8830-69f304229886\") " pod="openshift-marketplace/certified-operators-stkcw" Nov 29 07:08:46 crc kubenswrapper[4731]: I1129 07:08:46.835055 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7cde9a9c-1d79-4400-8830-69f304229886-catalog-content\") pod \"certified-operators-stkcw\" (UID: \"7cde9a9c-1d79-4400-8830-69f304229886\") " pod="openshift-marketplace/certified-operators-stkcw" Nov 29 07:08:46 crc kubenswrapper[4731]: I1129 07:08:46.848977 4731 patch_prober.go:28] interesting 
pod/router-default-5444994796-2qd7z container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 29 07:08:46 crc kubenswrapper[4731]: [-]has-synced failed: reason withheld Nov 29 07:08:46 crc kubenswrapper[4731]: [+]process-running ok Nov 29 07:08:46 crc kubenswrapper[4731]: healthz check failed Nov 29 07:08:46 crc kubenswrapper[4731]: I1129 07:08:46.849054 4731 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2qd7z" podUID="328a2fcf-7e85-49ad-849c-f32818b5cd87" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 29 07:08:46 crc kubenswrapper[4731]: I1129 07:08:46.867729 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtcwv\" (UniqueName: \"kubernetes.io/projected/7cde9a9c-1d79-4400-8830-69f304229886-kube-api-access-gtcwv\") pod \"certified-operators-stkcw\" (UID: \"7cde9a9c-1d79-4400-8830-69f304229886\") " pod="openshift-marketplace/certified-operators-stkcw" Nov 29 07:08:46 crc kubenswrapper[4731]: I1129 07:08:46.903527 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-n9x6g"] Nov 29 07:08:46 crc kubenswrapper[4731]: I1129 07:08:46.934062 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:08:46 crc kubenswrapper[4731]: E1129 07:08:46.934627 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-29 07:08:47.434598598 +0000 UTC m=+166.324959701 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:47 crc kubenswrapper[4731]: I1129 07:08:47.035947 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8nrfn\" (UID: \"cf2cdf59-237b-432e-9e41-c37078755275\") " pod="openshift-image-registry/image-registry-697d97f7c8-8nrfn" Nov 29 07:08:47 crc kubenswrapper[4731]: E1129 07:08:47.036377 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:08:47.536359376 +0000 UTC m=+166.426720479 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8nrfn" (UID: "cf2cdf59-237b-432e-9e41-c37078755275") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:47 crc kubenswrapper[4731]: I1129 07:08:47.070663 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kp4gj"] Nov 29 07:08:47 crc kubenswrapper[4731]: W1129 07:08:47.086778 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod90d637c3_be0e_49b6_ac5a_5cb721948345.slice/crio-ce40cec9604b5317a9ea9d1e11fafaf53b459383ef383d1d7f25180572c298bc WatchSource:0}: Error finding container ce40cec9604b5317a9ea9d1e11fafaf53b459383ef383d1d7f25180572c298bc: Status 404 returned error can't find the container with id ce40cec9604b5317a9ea9d1e11fafaf53b459383ef383d1d7f25180572c298bc Nov 29 07:08:47 crc kubenswrapper[4731]: I1129 07:08:47.127723 4731 generic.go:334] "Generic (PLEG): container finished" podID="c47b7935-c3e7-4f98-b361-87ee3b481c3d" containerID="a4af8ae7a2f8e44ed74f30897484aae9bfb6907076b4bebcc5abca4e110996ce" exitCode=0 Nov 29 07:08:47 crc kubenswrapper[4731]: I1129 07:08:47.128279 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406660-2pc6s" event={"ID":"c47b7935-c3e7-4f98-b361-87ee3b481c3d","Type":"ContainerDied","Data":"a4af8ae7a2f8e44ed74f30897484aae9bfb6907076b4bebcc5abca4e110996ce"} Nov 29 07:08:47 crc kubenswrapper[4731]: I1129 07:08:47.129021 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-stkcw" Nov 29 07:08:47 crc kubenswrapper[4731]: I1129 07:08:47.133620 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kp4gj" event={"ID":"90d637c3-be0e-49b6-ac5a-5cb721948345","Type":"ContainerStarted","Data":"ce40cec9604b5317a9ea9d1e11fafaf53b459383ef383d1d7f25180572c298bc"} Nov 29 07:08:47 crc kubenswrapper[4731]: I1129 07:08:47.137294 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:08:47 crc kubenswrapper[4731]: E1129 07:08:47.137804 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:08:47.637772784 +0000 UTC m=+166.528133887 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:47 crc kubenswrapper[4731]: I1129 07:08:47.177617 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-nn8lp" event={"ID":"b9c89890-1965-4fd0-875b-aed6485d9075","Type":"ContainerStarted","Data":"07f47589321a509f008009bcd4337b47d8381b68ec9b1b382d0f0b55e97f14ba"} Nov 29 07:08:47 crc kubenswrapper[4731]: I1129 07:08:47.194015 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n9x6g" event={"ID":"8519b0da-9e0e-4c34-98b0-cbcb4030af39","Type":"ContainerStarted","Data":"17cd2cd217c1b418cbbe98382ee84088059d93f7f64fa57cb9f47ec0a759eb67"} Nov 29 07:08:47 crc kubenswrapper[4731]: I1129 07:08:47.240009 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8nrfn\" (UID: \"cf2cdf59-237b-432e-9e41-c37078755275\") " pod="openshift-image-registry/image-registry-697d97f7c8-8nrfn" Nov 29 07:08:47 crc kubenswrapper[4731]: E1129 07:08:47.240527 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:08:47.740513709 +0000 UTC m=+166.630874802 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8nrfn" (UID: "cf2cdf59-237b-432e-9e41-c37078755275") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:47 crc kubenswrapper[4731]: I1129 07:08:47.341241 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:08:47 crc kubenswrapper[4731]: E1129 07:08:47.341686 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:08:47.84165065 +0000 UTC m=+166.732011753 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:47 crc kubenswrapper[4731]: I1129 07:08:47.341988 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8nrfn\" (UID: \"cf2cdf59-237b-432e-9e41-c37078755275\") " pod="openshift-image-registry/image-registry-697d97f7c8-8nrfn" Nov 29 07:08:47 crc kubenswrapper[4731]: E1129 07:08:47.342471 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:08:47.842453712 +0000 UTC m=+166.732814805 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8nrfn" (UID: "cf2cdf59-237b-432e-9e41-c37078755275") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:47 crc kubenswrapper[4731]: I1129 07:08:47.358503 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-htrhs" Nov 29 07:08:47 crc kubenswrapper[4731]: I1129 07:08:47.358918 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-htrhs" Nov 29 07:08:47 crc kubenswrapper[4731]: I1129 07:08:47.367121 4731 patch_prober.go:28] interesting pod/console-f9d7485db-htrhs container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.6:8443/health\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Nov 29 07:08:47 crc kubenswrapper[4731]: I1129 07:08:47.367209 4731 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-htrhs" podUID="55949699-24bb-4705-8bf0-db1dd651d387" containerName="console" probeResult="failure" output="Get \"https://10.217.0.6:8443/health\": dial tcp 10.217.0.6:8443: connect: connection refused" Nov 29 07:08:47 crc kubenswrapper[4731]: I1129 07:08:47.443345 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:08:47 crc kubenswrapper[4731]: E1129 07:08:47.445448 4731 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:08:47.945417213 +0000 UTC m=+166.835778486 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:47 crc kubenswrapper[4731]: I1129 07:08:47.449371 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c9bpb"] Nov 29 07:08:47 crc kubenswrapper[4731]: I1129 07:08:47.499707 4731 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Nov 29 07:08:47 crc kubenswrapper[4731]: I1129 07:08:47.545236 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8nrfn\" (UID: \"cf2cdf59-237b-432e-9e41-c37078755275\") " pod="openshift-image-registry/image-registry-697d97f7c8-8nrfn" Nov 29 07:08:47 crc kubenswrapper[4731]: E1129 07:08:47.545649 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-11-29 07:08:48.045632628 +0000 UTC m=+166.935993731 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8nrfn" (UID: "cf2cdf59-237b-432e-9e41-c37078755275") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:47 crc kubenswrapper[4731]: I1129 07:08:47.618954 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-stkcw"] Nov 29 07:08:47 crc kubenswrapper[4731]: W1129 07:08:47.644788 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7cde9a9c_1d79_4400_8830_69f304229886.slice/crio-b4b478c5197cc2b20dd6f24a356c906b6ef2a2a9cc77f00c57b7cb3174d923ce WatchSource:0}: Error finding container b4b478c5197cc2b20dd6f24a356c906b6ef2a2a9cc77f00c57b7cb3174d923ce: Status 404 returned error can't find the container with id b4b478c5197cc2b20dd6f24a356c906b6ef2a2a9cc77f00c57b7cb3174d923ce Nov 29 07:08:47 crc kubenswrapper[4731]: I1129 07:08:47.647097 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:08:47 crc kubenswrapper[4731]: E1129 07:08:47.647906 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-29 07:08:48.147865929 +0000 UTC m=+167.038227032 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:47 crc kubenswrapper[4731]: I1129 07:08:47.749762 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8nrfn\" (UID: \"cf2cdf59-237b-432e-9e41-c37078755275\") " pod="openshift-image-registry/image-registry-697d97f7c8-8nrfn" Nov 29 07:08:47 crc kubenswrapper[4731]: E1129 07:08:47.750833 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:08:48.25081628 +0000 UTC m=+167.141177383 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8nrfn" (UID: "cf2cdf59-237b-432e-9e41-c37078755275") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:47 crc kubenswrapper[4731]: I1129 07:08:47.844423 4731 patch_prober.go:28] interesting pod/router-default-5444994796-2qd7z container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 29 07:08:47 crc kubenswrapper[4731]: [-]has-synced failed: reason withheld Nov 29 07:08:47 crc kubenswrapper[4731]: [+]process-running ok Nov 29 07:08:47 crc kubenswrapper[4731]: healthz check failed Nov 29 07:08:47 crc kubenswrapper[4731]: I1129 07:08:47.844870 4731 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2qd7z" podUID="328a2fcf-7e85-49ad-849c-f32818b5cd87" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 29 07:08:47 crc kubenswrapper[4731]: I1129 07:08:47.851961 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:08:47 crc kubenswrapper[4731]: E1129 07:08:47.852528 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-29 07:08:48.352504376 +0000 UTC m=+167.242865479 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:47 crc kubenswrapper[4731]: I1129 07:08:47.954188 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8nrfn\" (UID: \"cf2cdf59-237b-432e-9e41-c37078755275\") " pod="openshift-image-registry/image-registry-697d97f7c8-8nrfn" Nov 29 07:08:47 crc kubenswrapper[4731]: E1129 07:08:47.954649 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:08:48.454631544 +0000 UTC m=+167.344992647 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8nrfn" (UID: "cf2cdf59-237b-432e-9e41-c37078755275") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:08:47 crc kubenswrapper[4731]: I1129 07:08:47.985321 4731 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-11-29T07:08:47.499751821Z","Handler":null,"Name":""} Nov 29 07:08:47 crc kubenswrapper[4731]: I1129 07:08:47.988319 4731 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Nov 29 07:08:47 crc kubenswrapper[4731]: I1129 07:08:47.988382 4731 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Nov 29 07:08:48 crc kubenswrapper[4731]: I1129 07:08:48.055531 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:08:48 crc kubenswrapper[4731]: I1129 07:08:48.060345 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: 
"8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 29 07:08:48 crc kubenswrapper[4731]: I1129 07:08:48.145719 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-kjmcw"] Nov 29 07:08:48 crc kubenswrapper[4731]: I1129 07:08:48.146942 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kjmcw" Nov 29 07:08:48 crc kubenswrapper[4731]: I1129 07:08:48.149508 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 29 07:08:48 crc kubenswrapper[4731]: I1129 07:08:48.157889 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8nrfn\" (UID: \"cf2cdf59-237b-432e-9e41-c37078755275\") " pod="openshift-image-registry/image-registry-697d97f7c8-8nrfn" Nov 29 07:08:48 crc kubenswrapper[4731]: I1129 07:08:48.162783 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kjmcw"] Nov 29 07:08:48 crc kubenswrapper[4731]: I1129 07:08:48.173044 4731 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 29 07:08:48 crc kubenswrapper[4731]: I1129 07:08:48.173097 4731 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8nrfn\" (UID: \"cf2cdf59-237b-432e-9e41-c37078755275\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-8nrfn" Nov 29 07:08:48 crc kubenswrapper[4731]: I1129 07:08:48.198518 4731 generic.go:334] "Generic (PLEG): container finished" podID="7cde9a9c-1d79-4400-8830-69f304229886" containerID="653ae56570d00c98e604b38f4bdb404043ed72c23b370a470128b5c6da68617a" exitCode=0 Nov 29 07:08:48 crc kubenswrapper[4731]: I1129 07:08:48.198609 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-stkcw" event={"ID":"7cde9a9c-1d79-4400-8830-69f304229886","Type":"ContainerDied","Data":"653ae56570d00c98e604b38f4bdb404043ed72c23b370a470128b5c6da68617a"} Nov 29 07:08:48 crc kubenswrapper[4731]: I1129 07:08:48.198655 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-stkcw" event={"ID":"7cde9a9c-1d79-4400-8830-69f304229886","Type":"ContainerStarted","Data":"b4b478c5197cc2b20dd6f24a356c906b6ef2a2a9cc77f00c57b7cb3174d923ce"} Nov 29 07:08:48 crc kubenswrapper[4731]: I1129 07:08:48.200816 4731 generic.go:334] "Generic (PLEG): container finished" podID="84b54257-ab5f-4f89-8ff2-5f725c4b8662" containerID="88921294b457c4f2b476eaa08fdb2f7d2470e964e4fc35409348b2131de46ca7" exitCode=0 Nov 29 07:08:48 crc kubenswrapper[4731]: I1129 07:08:48.200964 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c9bpb" 
event={"ID":"84b54257-ab5f-4f89-8ff2-5f725c4b8662","Type":"ContainerDied","Data":"88921294b457c4f2b476eaa08fdb2f7d2470e964e4fc35409348b2131de46ca7"} Nov 29 07:08:48 crc kubenswrapper[4731]: I1129 07:08:48.201001 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c9bpb" event={"ID":"84b54257-ab5f-4f89-8ff2-5f725c4b8662","Type":"ContainerStarted","Data":"417e308722bf9d56f5e1722706af81db5ebab876736ba8407c3e6286681f05fd"} Nov 29 07:08:48 crc kubenswrapper[4731]: I1129 07:08:48.201677 4731 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 29 07:08:48 crc kubenswrapper[4731]: I1129 07:08:48.206248 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8nrfn\" (UID: \"cf2cdf59-237b-432e-9e41-c37078755275\") " pod="openshift-image-registry/image-registry-697d97f7c8-8nrfn" Nov 29 07:08:48 crc kubenswrapper[4731]: I1129 07:08:48.206432 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-nn8lp" event={"ID":"b9c89890-1965-4fd0-875b-aed6485d9075","Type":"ContainerStarted","Data":"080de9dc2c70ab0f2c47fe10c8f5732ea338964372be8d3bdafa637e41d40401"} Nov 29 07:08:48 crc kubenswrapper[4731]: I1129 07:08:48.209540 4731 generic.go:334] "Generic (PLEG): container finished" podID="8519b0da-9e0e-4c34-98b0-cbcb4030af39" containerID="bde0bb088bbcab741e708c7e351d9a810a98d6100252c0a9621511d8bd02e211" exitCode=0 Nov 29 07:08:48 crc kubenswrapper[4731]: I1129 07:08:48.209637 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n9x6g" event={"ID":"8519b0da-9e0e-4c34-98b0-cbcb4030af39","Type":"ContainerDied","Data":"bde0bb088bbcab741e708c7e351d9a810a98d6100252c0a9621511d8bd02e211"} Nov 29 07:08:48 crc 
kubenswrapper[4731]: I1129 07:08:48.211555 4731 generic.go:334] "Generic (PLEG): container finished" podID="90d637c3-be0e-49b6-ac5a-5cb721948345" containerID="394cf42f7fa46b2bc8ba4b4ece68dceae5a16008636f561989345d3f7883bafa" exitCode=0 Nov 29 07:08:48 crc kubenswrapper[4731]: I1129 07:08:48.214373 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kp4gj" event={"ID":"90d637c3-be0e-49b6-ac5a-5cb721948345","Type":"ContainerDied","Data":"394cf42f7fa46b2bc8ba4b4ece68dceae5a16008636f561989345d3f7883bafa"} Nov 29 07:08:48 crc kubenswrapper[4731]: I1129 07:08:48.260230 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/041c9fb8-1657-4070-8649-0297bbba2df1-utilities\") pod \"redhat-marketplace-kjmcw\" (UID: \"041c9fb8-1657-4070-8649-0297bbba2df1\") " pod="openshift-marketplace/redhat-marketplace-kjmcw" Nov 29 07:08:48 crc kubenswrapper[4731]: I1129 07:08:48.260338 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/041c9fb8-1657-4070-8649-0297bbba2df1-catalog-content\") pod \"redhat-marketplace-kjmcw\" (UID: \"041c9fb8-1657-4070-8649-0297bbba2df1\") " pod="openshift-marketplace/redhat-marketplace-kjmcw" Nov 29 07:08:48 crc kubenswrapper[4731]: I1129 07:08:48.260387 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9ptn\" (UniqueName: \"kubernetes.io/projected/041c9fb8-1657-4070-8649-0297bbba2df1-kube-api-access-h9ptn\") pod \"redhat-marketplace-kjmcw\" (UID: \"041c9fb8-1657-4070-8649-0297bbba2df1\") " pod="openshift-marketplace/redhat-marketplace-kjmcw" Nov 29 07:08:48 crc kubenswrapper[4731]: I1129 07:08:48.296869 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-8nrfn" Nov 29 07:08:48 crc kubenswrapper[4731]: I1129 07:08:48.332244 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-nn8lp" podStartSLOduration=13.332220089 podStartE2EDuration="13.332220089s" podCreationTimestamp="2025-11-29 07:08:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:08:48.330884062 +0000 UTC m=+167.221245155" watchObservedRunningTime="2025-11-29 07:08:48.332220089 +0000 UTC m=+167.222581192" Nov 29 07:08:48 crc kubenswrapper[4731]: I1129 07:08:48.361991 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/041c9fb8-1657-4070-8649-0297bbba2df1-utilities\") pod \"redhat-marketplace-kjmcw\" (UID: \"041c9fb8-1657-4070-8649-0297bbba2df1\") " pod="openshift-marketplace/redhat-marketplace-kjmcw" Nov 29 07:08:48 crc kubenswrapper[4731]: I1129 07:08:48.362111 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/041c9fb8-1657-4070-8649-0297bbba2df1-catalog-content\") pod \"redhat-marketplace-kjmcw\" (UID: \"041c9fb8-1657-4070-8649-0297bbba2df1\") " pod="openshift-marketplace/redhat-marketplace-kjmcw" Nov 29 07:08:48 crc kubenswrapper[4731]: I1129 07:08:48.362149 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9ptn\" (UniqueName: \"kubernetes.io/projected/041c9fb8-1657-4070-8649-0297bbba2df1-kube-api-access-h9ptn\") pod \"redhat-marketplace-kjmcw\" (UID: \"041c9fb8-1657-4070-8649-0297bbba2df1\") " pod="openshift-marketplace/redhat-marketplace-kjmcw" Nov 29 07:08:48 crc kubenswrapper[4731]: I1129 07:08:48.365200 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/041c9fb8-1657-4070-8649-0297bbba2df1-utilities\") pod \"redhat-marketplace-kjmcw\" (UID: \"041c9fb8-1657-4070-8649-0297bbba2df1\") " pod="openshift-marketplace/redhat-marketplace-kjmcw" Nov 29 07:08:48 crc kubenswrapper[4731]: I1129 07:08:48.366664 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/041c9fb8-1657-4070-8649-0297bbba2df1-catalog-content\") pod \"redhat-marketplace-kjmcw\" (UID: \"041c9fb8-1657-4070-8649-0297bbba2df1\") " pod="openshift-marketplace/redhat-marketplace-kjmcw" Nov 29 07:08:48 crc kubenswrapper[4731]: I1129 07:08:48.402726 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9ptn\" (UniqueName: \"kubernetes.io/projected/041c9fb8-1657-4070-8649-0297bbba2df1-kube-api-access-h9ptn\") pod \"redhat-marketplace-kjmcw\" (UID: \"041c9fb8-1657-4070-8649-0297bbba2df1\") " pod="openshift-marketplace/redhat-marketplace-kjmcw" Nov 29 07:08:48 crc kubenswrapper[4731]: I1129 07:08:48.469822 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kjmcw" Nov 29 07:08:48 crc kubenswrapper[4731]: I1129 07:08:48.563059 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-n7tj2"] Nov 29 07:08:48 crc kubenswrapper[4731]: I1129 07:08:48.564218 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n7tj2" Nov 29 07:08:48 crc kubenswrapper[4731]: I1129 07:08:48.567399 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406660-2pc6s" Nov 29 07:08:48 crc kubenswrapper[4731]: I1129 07:08:48.583870 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n7tj2"] Nov 29 07:08:48 crc kubenswrapper[4731]: I1129 07:08:48.584295 4731 patch_prober.go:28] interesting pod/downloads-7954f5f757-c6kf9 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 29 07:08:48 crc kubenswrapper[4731]: I1129 07:08:48.584323 4731 patch_prober.go:28] interesting pod/downloads-7954f5f757-c6kf9 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 29 07:08:48 crc kubenswrapper[4731]: I1129 07:08:48.584378 4731 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-c6kf9" podUID="4059535c-148b-4694-8c6f-ee8aae8ddc18" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 29 07:08:48 crc kubenswrapper[4731]: I1129 07:08:48.584378 4731 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-c6kf9" podUID="4059535c-148b-4694-8c6f-ee8aae8ddc18" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 29 07:08:48 crc kubenswrapper[4731]: I1129 07:08:48.604335 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-m7s4c" Nov 29 07:08:48 crc kubenswrapper[4731]: I1129 07:08:48.604902 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-apiserver/apiserver-76f77b778f-m7s4c" Nov 29 07:08:48 crc kubenswrapper[4731]: I1129 07:08:48.627018 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-m7s4c" Nov 29 07:08:48 crc kubenswrapper[4731]: I1129 07:08:48.669325 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c47b7935-c3e7-4f98-b361-87ee3b481c3d-config-volume\") pod \"c47b7935-c3e7-4f98-b361-87ee3b481c3d\" (UID: \"c47b7935-c3e7-4f98-b361-87ee3b481c3d\") " Nov 29 07:08:48 crc kubenswrapper[4731]: I1129 07:08:48.669380 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-968qq\" (UniqueName: \"kubernetes.io/projected/c47b7935-c3e7-4f98-b361-87ee3b481c3d-kube-api-access-968qq\") pod \"c47b7935-c3e7-4f98-b361-87ee3b481c3d\" (UID: \"c47b7935-c3e7-4f98-b361-87ee3b481c3d\") " Nov 29 07:08:48 crc kubenswrapper[4731]: I1129 07:08:48.669486 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c47b7935-c3e7-4f98-b361-87ee3b481c3d-secret-volume\") pod \"c47b7935-c3e7-4f98-b361-87ee3b481c3d\" (UID: \"c47b7935-c3e7-4f98-b361-87ee3b481c3d\") " Nov 29 07:08:48 crc kubenswrapper[4731]: I1129 07:08:48.669730 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddd91825-ce67-48e7-8c8c-fcd73c025703-catalog-content\") pod \"redhat-marketplace-n7tj2\" (UID: \"ddd91825-ce67-48e7-8c8c-fcd73c025703\") " pod="openshift-marketplace/redhat-marketplace-n7tj2" Nov 29 07:08:48 crc kubenswrapper[4731]: I1129 07:08:48.669772 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddd91825-ce67-48e7-8c8c-fcd73c025703-utilities\") pod 
\"redhat-marketplace-n7tj2\" (UID: \"ddd91825-ce67-48e7-8c8c-fcd73c025703\") " pod="openshift-marketplace/redhat-marketplace-n7tj2" Nov 29 07:08:48 crc kubenswrapper[4731]: I1129 07:08:48.669868 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjv7h\" (UniqueName: \"kubernetes.io/projected/ddd91825-ce67-48e7-8c8c-fcd73c025703-kube-api-access-sjv7h\") pod \"redhat-marketplace-n7tj2\" (UID: \"ddd91825-ce67-48e7-8c8c-fcd73c025703\") " pod="openshift-marketplace/redhat-marketplace-n7tj2" Nov 29 07:08:48 crc kubenswrapper[4731]: I1129 07:08:48.671502 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c47b7935-c3e7-4f98-b361-87ee3b481c3d-config-volume" (OuterVolumeSpecName: "config-volume") pod "c47b7935-c3e7-4f98-b361-87ee3b481c3d" (UID: "c47b7935-c3e7-4f98-b361-87ee3b481c3d"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:08:48 crc kubenswrapper[4731]: I1129 07:08:48.690493 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c47b7935-c3e7-4f98-b361-87ee3b481c3d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c47b7935-c3e7-4f98-b361-87ee3b481c3d" (UID: "c47b7935-c3e7-4f98-b361-87ee3b481c3d"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:08:48 crc kubenswrapper[4731]: I1129 07:08:48.692467 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c47b7935-c3e7-4f98-b361-87ee3b481c3d-kube-api-access-968qq" (OuterVolumeSpecName: "kube-api-access-968qq") pod "c47b7935-c3e7-4f98-b361-87ee3b481c3d" (UID: "c47b7935-c3e7-4f98-b361-87ee3b481c3d"). InnerVolumeSpecName "kube-api-access-968qq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:08:48 crc kubenswrapper[4731]: I1129 07:08:48.721390 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-8nrfn"] Nov 29 07:08:48 crc kubenswrapper[4731]: W1129 07:08:48.729328 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf2cdf59_237b_432e_9e41_c37078755275.slice/crio-8ed3b324caea16a1a377db600bac8964c59e33d3619e72944d2329524a2e2e6a WatchSource:0}: Error finding container 8ed3b324caea16a1a377db600bac8964c59e33d3619e72944d2329524a2e2e6a: Status 404 returned error can't find the container with id 8ed3b324caea16a1a377db600bac8964c59e33d3619e72944d2329524a2e2e6a Nov 29 07:08:48 crc kubenswrapper[4731]: I1129 07:08:48.771002 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjv7h\" (UniqueName: \"kubernetes.io/projected/ddd91825-ce67-48e7-8c8c-fcd73c025703-kube-api-access-sjv7h\") pod \"redhat-marketplace-n7tj2\" (UID: \"ddd91825-ce67-48e7-8c8c-fcd73c025703\") " pod="openshift-marketplace/redhat-marketplace-n7tj2" Nov 29 07:08:48 crc kubenswrapper[4731]: I1129 07:08:48.771133 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddd91825-ce67-48e7-8c8c-fcd73c025703-catalog-content\") pod \"redhat-marketplace-n7tj2\" (UID: \"ddd91825-ce67-48e7-8c8c-fcd73c025703\") " pod="openshift-marketplace/redhat-marketplace-n7tj2" Nov 29 07:08:48 crc kubenswrapper[4731]: I1129 07:08:48.771153 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddd91825-ce67-48e7-8c8c-fcd73c025703-utilities\") pod \"redhat-marketplace-n7tj2\" (UID: \"ddd91825-ce67-48e7-8c8c-fcd73c025703\") " pod="openshift-marketplace/redhat-marketplace-n7tj2" Nov 29 07:08:48 crc 
kubenswrapper[4731]: I1129 07:08:48.771210 4731 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c47b7935-c3e7-4f98-b361-87ee3b481c3d-config-volume\") on node \"crc\" DevicePath \"\"" Nov 29 07:08:48 crc kubenswrapper[4731]: I1129 07:08:48.771223 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-968qq\" (UniqueName: \"kubernetes.io/projected/c47b7935-c3e7-4f98-b361-87ee3b481c3d-kube-api-access-968qq\") on node \"crc\" DevicePath \"\"" Nov 29 07:08:48 crc kubenswrapper[4731]: I1129 07:08:48.771233 4731 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c47b7935-c3e7-4f98-b361-87ee3b481c3d-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 29 07:08:48 crc kubenswrapper[4731]: I1129 07:08:48.771897 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddd91825-ce67-48e7-8c8c-fcd73c025703-utilities\") pod \"redhat-marketplace-n7tj2\" (UID: \"ddd91825-ce67-48e7-8c8c-fcd73c025703\") " pod="openshift-marketplace/redhat-marketplace-n7tj2" Nov 29 07:08:48 crc kubenswrapper[4731]: I1129 07:08:48.772235 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddd91825-ce67-48e7-8c8c-fcd73c025703-catalog-content\") pod \"redhat-marketplace-n7tj2\" (UID: \"ddd91825-ce67-48e7-8c8c-fcd73c025703\") " pod="openshift-marketplace/redhat-marketplace-n7tj2" Nov 29 07:08:48 crc kubenswrapper[4731]: I1129 07:08:48.792836 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjv7h\" (UniqueName: \"kubernetes.io/projected/ddd91825-ce67-48e7-8c8c-fcd73c025703-kube-api-access-sjv7h\") pod \"redhat-marketplace-n7tj2\" (UID: \"ddd91825-ce67-48e7-8c8c-fcd73c025703\") " pod="openshift-marketplace/redhat-marketplace-n7tj2" Nov 29 07:08:48 crc kubenswrapper[4731]: I1129 
07:08:48.819076 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kjmcw"] Nov 29 07:08:48 crc kubenswrapper[4731]: W1129 07:08:48.829164 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod041c9fb8_1657_4070_8649_0297bbba2df1.slice/crio-d0ea6fe263abd203e8b0282069a059653877ba418ced82adb4d226504815da2a WatchSource:0}: Error finding container d0ea6fe263abd203e8b0282069a059653877ba418ced82adb4d226504815da2a: Status 404 returned error can't find the container with id d0ea6fe263abd203e8b0282069a059653877ba418ced82adb4d226504815da2a Nov 29 07:08:48 crc kubenswrapper[4731]: I1129 07:08:48.837828 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-2qgzh" Nov 29 07:08:48 crc kubenswrapper[4731]: I1129 07:08:48.838652 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-2qd7z" Nov 29 07:08:48 crc kubenswrapper[4731]: I1129 07:08:48.843688 4731 patch_prober.go:28] interesting pod/router-default-5444994796-2qd7z container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 29 07:08:48 crc kubenswrapper[4731]: [-]has-synced failed: reason withheld Nov 29 07:08:48 crc kubenswrapper[4731]: [+]process-running ok Nov 29 07:08:48 crc kubenswrapper[4731]: healthz check failed Nov 29 07:08:48 crc kubenswrapper[4731]: I1129 07:08:48.843732 4731 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2qd7z" podUID="328a2fcf-7e85-49ad-849c-f32818b5cd87" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 29 07:08:48 crc kubenswrapper[4731]: I1129 07:08:48.937510 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n7tj2" Nov 29 07:08:49 crc kubenswrapper[4731]: I1129 07:08:49.153621 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hv85m"] Nov 29 07:08:49 crc kubenswrapper[4731]: E1129 07:08:49.153923 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c47b7935-c3e7-4f98-b361-87ee3b481c3d" containerName="collect-profiles" Nov 29 07:08:49 crc kubenswrapper[4731]: I1129 07:08:49.153937 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="c47b7935-c3e7-4f98-b361-87ee3b481c3d" containerName="collect-profiles" Nov 29 07:08:49 crc kubenswrapper[4731]: I1129 07:08:49.154067 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="c47b7935-c3e7-4f98-b361-87ee3b481c3d" containerName="collect-profiles" Nov 29 07:08:49 crc kubenswrapper[4731]: I1129 07:08:49.155020 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hv85m" Nov 29 07:08:49 crc kubenswrapper[4731]: I1129 07:08:49.160609 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 29 07:08:49 crc kubenswrapper[4731]: I1129 07:08:49.170286 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hv85m"] Nov 29 07:08:49 crc kubenswrapper[4731]: I1129 07:08:49.225172 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-8nrfn" event={"ID":"cf2cdf59-237b-432e-9e41-c37078755275","Type":"ContainerStarted","Data":"ad68e1aa243fe72d5a2cb36df2b62a0914e3c38c70932d242476a9e6e895cc47"} Nov 29 07:08:49 crc kubenswrapper[4731]: I1129 07:08:49.225232 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-8nrfn" 
event={"ID":"cf2cdf59-237b-432e-9e41-c37078755275","Type":"ContainerStarted","Data":"8ed3b324caea16a1a377db600bac8964c59e33d3619e72944d2329524a2e2e6a"} Nov 29 07:08:49 crc kubenswrapper[4731]: I1129 07:08:49.225395 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-8nrfn" Nov 29 07:08:49 crc kubenswrapper[4731]: I1129 07:08:49.228890 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406660-2pc6s" Nov 29 07:08:49 crc kubenswrapper[4731]: I1129 07:08:49.229080 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406660-2pc6s" event={"ID":"c47b7935-c3e7-4f98-b361-87ee3b481c3d","Type":"ContainerDied","Data":"20d0d52e77404128409c3d217c1a6f884068e85d83ed36a887e39de2dba5188e"} Nov 29 07:08:49 crc kubenswrapper[4731]: I1129 07:08:49.229113 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="20d0d52e77404128409c3d217c1a6f884068e85d83ed36a887e39de2dba5188e" Nov 29 07:08:49 crc kubenswrapper[4731]: I1129 07:08:49.234805 4731 generic.go:334] "Generic (PLEG): container finished" podID="041c9fb8-1657-4070-8649-0297bbba2df1" containerID="b6befeb92d8bfe154c60218e52ce36ff04238cee434b33e9c3ad3d5875d9d87c" exitCode=0 Nov 29 07:08:49 crc kubenswrapper[4731]: I1129 07:08:49.236373 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kjmcw" event={"ID":"041c9fb8-1657-4070-8649-0297bbba2df1","Type":"ContainerDied","Data":"b6befeb92d8bfe154c60218e52ce36ff04238cee434b33e9c3ad3d5875d9d87c"} Nov 29 07:08:49 crc kubenswrapper[4731]: I1129 07:08:49.236456 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kjmcw" 
event={"ID":"041c9fb8-1657-4070-8649-0297bbba2df1","Type":"ContainerStarted","Data":"d0ea6fe263abd203e8b0282069a059653877ba418ced82adb4d226504815da2a"} Nov 29 07:08:49 crc kubenswrapper[4731]: I1129 07:08:49.248321 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-m7s4c" Nov 29 07:08:49 crc kubenswrapper[4731]: I1129 07:08:49.256521 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-8nrfn" podStartSLOduration=138.256487961 podStartE2EDuration="2m18.256487961s" podCreationTimestamp="2025-11-29 07:06:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:08:49.248934814 +0000 UTC m=+168.139295927" watchObservedRunningTime="2025-11-29 07:08:49.256487961 +0000 UTC m=+168.146849074" Nov 29 07:08:49 crc kubenswrapper[4731]: I1129 07:08:49.282865 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55d2p\" (UniqueName: \"kubernetes.io/projected/d246bdda-5a16-4924-a12a-b29095474226-kube-api-access-55d2p\") pod \"redhat-operators-hv85m\" (UID: \"d246bdda-5a16-4924-a12a-b29095474226\") " pod="openshift-marketplace/redhat-operators-hv85m" Nov 29 07:08:49 crc kubenswrapper[4731]: I1129 07:08:49.282951 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d246bdda-5a16-4924-a12a-b29095474226-catalog-content\") pod \"redhat-operators-hv85m\" (UID: \"d246bdda-5a16-4924-a12a-b29095474226\") " pod="openshift-marketplace/redhat-operators-hv85m" Nov 29 07:08:49 crc kubenswrapper[4731]: I1129 07:08:49.282976 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/d246bdda-5a16-4924-a12a-b29095474226-utilities\") pod \"redhat-operators-hv85m\" (UID: \"d246bdda-5a16-4924-a12a-b29095474226\") " pod="openshift-marketplace/redhat-operators-hv85m" Nov 29 07:08:49 crc kubenswrapper[4731]: I1129 07:08:49.343896 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n7tj2"] Nov 29 07:08:49 crc kubenswrapper[4731]: I1129 07:08:49.391669 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55d2p\" (UniqueName: \"kubernetes.io/projected/d246bdda-5a16-4924-a12a-b29095474226-kube-api-access-55d2p\") pod \"redhat-operators-hv85m\" (UID: \"d246bdda-5a16-4924-a12a-b29095474226\") " pod="openshift-marketplace/redhat-operators-hv85m" Nov 29 07:08:49 crc kubenswrapper[4731]: I1129 07:08:49.391787 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d246bdda-5a16-4924-a12a-b29095474226-catalog-content\") pod \"redhat-operators-hv85m\" (UID: \"d246bdda-5a16-4924-a12a-b29095474226\") " pod="openshift-marketplace/redhat-operators-hv85m" Nov 29 07:08:49 crc kubenswrapper[4731]: I1129 07:08:49.391839 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d246bdda-5a16-4924-a12a-b29095474226-utilities\") pod \"redhat-operators-hv85m\" (UID: \"d246bdda-5a16-4924-a12a-b29095474226\") " pod="openshift-marketplace/redhat-operators-hv85m" Nov 29 07:08:49 crc kubenswrapper[4731]: I1129 07:08:49.396129 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d246bdda-5a16-4924-a12a-b29095474226-catalog-content\") pod \"redhat-operators-hv85m\" (UID: \"d246bdda-5a16-4924-a12a-b29095474226\") " pod="openshift-marketplace/redhat-operators-hv85m" Nov 29 07:08:49 crc kubenswrapper[4731]: I1129 07:08:49.398258 
4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d246bdda-5a16-4924-a12a-b29095474226-utilities\") pod \"redhat-operators-hv85m\" (UID: \"d246bdda-5a16-4924-a12a-b29095474226\") " pod="openshift-marketplace/redhat-operators-hv85m" Nov 29 07:08:49 crc kubenswrapper[4731]: I1129 07:08:49.438778 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55d2p\" (UniqueName: \"kubernetes.io/projected/d246bdda-5a16-4924-a12a-b29095474226-kube-api-access-55d2p\") pod \"redhat-operators-hv85m\" (UID: \"d246bdda-5a16-4924-a12a-b29095474226\") " pod="openshift-marketplace/redhat-operators-hv85m" Nov 29 07:08:49 crc kubenswrapper[4731]: I1129 07:08:49.501065 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hv85m" Nov 29 07:08:49 crc kubenswrapper[4731]: I1129 07:08:49.560417 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gv68n"] Nov 29 07:08:49 crc kubenswrapper[4731]: I1129 07:08:49.562387 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gv68n" Nov 29 07:08:49 crc kubenswrapper[4731]: I1129 07:08:49.591261 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gv68n"] Nov 29 07:08:49 crc kubenswrapper[4731]: I1129 07:08:49.699996 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee11152f-267c-4a04-bd4b-84eec0eff00e-utilities\") pod \"redhat-operators-gv68n\" (UID: \"ee11152f-267c-4a04-bd4b-84eec0eff00e\") " pod="openshift-marketplace/redhat-operators-gv68n" Nov 29 07:08:49 crc kubenswrapper[4731]: I1129 07:08:49.700042 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzkzg\" (UniqueName: \"kubernetes.io/projected/ee11152f-267c-4a04-bd4b-84eec0eff00e-kube-api-access-tzkzg\") pod \"redhat-operators-gv68n\" (UID: \"ee11152f-267c-4a04-bd4b-84eec0eff00e\") " pod="openshift-marketplace/redhat-operators-gv68n" Nov 29 07:08:49 crc kubenswrapper[4731]: I1129 07:08:49.700097 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee11152f-267c-4a04-bd4b-84eec0eff00e-catalog-content\") pod \"redhat-operators-gv68n\" (UID: \"ee11152f-267c-4a04-bd4b-84eec0eff00e\") " pod="openshift-marketplace/redhat-operators-gv68n" Nov 29 07:08:49 crc kubenswrapper[4731]: I1129 07:08:49.801727 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee11152f-267c-4a04-bd4b-84eec0eff00e-utilities\") pod \"redhat-operators-gv68n\" (UID: \"ee11152f-267c-4a04-bd4b-84eec0eff00e\") " pod="openshift-marketplace/redhat-operators-gv68n" Nov 29 07:08:49 crc kubenswrapper[4731]: I1129 07:08:49.801797 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-tzkzg\" (UniqueName: \"kubernetes.io/projected/ee11152f-267c-4a04-bd4b-84eec0eff00e-kube-api-access-tzkzg\") pod \"redhat-operators-gv68n\" (UID: \"ee11152f-267c-4a04-bd4b-84eec0eff00e\") " pod="openshift-marketplace/redhat-operators-gv68n" Nov 29 07:08:49 crc kubenswrapper[4731]: I1129 07:08:49.801880 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee11152f-267c-4a04-bd4b-84eec0eff00e-catalog-content\") pod \"redhat-operators-gv68n\" (UID: \"ee11152f-267c-4a04-bd4b-84eec0eff00e\") " pod="openshift-marketplace/redhat-operators-gv68n" Nov 29 07:08:49 crc kubenswrapper[4731]: I1129 07:08:49.803094 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee11152f-267c-4a04-bd4b-84eec0eff00e-catalog-content\") pod \"redhat-operators-gv68n\" (UID: \"ee11152f-267c-4a04-bd4b-84eec0eff00e\") " pod="openshift-marketplace/redhat-operators-gv68n" Nov 29 07:08:49 crc kubenswrapper[4731]: I1129 07:08:49.803195 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee11152f-267c-4a04-bd4b-84eec0eff00e-utilities\") pod \"redhat-operators-gv68n\" (UID: \"ee11152f-267c-4a04-bd4b-84eec0eff00e\") " pod="openshift-marketplace/redhat-operators-gv68n" Nov 29 07:08:49 crc kubenswrapper[4731]: I1129 07:08:49.829612 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Nov 29 07:08:49 crc kubenswrapper[4731]: I1129 07:08:49.838486 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzkzg\" (UniqueName: \"kubernetes.io/projected/ee11152f-267c-4a04-bd4b-84eec0eff00e-kube-api-access-tzkzg\") pod \"redhat-operators-gv68n\" (UID: \"ee11152f-267c-4a04-bd4b-84eec0eff00e\") " 
pod="openshift-marketplace/redhat-operators-gv68n" Nov 29 07:08:49 crc kubenswrapper[4731]: I1129 07:08:49.851445 4731 patch_prober.go:28] interesting pod/router-default-5444994796-2qd7z container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 29 07:08:49 crc kubenswrapper[4731]: [-]has-synced failed: reason withheld Nov 29 07:08:49 crc kubenswrapper[4731]: [+]process-running ok Nov 29 07:08:49 crc kubenswrapper[4731]: healthz check failed Nov 29 07:08:49 crc kubenswrapper[4731]: I1129 07:08:49.851837 4731 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2qd7z" podUID="328a2fcf-7e85-49ad-849c-f32818b5cd87" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 29 07:08:49 crc kubenswrapper[4731]: I1129 07:08:49.990135 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hv85m"] Nov 29 07:08:50 crc kubenswrapper[4731]: W1129 07:08:50.007822 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd246bdda_5a16_4924_a12a_b29095474226.slice/crio-37aa836b64340e7d882b5ae2904650a27c2f8ce45ebd2517781e6a4d84df2ee8 WatchSource:0}: Error finding container 37aa836b64340e7d882b5ae2904650a27c2f8ce45ebd2517781e6a4d84df2ee8: Status 404 returned error can't find the container with id 37aa836b64340e7d882b5ae2904650a27c2f8ce45ebd2517781e6a4d84df2ee8 Nov 29 07:08:50 crc kubenswrapper[4731]: I1129 07:08:50.055749 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gv68n" Nov 29 07:08:50 crc kubenswrapper[4731]: I1129 07:08:50.255158 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hv85m" event={"ID":"d246bdda-5a16-4924-a12a-b29095474226","Type":"ContainerStarted","Data":"37aa836b64340e7d882b5ae2904650a27c2f8ce45ebd2517781e6a4d84df2ee8"} Nov 29 07:08:50 crc kubenswrapper[4731]: I1129 07:08:50.257818 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n7tj2" event={"ID":"ddd91825-ce67-48e7-8c8c-fcd73c025703","Type":"ContainerStarted","Data":"05203ca9aa52f81e291f062a1c7e8a15c2c37ee4d740d73bbf86feffcdd278c8"} Nov 29 07:08:50 crc kubenswrapper[4731]: I1129 07:08:50.397206 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gv68n"] Nov 29 07:08:50 crc kubenswrapper[4731]: I1129 07:08:50.856799 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-2qd7z" Nov 29 07:08:50 crc kubenswrapper[4731]: I1129 07:08:50.866947 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-2qd7z" Nov 29 07:08:51 crc kubenswrapper[4731]: I1129 07:08:51.274230 4731 generic.go:334] "Generic (PLEG): container finished" podID="d246bdda-5a16-4924-a12a-b29095474226" containerID="cf0ecb7c2a9237b7793c8279bf5736aba5902ecc0fdcedea0f632ef211c09820" exitCode=0 Nov 29 07:08:51 crc kubenswrapper[4731]: I1129 07:08:51.275081 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hv85m" event={"ID":"d246bdda-5a16-4924-a12a-b29095474226","Type":"ContainerDied","Data":"cf0ecb7c2a9237b7793c8279bf5736aba5902ecc0fdcedea0f632ef211c09820"} Nov 29 07:08:51 crc kubenswrapper[4731]: I1129 07:08:51.296132 4731 generic.go:334] "Generic (PLEG): container finished" 
podID="ddd91825-ce67-48e7-8c8c-fcd73c025703" containerID="40e4d1e244b43ff6937566bd60e9bb97c1b549c2f1ccb5c1cf970a0b167bfc64" exitCode=0 Nov 29 07:08:51 crc kubenswrapper[4731]: I1129 07:08:51.296216 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n7tj2" event={"ID":"ddd91825-ce67-48e7-8c8c-fcd73c025703","Type":"ContainerDied","Data":"40e4d1e244b43ff6937566bd60e9bb97c1b549c2f1ccb5c1cf970a0b167bfc64"} Nov 29 07:08:51 crc kubenswrapper[4731]: I1129 07:08:51.362464 4731 generic.go:334] "Generic (PLEG): container finished" podID="ee11152f-267c-4a04-bd4b-84eec0eff00e" containerID="5b3e013eaa497bab6d28e822381a1819270b6fc1d266227b461e88f4b1d786ed" exitCode=0 Nov 29 07:08:51 crc kubenswrapper[4731]: I1129 07:08:51.362936 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gv68n" event={"ID":"ee11152f-267c-4a04-bd4b-84eec0eff00e","Type":"ContainerDied","Data":"5b3e013eaa497bab6d28e822381a1819270b6fc1d266227b461e88f4b1d786ed"} Nov 29 07:08:51 crc kubenswrapper[4731]: I1129 07:08:51.363933 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gv68n" event={"ID":"ee11152f-267c-4a04-bd4b-84eec0eff00e","Type":"ContainerStarted","Data":"2a30e558ac8dcade4a2950433ebbf28dc1df2aafe97baffc7afeb3faf9eb7426"} Nov 29 07:08:51 crc kubenswrapper[4731]: I1129 07:08:51.539652 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 29 07:08:51 crc kubenswrapper[4731]: I1129 07:08:51.540472 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 29 07:08:51 crc kubenswrapper[4731]: I1129 07:08:51.543269 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Nov 29 07:08:51 crc kubenswrapper[4731]: I1129 07:08:51.555125 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Nov 29 07:08:51 crc kubenswrapper[4731]: I1129 07:08:51.565240 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/55766ee0-68bc-407e-852a-60f53c599a99-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"55766ee0-68bc-407e-852a-60f53c599a99\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 29 07:08:51 crc kubenswrapper[4731]: I1129 07:08:51.565297 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/55766ee0-68bc-407e-852a-60f53c599a99-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"55766ee0-68bc-407e-852a-60f53c599a99\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 29 07:08:51 crc kubenswrapper[4731]: I1129 07:08:51.566864 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 29 07:08:51 crc kubenswrapper[4731]: I1129 07:08:51.670584 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/55766ee0-68bc-407e-852a-60f53c599a99-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"55766ee0-68bc-407e-852a-60f53c599a99\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 29 07:08:51 crc kubenswrapper[4731]: I1129 07:08:51.670718 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/55766ee0-68bc-407e-852a-60f53c599a99-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"55766ee0-68bc-407e-852a-60f53c599a99\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 29 07:08:51 crc kubenswrapper[4731]: I1129 07:08:51.670760 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/55766ee0-68bc-407e-852a-60f53c599a99-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"55766ee0-68bc-407e-852a-60f53c599a99\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 29 07:08:51 crc kubenswrapper[4731]: I1129 07:08:51.703049 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/55766ee0-68bc-407e-852a-60f53c599a99-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"55766ee0-68bc-407e-852a-60f53c599a99\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 29 07:08:51 crc kubenswrapper[4731]: I1129 07:08:51.871376 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 29 07:08:51 crc kubenswrapper[4731]: I1129 07:08:51.872855 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 29 07:08:51 crc kubenswrapper[4731]: I1129 07:08:51.877468 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Nov 29 07:08:51 crc kubenswrapper[4731]: I1129 07:08:51.877739 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Nov 29 07:08:51 crc kubenswrapper[4731]: I1129 07:08:51.888876 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 29 07:08:51 crc kubenswrapper[4731]: I1129 07:08:51.897666 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 29 07:08:51 crc kubenswrapper[4731]: I1129 07:08:51.978052 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b59184ef-6eb3-46f1-8e39-1abc9f684774-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"b59184ef-6eb3-46f1-8e39-1abc9f684774\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 29 07:08:51 crc kubenswrapper[4731]: I1129 07:08:51.978203 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b59184ef-6eb3-46f1-8e39-1abc9f684774-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"b59184ef-6eb3-46f1-8e39-1abc9f684774\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 29 07:08:52 crc kubenswrapper[4731]: I1129 07:08:52.080247 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b59184ef-6eb3-46f1-8e39-1abc9f684774-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: 
\"b59184ef-6eb3-46f1-8e39-1abc9f684774\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 29 07:08:52 crc kubenswrapper[4731]: I1129 07:08:52.080292 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b59184ef-6eb3-46f1-8e39-1abc9f684774-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"b59184ef-6eb3-46f1-8e39-1abc9f684774\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 29 07:08:52 crc kubenswrapper[4731]: I1129 07:08:52.080418 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b59184ef-6eb3-46f1-8e39-1abc9f684774-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"b59184ef-6eb3-46f1-8e39-1abc9f684774\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 29 07:08:52 crc kubenswrapper[4731]: I1129 07:08:52.103979 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b59184ef-6eb3-46f1-8e39-1abc9f684774-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"b59184ef-6eb3-46f1-8e39-1abc9f684774\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 29 07:08:52 crc kubenswrapper[4731]: I1129 07:08:52.200519 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 29 07:08:52 crc kubenswrapper[4731]: I1129 07:08:52.315278 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 29 07:08:52 crc kubenswrapper[4731]: W1129 07:08:52.345996 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod55766ee0_68bc_407e_852a_60f53c599a99.slice/crio-6480171ff5872429d124bda76f6d8c6aee46e216f1441344763ff9ff7168901e WatchSource:0}: Error finding container 6480171ff5872429d124bda76f6d8c6aee46e216f1441344763ff9ff7168901e: Status 404 returned error can't find the container with id 6480171ff5872429d124bda76f6d8c6aee46e216f1441344763ff9ff7168901e Nov 29 07:08:52 crc kubenswrapper[4731]: I1129 07:08:52.393736 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"55766ee0-68bc-407e-852a-60f53c599a99","Type":"ContainerStarted","Data":"6480171ff5872429d124bda76f6d8c6aee46e216f1441344763ff9ff7168901e"} Nov 29 07:08:52 crc kubenswrapper[4731]: I1129 07:08:52.951900 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 29 07:08:53 crc kubenswrapper[4731]: I1129 07:08:53.395285 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-cn2xp" Nov 29 07:08:53 crc kubenswrapper[4731]: I1129 07:08:53.415452 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"b59184ef-6eb3-46f1-8e39-1abc9f684774","Type":"ContainerStarted","Data":"072a4c2b791ba0a44911dbb83c69c45e41a7006b834c06d53d4979f96f098fe2"} Nov 29 07:08:54 crc kubenswrapper[4731]: I1129 07:08:54.445956 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" 
event={"ID":"b59184ef-6eb3-46f1-8e39-1abc9f684774","Type":"ContainerStarted","Data":"5af3c8a9b107cf4764e6f1aaabf54c8a058e6796750da6d160b3018b25a9cb5a"} Nov 29 07:08:54 crc kubenswrapper[4731]: I1129 07:08:54.459488 4731 generic.go:334] "Generic (PLEG): container finished" podID="55766ee0-68bc-407e-852a-60f53c599a99" containerID="7bcd7c6f0c79debca0463570d93f47a9a1bfc17ef954da8cd108a94c60dcc018" exitCode=0 Nov 29 07:08:54 crc kubenswrapper[4731]: I1129 07:08:54.459538 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"55766ee0-68bc-407e-852a-60f53c599a99","Type":"ContainerDied","Data":"7bcd7c6f0c79debca0463570d93f47a9a1bfc17ef954da8cd108a94c60dcc018"} Nov 29 07:08:54 crc kubenswrapper[4731]: I1129 07:08:54.491131 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=3.491103433 podStartE2EDuration="3.491103433s" podCreationTimestamp="2025-11-29 07:08:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:08:54.465899173 +0000 UTC m=+173.356260286" watchObservedRunningTime="2025-11-29 07:08:54.491103433 +0000 UTC m=+173.381464536" Nov 29 07:08:55 crc kubenswrapper[4731]: I1129 07:08:55.486309 4731 generic.go:334] "Generic (PLEG): container finished" podID="b59184ef-6eb3-46f1-8e39-1abc9f684774" containerID="5af3c8a9b107cf4764e6f1aaabf54c8a058e6796750da6d160b3018b25a9cb5a" exitCode=0 Nov 29 07:08:55 crc kubenswrapper[4731]: I1129 07:08:55.486470 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"b59184ef-6eb3-46f1-8e39-1abc9f684774","Type":"ContainerDied","Data":"5af3c8a9b107cf4764e6f1aaabf54c8a058e6796750da6d160b3018b25a9cb5a"} Nov 29 07:08:55 crc kubenswrapper[4731]: I1129 07:08:55.827079 4731 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 29 07:08:55 crc kubenswrapper[4731]: I1129 07:08:55.863973 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/55766ee0-68bc-407e-852a-60f53c599a99-kubelet-dir\") pod \"55766ee0-68bc-407e-852a-60f53c599a99\" (UID: \"55766ee0-68bc-407e-852a-60f53c599a99\") " Nov 29 07:08:55 crc kubenswrapper[4731]: I1129 07:08:55.864116 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/55766ee0-68bc-407e-852a-60f53c599a99-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "55766ee0-68bc-407e-852a-60f53c599a99" (UID: "55766ee0-68bc-407e-852a-60f53c599a99"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:08:55 crc kubenswrapper[4731]: I1129 07:08:55.864413 4731 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/55766ee0-68bc-407e-852a-60f53c599a99-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 29 07:08:55 crc kubenswrapper[4731]: I1129 07:08:55.965441 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/55766ee0-68bc-407e-852a-60f53c599a99-kube-api-access\") pod \"55766ee0-68bc-407e-852a-60f53c599a99\" (UID: \"55766ee0-68bc-407e-852a-60f53c599a99\") " Nov 29 07:08:55 crc kubenswrapper[4731]: I1129 07:08:55.993426 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55766ee0-68bc-407e-852a-60f53c599a99-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "55766ee0-68bc-407e-852a-60f53c599a99" (UID: "55766ee0-68bc-407e-852a-60f53c599a99"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:08:56 crc kubenswrapper[4731]: I1129 07:08:56.067914 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/55766ee0-68bc-407e-852a-60f53c599a99-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 29 07:08:56 crc kubenswrapper[4731]: I1129 07:08:56.498715 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"55766ee0-68bc-407e-852a-60f53c599a99","Type":"ContainerDied","Data":"6480171ff5872429d124bda76f6d8c6aee46e216f1441344763ff9ff7168901e"} Nov 29 07:08:56 crc kubenswrapper[4731]: I1129 07:08:56.498765 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6480171ff5872429d124bda76f6d8c6aee46e216f1441344763ff9ff7168901e" Nov 29 07:08:56 crc kubenswrapper[4731]: I1129 07:08:56.498851 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 29 07:08:56 crc kubenswrapper[4731]: I1129 07:08:56.778363 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/944440c1-51b2-4c49-b5fd-4c024fc33ace-metrics-certs\") pod \"network-metrics-daemon-2pp9l\" (UID: \"944440c1-51b2-4c49-b5fd-4c024fc33ace\") " pod="openshift-multus/network-metrics-daemon-2pp9l" Nov 29 07:08:56 crc kubenswrapper[4731]: I1129 07:08:56.800060 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/944440c1-51b2-4c49-b5fd-4c024fc33ace-metrics-certs\") pod \"network-metrics-daemon-2pp9l\" (UID: \"944440c1-51b2-4c49-b5fd-4c024fc33ace\") " pod="openshift-multus/network-metrics-daemon-2pp9l" Nov 29 07:08:56 crc kubenswrapper[4731]: I1129 07:08:56.824160 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2pp9l" Nov 29 07:08:57 crc kubenswrapper[4731]: I1129 07:08:57.358514 4731 patch_prober.go:28] interesting pod/console-f9d7485db-htrhs container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.6:8443/health\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Nov 29 07:08:57 crc kubenswrapper[4731]: I1129 07:08:57.358592 4731 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-htrhs" podUID="55949699-24bb-4705-8bf0-db1dd651d387" containerName="console" probeResult="failure" output="Get \"https://10.217.0.6:8443/health\": dial tcp 10.217.0.6:8443: connect: connection refused" Nov 29 07:08:58 crc kubenswrapper[4731]: I1129 07:08:58.593476 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-c6kf9" Nov 29 07:09:03 crc kubenswrapper[4731]: I1129 07:09:03.002411 4731 patch_prober.go:28] interesting pod/machine-config-daemon-rscr8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:09:03 crc kubenswrapper[4731]: I1129 07:09:03.003509 4731 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:09:07 crc kubenswrapper[4731]: I1129 07:09:07.362797 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-htrhs" Nov 29 07:09:07 crc kubenswrapper[4731]: I1129 07:09:07.367877 4731 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-htrhs" Nov 29 07:09:08 crc kubenswrapper[4731]: I1129 07:09:08.304136 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-8nrfn" Nov 29 07:09:09 crc kubenswrapper[4731]: I1129 07:09:09.241448 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 29 07:09:09 crc kubenswrapper[4731]: I1129 07:09:09.402643 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b59184ef-6eb3-46f1-8e39-1abc9f684774-kube-api-access\") pod \"b59184ef-6eb3-46f1-8e39-1abc9f684774\" (UID: \"b59184ef-6eb3-46f1-8e39-1abc9f684774\") " Nov 29 07:09:09 crc kubenswrapper[4731]: I1129 07:09:09.402852 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b59184ef-6eb3-46f1-8e39-1abc9f684774-kubelet-dir\") pod \"b59184ef-6eb3-46f1-8e39-1abc9f684774\" (UID: \"b59184ef-6eb3-46f1-8e39-1abc9f684774\") " Nov 29 07:09:09 crc kubenswrapper[4731]: I1129 07:09:09.403055 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b59184ef-6eb3-46f1-8e39-1abc9f684774-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "b59184ef-6eb3-46f1-8e39-1abc9f684774" (UID: "b59184ef-6eb3-46f1-8e39-1abc9f684774"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:09:09 crc kubenswrapper[4731]: I1129 07:09:09.403316 4731 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b59184ef-6eb3-46f1-8e39-1abc9f684774-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 29 07:09:09 crc kubenswrapper[4731]: I1129 07:09:09.411410 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b59184ef-6eb3-46f1-8e39-1abc9f684774-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "b59184ef-6eb3-46f1-8e39-1abc9f684774" (UID: "b59184ef-6eb3-46f1-8e39-1abc9f684774"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:09:09 crc kubenswrapper[4731]: I1129 07:09:09.504426 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b59184ef-6eb3-46f1-8e39-1abc9f684774-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 29 07:09:09 crc kubenswrapper[4731]: I1129 07:09:09.590616 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"b59184ef-6eb3-46f1-8e39-1abc9f684774","Type":"ContainerDied","Data":"072a4c2b791ba0a44911dbb83c69c45e41a7006b834c06d53d4979f96f098fe2"} Nov 29 07:09:09 crc kubenswrapper[4731]: I1129 07:09:09.590673 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="072a4c2b791ba0a44911dbb83c69c45e41a7006b834c06d53d4979f96f098fe2" Nov 29 07:09:09 crc kubenswrapper[4731]: I1129 07:09:09.590669 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 29 07:09:14 crc kubenswrapper[4731]: I1129 07:09:14.846079 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:09:19 crc kubenswrapper[4731]: I1129 07:09:19.161918 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dzwrr" Nov 29 07:09:24 crc kubenswrapper[4731]: E1129 07:09:24.947381 4731 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Nov 29 07:09:24 crc kubenswrapper[4731]: E1129 07:09:24.948329 4731 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s6lhf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-n9x6g_openshift-marketplace(8519b0da-9e0e-4c34-98b0-cbcb4030af39): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 29 07:09:24 crc kubenswrapper[4731]: E1129 07:09:24.950739 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-n9x6g" podUID="8519b0da-9e0e-4c34-98b0-cbcb4030af39" Nov 29 07:09:24 crc 
kubenswrapper[4731]: E1129 07:09:24.987618 4731 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Nov 29 07:09:24 crc kubenswrapper[4731]: E1129 07:09:24.987875 4731 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h9ptn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-marketplace-kjmcw_openshift-marketplace(041c9fb8-1657-4070-8649-0297bbba2df1): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 29 07:09:24 crc kubenswrapper[4731]: E1129 07:09:24.989182 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-kjmcw" podUID="041c9fb8-1657-4070-8649-0297bbba2df1" Nov 29 07:09:25 crc kubenswrapper[4731]: I1129 07:09:25.140314 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Nov 29 07:09:25 crc kubenswrapper[4731]: E1129 07:09:25.141109 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b59184ef-6eb3-46f1-8e39-1abc9f684774" containerName="pruner" Nov 29 07:09:25 crc kubenswrapper[4731]: I1129 07:09:25.141130 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="b59184ef-6eb3-46f1-8e39-1abc9f684774" containerName="pruner" Nov 29 07:09:25 crc kubenswrapper[4731]: E1129 07:09:25.141139 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55766ee0-68bc-407e-852a-60f53c599a99" containerName="pruner" Nov 29 07:09:25 crc kubenswrapper[4731]: I1129 07:09:25.141145 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="55766ee0-68bc-407e-852a-60f53c599a99" containerName="pruner" Nov 29 07:09:25 crc kubenswrapper[4731]: I1129 07:09:25.141255 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="55766ee0-68bc-407e-852a-60f53c599a99" containerName="pruner" Nov 29 07:09:25 crc kubenswrapper[4731]: I1129 07:09:25.141269 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="b59184ef-6eb3-46f1-8e39-1abc9f684774" containerName="pruner" Nov 29 07:09:25 crc kubenswrapper[4731]: 
I1129 07:09:25.141733 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 29 07:09:25 crc kubenswrapper[4731]: I1129 07:09:25.144647 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Nov 29 07:09:25 crc kubenswrapper[4731]: I1129 07:09:25.145875 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Nov 29 07:09:25 crc kubenswrapper[4731]: I1129 07:09:25.149324 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Nov 29 07:09:25 crc kubenswrapper[4731]: I1129 07:09:25.249819 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8ee8c954-2d17-4f01-9588-2849b4bb7bf0-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"8ee8c954-2d17-4f01-9588-2849b4bb7bf0\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 29 07:09:25 crc kubenswrapper[4731]: I1129 07:09:25.249935 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8ee8c954-2d17-4f01-9588-2849b4bb7bf0-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"8ee8c954-2d17-4f01-9588-2849b4bb7bf0\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 29 07:09:25 crc kubenswrapper[4731]: I1129 07:09:25.353033 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8ee8c954-2d17-4f01-9588-2849b4bb7bf0-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"8ee8c954-2d17-4f01-9588-2849b4bb7bf0\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 29 07:09:25 crc kubenswrapper[4731]: I1129 07:09:25.353205 4731 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8ee8c954-2d17-4f01-9588-2849b4bb7bf0-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"8ee8c954-2d17-4f01-9588-2849b4bb7bf0\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 29 07:09:25 crc kubenswrapper[4731]: I1129 07:09:25.353316 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8ee8c954-2d17-4f01-9588-2849b4bb7bf0-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"8ee8c954-2d17-4f01-9588-2849b4bb7bf0\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 29 07:09:25 crc kubenswrapper[4731]: I1129 07:09:25.383845 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8ee8c954-2d17-4f01-9588-2849b4bb7bf0-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"8ee8c954-2d17-4f01-9588-2849b4bb7bf0\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 29 07:09:25 crc kubenswrapper[4731]: I1129 07:09:25.483743 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 29 07:09:28 crc kubenswrapper[4731]: E1129 07:09:28.296037 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-n9x6g" podUID="8519b0da-9e0e-4c34-98b0-cbcb4030af39" Nov 29 07:09:28 crc kubenswrapper[4731]: E1129 07:09:28.296159 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-kjmcw" podUID="041c9fb8-1657-4070-8649-0297bbba2df1" Nov 29 07:09:28 crc kubenswrapper[4731]: E1129 07:09:28.398004 4731 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Nov 29 07:09:28 crc kubenswrapper[4731]: E1129 07:09:28.398205 4731 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-55d2p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-hv85m_openshift-marketplace(d246bdda-5a16-4924-a12a-b29095474226): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 29 07:09:28 crc kubenswrapper[4731]: E1129 07:09:28.402012 4731 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Nov 29 07:09:28 crc kubenswrapper[4731]: E1129 07:09:28.402124 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-hv85m" podUID="d246bdda-5a16-4924-a12a-b29095474226" Nov 29 07:09:28 crc kubenswrapper[4731]: E1129 07:09:28.402355 4731 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tzkzg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-operators-gv68n_openshift-marketplace(ee11152f-267c-4a04-bd4b-84eec0eff00e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 29 07:09:28 crc kubenswrapper[4731]: E1129 07:09:28.403795 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-gv68n" podUID="ee11152f-267c-4a04-bd4b-84eec0eff00e" Nov 29 07:09:29 crc kubenswrapper[4731]: I1129 07:09:29.345722 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Nov 29 07:09:29 crc kubenswrapper[4731]: I1129 07:09:29.352244 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Nov 29 07:09:29 crc kubenswrapper[4731]: I1129 07:09:29.368816 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Nov 29 07:09:29 crc kubenswrapper[4731]: I1129 07:09:29.516648 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/80df65af-cffa-42d9-b609-7e90950979e2-kubelet-dir\") pod \"installer-9-crc\" (UID: \"80df65af-cffa-42d9-b609-7e90950979e2\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 29 07:09:29 crc kubenswrapper[4731]: I1129 07:09:29.516731 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/80df65af-cffa-42d9-b609-7e90950979e2-kube-api-access\") pod \"installer-9-crc\" (UID: \"80df65af-cffa-42d9-b609-7e90950979e2\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 29 07:09:29 crc kubenswrapper[4731]: I1129 
07:09:29.516760 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/80df65af-cffa-42d9-b609-7e90950979e2-var-lock\") pod \"installer-9-crc\" (UID: \"80df65af-cffa-42d9-b609-7e90950979e2\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 29 07:09:29 crc kubenswrapper[4731]: I1129 07:09:29.618549 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/80df65af-cffa-42d9-b609-7e90950979e2-kubelet-dir\") pod \"installer-9-crc\" (UID: \"80df65af-cffa-42d9-b609-7e90950979e2\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 29 07:09:29 crc kubenswrapper[4731]: I1129 07:09:29.618649 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/80df65af-cffa-42d9-b609-7e90950979e2-kubelet-dir\") pod \"installer-9-crc\" (UID: \"80df65af-cffa-42d9-b609-7e90950979e2\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 29 07:09:29 crc kubenswrapper[4731]: I1129 07:09:29.618744 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/80df65af-cffa-42d9-b609-7e90950979e2-kube-api-access\") pod \"installer-9-crc\" (UID: \"80df65af-cffa-42d9-b609-7e90950979e2\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 29 07:09:29 crc kubenswrapper[4731]: I1129 07:09:29.619158 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/80df65af-cffa-42d9-b609-7e90950979e2-var-lock\") pod \"installer-9-crc\" (UID: \"80df65af-cffa-42d9-b609-7e90950979e2\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 29 07:09:29 crc kubenswrapper[4731]: I1129 07:09:29.619800 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/80df65af-cffa-42d9-b609-7e90950979e2-var-lock\") pod \"installer-9-crc\" (UID: \"80df65af-cffa-42d9-b609-7e90950979e2\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 29 07:09:29 crc kubenswrapper[4731]: I1129 07:09:29.646980 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/80df65af-cffa-42d9-b609-7e90950979e2-kube-api-access\") pod \"installer-9-crc\" (UID: \"80df65af-cffa-42d9-b609-7e90950979e2\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 29 07:09:29 crc kubenswrapper[4731]: I1129 07:09:29.694819 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Nov 29 07:09:31 crc kubenswrapper[4731]: E1129 07:09:31.449614 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-hv85m" podUID="d246bdda-5a16-4924-a12a-b29095474226" Nov 29 07:09:31 crc kubenswrapper[4731]: E1129 07:09:31.450031 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gv68n" podUID="ee11152f-267c-4a04-bd4b-84eec0eff00e" Nov 29 07:09:31 crc kubenswrapper[4731]: I1129 07:09:31.940263 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Nov 29 07:09:32 crc kubenswrapper[4731]: I1129 07:09:32.061232 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-2pp9l"] Nov 29 07:09:32 crc kubenswrapper[4731]: W1129 07:09:32.065981 4731 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod944440c1_51b2_4c49_b5fd_4c024fc33ace.slice/crio-83d4c9105162712531d8985df5b72780f46140b8d9e3750dbe80c9f62ec8c909 WatchSource:0}: Error finding container 83d4c9105162712531d8985df5b72780f46140b8d9e3750dbe80c9f62ec8c909: Status 404 returned error can't find the container with id 83d4c9105162712531d8985df5b72780f46140b8d9e3750dbe80c9f62ec8c909 Nov 29 07:09:32 crc kubenswrapper[4731]: I1129 07:09:32.066667 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Nov 29 07:09:32 crc kubenswrapper[4731]: W1129 07:09:32.086820 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod8ee8c954_2d17_4f01_9588_2849b4bb7bf0.slice/crio-043cd4cd11eed0bebed6d2d069942db7ca9c5ab02e9f41f38e4fcde378f33d2f WatchSource:0}: Error finding container 043cd4cd11eed0bebed6d2d069942db7ca9c5ab02e9f41f38e4fcde378f33d2f: Status 404 returned error can't find the container with id 043cd4cd11eed0bebed6d2d069942db7ca9c5ab02e9f41f38e4fcde378f33d2f Nov 29 07:09:32 crc kubenswrapper[4731]: E1129 07:09:32.388019 4731 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Nov 29 07:09:32 crc kubenswrapper[4731]: E1129 07:09:32.388241 4731 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rffjk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-kp4gj_openshift-marketplace(90d637c3-be0e-49b6-ac5a-5cb721948345): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 29 07:09:32 crc kubenswrapper[4731]: E1129 07:09:32.389529 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-kp4gj" podUID="90d637c3-be0e-49b6-ac5a-5cb721948345" Nov 29 07:09:32 crc 
kubenswrapper[4731]: E1129 07:09:32.522188 4731 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Nov 29 07:09:32 crc kubenswrapper[4731]: E1129 07:09:32.522925 4731 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gtcwv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
certified-operators-stkcw_openshift-marketplace(7cde9a9c-1d79-4400-8830-69f304229886): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 29 07:09:32 crc kubenswrapper[4731]: E1129 07:09:32.524246 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-stkcw" podUID="7cde9a9c-1d79-4400-8830-69f304229886" Nov 29 07:09:32 crc kubenswrapper[4731]: E1129 07:09:32.570874 4731 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Nov 29 07:09:32 crc kubenswrapper[4731]: E1129 07:09:32.571100 4731 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ssxhl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-c9bpb_openshift-marketplace(84b54257-ab5f-4f89-8ff2-5f725c4b8662): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 29 07:09:32 crc kubenswrapper[4731]: E1129 07:09:32.572810 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-c9bpb" podUID="84b54257-ab5f-4f89-8ff2-5f725c4b8662" Nov 29 07:09:32 crc 
kubenswrapper[4731]: I1129 07:09:32.720407 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"8ee8c954-2d17-4f01-9588-2849b4bb7bf0","Type":"ContainerStarted","Data":"480d680ddcba6819c96b21c2c4417ca9a916f96a4b043c56ee88ff950a6b0277"} Nov 29 07:09:32 crc kubenswrapper[4731]: I1129 07:09:32.720474 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"8ee8c954-2d17-4f01-9588-2849b4bb7bf0","Type":"ContainerStarted","Data":"043cd4cd11eed0bebed6d2d069942db7ca9c5ab02e9f41f38e4fcde378f33d2f"} Nov 29 07:09:32 crc kubenswrapper[4731]: I1129 07:09:32.724790 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-2pp9l" event={"ID":"944440c1-51b2-4c49-b5fd-4c024fc33ace","Type":"ContainerStarted","Data":"454c12af7bbbe521b9d5dced16f027851aec86c734d7b80d9f8cb57222532ec5"} Nov 29 07:09:32 crc kubenswrapper[4731]: I1129 07:09:32.724871 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-2pp9l" event={"ID":"944440c1-51b2-4c49-b5fd-4c024fc33ace","Type":"ContainerStarted","Data":"3a7c58317b5804c27beb6fd6f65b05ace07b39f8224e1b95f4d5a9dc0466e3a8"} Nov 29 07:09:32 crc kubenswrapper[4731]: I1129 07:09:32.724887 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-2pp9l" event={"ID":"944440c1-51b2-4c49-b5fd-4c024fc33ace","Type":"ContainerStarted","Data":"83d4c9105162712531d8985df5b72780f46140b8d9e3750dbe80c9f62ec8c909"} Nov 29 07:09:32 crc kubenswrapper[4731]: I1129 07:09:32.731064 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"80df65af-cffa-42d9-b609-7e90950979e2","Type":"ContainerStarted","Data":"518bec76116451f6aeb719dfd9a574a594bd5fbfe1a946e3347530166803951f"} Nov 29 07:09:32 crc kubenswrapper[4731]: I1129 07:09:32.731130 4731 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"80df65af-cffa-42d9-b609-7e90950979e2","Type":"ContainerStarted","Data":"b89add525edcbe3587451e63dd0a534f55418494f7a0ab1cc6f93f869bc97f95"} Nov 29 07:09:32 crc kubenswrapper[4731]: I1129 07:09:32.733769 4731 generic.go:334] "Generic (PLEG): container finished" podID="ddd91825-ce67-48e7-8c8c-fcd73c025703" containerID="bd65701ce7cc295597f291658af75f489d893f268514da72858de9ec42e85de0" exitCode=0 Nov 29 07:09:32 crc kubenswrapper[4731]: I1129 07:09:32.733890 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n7tj2" event={"ID":"ddd91825-ce67-48e7-8c8c-fcd73c025703","Type":"ContainerDied","Data":"bd65701ce7cc295597f291658af75f489d893f268514da72858de9ec42e85de0"} Nov 29 07:09:32 crc kubenswrapper[4731]: E1129 07:09:32.737616 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-kp4gj" podUID="90d637c3-be0e-49b6-ac5a-5cb721948345" Nov 29 07:09:32 crc kubenswrapper[4731]: E1129 07:09:32.737663 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-c9bpb" podUID="84b54257-ab5f-4f89-8ff2-5f725c4b8662" Nov 29 07:09:32 crc kubenswrapper[4731]: E1129 07:09:32.737916 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-stkcw" podUID="7cde9a9c-1d79-4400-8830-69f304229886" Nov 29 
07:09:32 crc kubenswrapper[4731]: I1129 07:09:32.749303 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=7.749282998 podStartE2EDuration="7.749282998s" podCreationTimestamp="2025-11-29 07:09:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:09:32.745249008 +0000 UTC m=+211.635610111" watchObservedRunningTime="2025-11-29 07:09:32.749282998 +0000 UTC m=+211.639644101" Nov 29 07:09:32 crc kubenswrapper[4731]: I1129 07:09:32.857377 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=3.857352069 podStartE2EDuration="3.857352069s" podCreationTimestamp="2025-11-29 07:09:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:09:32.856596669 +0000 UTC m=+211.746957772" watchObservedRunningTime="2025-11-29 07:09:32.857352069 +0000 UTC m=+211.747713172" Nov 29 07:09:32 crc kubenswrapper[4731]: I1129 07:09:32.881277 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-2pp9l" podStartSLOduration=182.881252074 podStartE2EDuration="3m2.881252074s" podCreationTimestamp="2025-11-29 07:06:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:09:32.876850993 +0000 UTC m=+211.767212106" watchObservedRunningTime="2025-11-29 07:09:32.881252074 +0000 UTC m=+211.771613177" Nov 29 07:09:33 crc kubenswrapper[4731]: I1129 07:09:33.002816 4731 patch_prober.go:28] interesting pod/machine-config-daemon-rscr8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:09:33 crc kubenswrapper[4731]: I1129 07:09:33.003390 4731 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:09:33 crc kubenswrapper[4731]: I1129 07:09:33.003457 4731 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" Nov 29 07:09:33 crc kubenswrapper[4731]: I1129 07:09:33.004518 4731 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c2ffc93896b04d748f6dfda932202450e20bc7b298cc0d79c0c6499ead481d8c"} pod="openshift-machine-config-operator/machine-config-daemon-rscr8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 29 07:09:33 crc kubenswrapper[4731]: I1129 07:09:33.004687 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" containerName="machine-config-daemon" containerID="cri-o://c2ffc93896b04d748f6dfda932202450e20bc7b298cc0d79c0c6499ead481d8c" gracePeriod=600 Nov 29 07:09:33 crc kubenswrapper[4731]: I1129 07:09:33.745955 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n7tj2" event={"ID":"ddd91825-ce67-48e7-8c8c-fcd73c025703","Type":"ContainerStarted","Data":"fd49ed76e193b5c0bf07655284ff51e0015db6a189f26fab7bc6f7eb25d45bb3"} Nov 29 07:09:33 crc kubenswrapper[4731]: I1129 07:09:33.749355 4731 generic.go:334] "Generic (PLEG): container finished" 
podID="8ee8c954-2d17-4f01-9588-2849b4bb7bf0" containerID="480d680ddcba6819c96b21c2c4417ca9a916f96a4b043c56ee88ff950a6b0277" exitCode=0 Nov 29 07:09:33 crc kubenswrapper[4731]: I1129 07:09:33.749431 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"8ee8c954-2d17-4f01-9588-2849b4bb7bf0","Type":"ContainerDied","Data":"480d680ddcba6819c96b21c2c4417ca9a916f96a4b043c56ee88ff950a6b0277"} Nov 29 07:09:33 crc kubenswrapper[4731]: I1129 07:09:33.752360 4731 generic.go:334] "Generic (PLEG): container finished" podID="2302dbb7-38db-4752-a5d0-2d055da3aec3" containerID="c2ffc93896b04d748f6dfda932202450e20bc7b298cc0d79c0c6499ead481d8c" exitCode=0 Nov 29 07:09:33 crc kubenswrapper[4731]: I1129 07:09:33.753216 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" event={"ID":"2302dbb7-38db-4752-a5d0-2d055da3aec3","Type":"ContainerDied","Data":"c2ffc93896b04d748f6dfda932202450e20bc7b298cc0d79c0c6499ead481d8c"} Nov 29 07:09:33 crc kubenswrapper[4731]: I1129 07:09:33.753251 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" event={"ID":"2302dbb7-38db-4752-a5d0-2d055da3aec3","Type":"ContainerStarted","Data":"ca99db39a60fe421bcd1cc3436c5d0f329f6d5a18c512d839a8790b1dc8cf430"} Nov 29 07:09:33 crc kubenswrapper[4731]: I1129 07:09:33.771757 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-n7tj2" podStartSLOduration=3.892948082 podStartE2EDuration="45.77173087s" podCreationTimestamp="2025-11-29 07:08:48 +0000 UTC" firstStartedPulling="2025-11-29 07:08:51.338268065 +0000 UTC m=+170.228629168" lastFinishedPulling="2025-11-29 07:09:33.217050853 +0000 UTC m=+212.107411956" observedRunningTime="2025-11-29 07:09:33.767336409 +0000 UTC m=+212.657697532" watchObservedRunningTime="2025-11-29 07:09:33.77173087 +0000 UTC 
m=+212.662091973" Nov 29 07:09:35 crc kubenswrapper[4731]: I1129 07:09:35.015134 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 29 07:09:35 crc kubenswrapper[4731]: I1129 07:09:35.101822 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8ee8c954-2d17-4f01-9588-2849b4bb7bf0-kube-api-access\") pod \"8ee8c954-2d17-4f01-9588-2849b4bb7bf0\" (UID: \"8ee8c954-2d17-4f01-9588-2849b4bb7bf0\") " Nov 29 07:09:35 crc kubenswrapper[4731]: I1129 07:09:35.101930 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8ee8c954-2d17-4f01-9588-2849b4bb7bf0-kubelet-dir\") pod \"8ee8c954-2d17-4f01-9588-2849b4bb7bf0\" (UID: \"8ee8c954-2d17-4f01-9588-2849b4bb7bf0\") " Nov 29 07:09:35 crc kubenswrapper[4731]: I1129 07:09:35.102145 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ee8c954-2d17-4f01-9588-2849b4bb7bf0-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "8ee8c954-2d17-4f01-9588-2849b4bb7bf0" (UID: "8ee8c954-2d17-4f01-9588-2849b4bb7bf0"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:09:35 crc kubenswrapper[4731]: I1129 07:09:35.102378 4731 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8ee8c954-2d17-4f01-9588-2849b4bb7bf0-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 29 07:09:35 crc kubenswrapper[4731]: I1129 07:09:35.111946 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ee8c954-2d17-4f01-9588-2849b4bb7bf0-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "8ee8c954-2d17-4f01-9588-2849b4bb7bf0" (UID: "8ee8c954-2d17-4f01-9588-2849b4bb7bf0"). 
InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:09:35 crc kubenswrapper[4731]: I1129 07:09:35.203262 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8ee8c954-2d17-4f01-9588-2849b4bb7bf0-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 29 07:09:35 crc kubenswrapper[4731]: I1129 07:09:35.766183 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"8ee8c954-2d17-4f01-9588-2849b4bb7bf0","Type":"ContainerDied","Data":"043cd4cd11eed0bebed6d2d069942db7ca9c5ab02e9f41f38e4fcde378f33d2f"} Nov 29 07:09:35 crc kubenswrapper[4731]: I1129 07:09:35.766250 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="043cd4cd11eed0bebed6d2d069942db7ca9c5ab02e9f41f38e4fcde378f33d2f" Nov 29 07:09:35 crc kubenswrapper[4731]: I1129 07:09:35.766318 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 29 07:09:38 crc kubenswrapper[4731]: I1129 07:09:38.938360 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-n7tj2" Nov 29 07:09:38 crc kubenswrapper[4731]: I1129 07:09:38.942752 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-n7tj2" Nov 29 07:09:39 crc kubenswrapper[4731]: I1129 07:09:39.021031 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-n7tj2" Nov 29 07:09:39 crc kubenswrapper[4731]: I1129 07:09:39.843663 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-n7tj2" Nov 29 07:09:39 crc kubenswrapper[4731]: I1129 07:09:39.895036 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-n7tj2"] Nov 29 07:09:41 crc kubenswrapper[4731]: I1129 07:09:41.805643 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-n7tj2" podUID="ddd91825-ce67-48e7-8c8c-fcd73c025703" containerName="registry-server" containerID="cri-o://fd49ed76e193b5c0bf07655284ff51e0015db6a189f26fab7bc6f7eb25d45bb3" gracePeriod=2 Nov 29 07:09:42 crc kubenswrapper[4731]: I1129 07:09:42.763163 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n7tj2" Nov 29 07:09:42 crc kubenswrapper[4731]: I1129 07:09:42.820718 4731 generic.go:334] "Generic (PLEG): container finished" podID="ddd91825-ce67-48e7-8c8c-fcd73c025703" containerID="fd49ed76e193b5c0bf07655284ff51e0015db6a189f26fab7bc6f7eb25d45bb3" exitCode=0 Nov 29 07:09:42 crc kubenswrapper[4731]: I1129 07:09:42.820810 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n7tj2" event={"ID":"ddd91825-ce67-48e7-8c8c-fcd73c025703","Type":"ContainerDied","Data":"fd49ed76e193b5c0bf07655284ff51e0015db6a189f26fab7bc6f7eb25d45bb3"} Nov 29 07:09:42 crc kubenswrapper[4731]: I1129 07:09:42.820860 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n7tj2" event={"ID":"ddd91825-ce67-48e7-8c8c-fcd73c025703","Type":"ContainerDied","Data":"05203ca9aa52f81e291f062a1c7e8a15c2c37ee4d740d73bbf86feffcdd278c8"} Nov 29 07:09:42 crc kubenswrapper[4731]: I1129 07:09:42.820869 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n7tj2" Nov 29 07:09:42 crc kubenswrapper[4731]: I1129 07:09:42.820888 4731 scope.go:117] "RemoveContainer" containerID="fd49ed76e193b5c0bf07655284ff51e0015db6a189f26fab7bc6f7eb25d45bb3" Nov 29 07:09:42 crc kubenswrapper[4731]: I1129 07:09:42.842449 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddd91825-ce67-48e7-8c8c-fcd73c025703-catalog-content\") pod \"ddd91825-ce67-48e7-8c8c-fcd73c025703\" (UID: \"ddd91825-ce67-48e7-8c8c-fcd73c025703\") " Nov 29 07:09:42 crc kubenswrapper[4731]: I1129 07:09:42.842688 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddd91825-ce67-48e7-8c8c-fcd73c025703-utilities\") pod \"ddd91825-ce67-48e7-8c8c-fcd73c025703\" (UID: \"ddd91825-ce67-48e7-8c8c-fcd73c025703\") " Nov 29 07:09:42 crc kubenswrapper[4731]: I1129 07:09:42.842955 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sjv7h\" (UniqueName: \"kubernetes.io/projected/ddd91825-ce67-48e7-8c8c-fcd73c025703-kube-api-access-sjv7h\") pod \"ddd91825-ce67-48e7-8c8c-fcd73c025703\" (UID: \"ddd91825-ce67-48e7-8c8c-fcd73c025703\") " Nov 29 07:09:42 crc kubenswrapper[4731]: I1129 07:09:42.844193 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ddd91825-ce67-48e7-8c8c-fcd73c025703-utilities" (OuterVolumeSpecName: "utilities") pod "ddd91825-ce67-48e7-8c8c-fcd73c025703" (UID: "ddd91825-ce67-48e7-8c8c-fcd73c025703"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:09:42 crc kubenswrapper[4731]: I1129 07:09:42.851436 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddd91825-ce67-48e7-8c8c-fcd73c025703-kube-api-access-sjv7h" (OuterVolumeSpecName: "kube-api-access-sjv7h") pod "ddd91825-ce67-48e7-8c8c-fcd73c025703" (UID: "ddd91825-ce67-48e7-8c8c-fcd73c025703"). InnerVolumeSpecName "kube-api-access-sjv7h". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:09:42 crc kubenswrapper[4731]: I1129 07:09:42.888337 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ddd91825-ce67-48e7-8c8c-fcd73c025703-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ddd91825-ce67-48e7-8c8c-fcd73c025703" (UID: "ddd91825-ce67-48e7-8c8c-fcd73c025703"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:09:42 crc kubenswrapper[4731]: I1129 07:09:42.945294 4731 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddd91825-ce67-48e7-8c8c-fcd73c025703-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 07:09:42 crc kubenswrapper[4731]: I1129 07:09:42.945371 4731 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddd91825-ce67-48e7-8c8c-fcd73c025703-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 07:09:42 crc kubenswrapper[4731]: I1129 07:09:42.945386 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sjv7h\" (UniqueName: \"kubernetes.io/projected/ddd91825-ce67-48e7-8c8c-fcd73c025703-kube-api-access-sjv7h\") on node \"crc\" DevicePath \"\"" Nov 29 07:09:43 crc kubenswrapper[4731]: I1129 07:09:43.175053 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-n7tj2"] Nov 29 07:09:43 crc kubenswrapper[4731]: I1129 
07:09:43.181118 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-n7tj2"] Nov 29 07:09:43 crc kubenswrapper[4731]: I1129 07:09:43.326558 4731 scope.go:117] "RemoveContainer" containerID="bd65701ce7cc295597f291658af75f489d893f268514da72858de9ec42e85de0" Nov 29 07:09:43 crc kubenswrapper[4731]: I1129 07:09:43.366248 4731 scope.go:117] "RemoveContainer" containerID="40e4d1e244b43ff6937566bd60e9bb97c1b549c2f1ccb5c1cf970a0b167bfc64" Nov 29 07:09:43 crc kubenswrapper[4731]: I1129 07:09:43.413760 4731 scope.go:117] "RemoveContainer" containerID="fd49ed76e193b5c0bf07655284ff51e0015db6a189f26fab7bc6f7eb25d45bb3" Nov 29 07:09:43 crc kubenswrapper[4731]: E1129 07:09:43.414877 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd49ed76e193b5c0bf07655284ff51e0015db6a189f26fab7bc6f7eb25d45bb3\": container with ID starting with fd49ed76e193b5c0bf07655284ff51e0015db6a189f26fab7bc6f7eb25d45bb3 not found: ID does not exist" containerID="fd49ed76e193b5c0bf07655284ff51e0015db6a189f26fab7bc6f7eb25d45bb3" Nov 29 07:09:43 crc kubenswrapper[4731]: I1129 07:09:43.414957 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd49ed76e193b5c0bf07655284ff51e0015db6a189f26fab7bc6f7eb25d45bb3"} err="failed to get container status \"fd49ed76e193b5c0bf07655284ff51e0015db6a189f26fab7bc6f7eb25d45bb3\": rpc error: code = NotFound desc = could not find container \"fd49ed76e193b5c0bf07655284ff51e0015db6a189f26fab7bc6f7eb25d45bb3\": container with ID starting with fd49ed76e193b5c0bf07655284ff51e0015db6a189f26fab7bc6f7eb25d45bb3 not found: ID does not exist" Nov 29 07:09:43 crc kubenswrapper[4731]: I1129 07:09:43.415005 4731 scope.go:117] "RemoveContainer" containerID="bd65701ce7cc295597f291658af75f489d893f268514da72858de9ec42e85de0" Nov 29 07:09:43 crc kubenswrapper[4731]: E1129 07:09:43.415503 4731 log.go:32] "ContainerStatus from 
runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd65701ce7cc295597f291658af75f489d893f268514da72858de9ec42e85de0\": container with ID starting with bd65701ce7cc295597f291658af75f489d893f268514da72858de9ec42e85de0 not found: ID does not exist" containerID="bd65701ce7cc295597f291658af75f489d893f268514da72858de9ec42e85de0" Nov 29 07:09:43 crc kubenswrapper[4731]: I1129 07:09:43.415547 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd65701ce7cc295597f291658af75f489d893f268514da72858de9ec42e85de0"} err="failed to get container status \"bd65701ce7cc295597f291658af75f489d893f268514da72858de9ec42e85de0\": rpc error: code = NotFound desc = could not find container \"bd65701ce7cc295597f291658af75f489d893f268514da72858de9ec42e85de0\": container with ID starting with bd65701ce7cc295597f291658af75f489d893f268514da72858de9ec42e85de0 not found: ID does not exist" Nov 29 07:09:43 crc kubenswrapper[4731]: I1129 07:09:43.415643 4731 scope.go:117] "RemoveContainer" containerID="40e4d1e244b43ff6937566bd60e9bb97c1b549c2f1ccb5c1cf970a0b167bfc64" Nov 29 07:09:43 crc kubenswrapper[4731]: E1129 07:09:43.416056 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40e4d1e244b43ff6937566bd60e9bb97c1b549c2f1ccb5c1cf970a0b167bfc64\": container with ID starting with 40e4d1e244b43ff6937566bd60e9bb97c1b549c2f1ccb5c1cf970a0b167bfc64 not found: ID does not exist" containerID="40e4d1e244b43ff6937566bd60e9bb97c1b549c2f1ccb5c1cf970a0b167bfc64" Nov 29 07:09:43 crc kubenswrapper[4731]: I1129 07:09:43.416106 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40e4d1e244b43ff6937566bd60e9bb97c1b549c2f1ccb5c1cf970a0b167bfc64"} err="failed to get container status \"40e4d1e244b43ff6937566bd60e9bb97c1b549c2f1ccb5c1cf970a0b167bfc64\": rpc error: code = NotFound desc = could not find container 
\"40e4d1e244b43ff6937566bd60e9bb97c1b549c2f1ccb5c1cf970a0b167bfc64\": container with ID starting with 40e4d1e244b43ff6937566bd60e9bb97c1b549c2f1ccb5c1cf970a0b167bfc64 not found: ID does not exist" Nov 29 07:09:43 crc kubenswrapper[4731]: I1129 07:09:43.816426 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ddd91825-ce67-48e7-8c8c-fcd73c025703" path="/var/lib/kubelet/pods/ddd91825-ce67-48e7-8c8c-fcd73c025703/volumes" Nov 29 07:09:44 crc kubenswrapper[4731]: I1129 07:09:44.885676 4731 generic.go:334] "Generic (PLEG): container finished" podID="041c9fb8-1657-4070-8649-0297bbba2df1" containerID="8bd59953abb16ec6ff20209c5b172eecb56a7073cf4b6345095a52e4da174e05" exitCode=0 Nov 29 07:09:44 crc kubenswrapper[4731]: I1129 07:09:44.885718 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kjmcw" event={"ID":"041c9fb8-1657-4070-8649-0297bbba2df1","Type":"ContainerDied","Data":"8bd59953abb16ec6ff20209c5b172eecb56a7073cf4b6345095a52e4da174e05"} Nov 29 07:09:44 crc kubenswrapper[4731]: I1129 07:09:44.900409 4731 generic.go:334] "Generic (PLEG): container finished" podID="8519b0da-9e0e-4c34-98b0-cbcb4030af39" containerID="6f5f02fa7a2b78a76693c9824adcb59ccf97616abd6688bf19b8df33cf7ced53" exitCode=0 Nov 29 07:09:44 crc kubenswrapper[4731]: I1129 07:09:44.900477 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n9x6g" event={"ID":"8519b0da-9e0e-4c34-98b0-cbcb4030af39","Type":"ContainerDied","Data":"6f5f02fa7a2b78a76693c9824adcb59ccf97616abd6688bf19b8df33cf7ced53"} Nov 29 07:09:46 crc kubenswrapper[4731]: I1129 07:09:46.923109 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-stkcw" event={"ID":"7cde9a9c-1d79-4400-8830-69f304229886","Type":"ContainerStarted","Data":"8bcc399cadf9d787e87bf180dfd55d9cf1d12fa61cb04a2207941d4eb253f040"} Nov 29 07:09:48 crc kubenswrapper[4731]: I1129 07:09:48.938044 4731 
generic.go:334] "Generic (PLEG): container finished" podID="7cde9a9c-1d79-4400-8830-69f304229886" containerID="8bcc399cadf9d787e87bf180dfd55d9cf1d12fa61cb04a2207941d4eb253f040" exitCode=0 Nov 29 07:09:48 crc kubenswrapper[4731]: I1129 07:09:48.938141 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-stkcw" event={"ID":"7cde9a9c-1d79-4400-8830-69f304229886","Type":"ContainerDied","Data":"8bcc399cadf9d787e87bf180dfd55d9cf1d12fa61cb04a2207941d4eb253f040"} Nov 29 07:09:48 crc kubenswrapper[4731]: I1129 07:09:48.940492 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kp4gj" event={"ID":"90d637c3-be0e-49b6-ac5a-5cb721948345","Type":"ContainerStarted","Data":"d832c2affe4ebafa5db4d505995dd8c698d797bbe1324941f34210dfc387fa9a"} Nov 29 07:09:49 crc kubenswrapper[4731]: I1129 07:09:49.950413 4731 generic.go:334] "Generic (PLEG): container finished" podID="90d637c3-be0e-49b6-ac5a-5cb721948345" containerID="d832c2affe4ebafa5db4d505995dd8c698d797bbe1324941f34210dfc387fa9a" exitCode=0 Nov 29 07:09:49 crc kubenswrapper[4731]: I1129 07:09:49.950608 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kp4gj" event={"ID":"90d637c3-be0e-49b6-ac5a-5cb721948345","Type":"ContainerDied","Data":"d832c2affe4ebafa5db4d505995dd8c698d797bbe1324941f34210dfc387fa9a"} Nov 29 07:10:00 crc kubenswrapper[4731]: I1129 07:10:00.013822 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-stkcw" event={"ID":"7cde9a9c-1d79-4400-8830-69f304229886","Type":"ContainerStarted","Data":"fc72c94a75ec57cc2ffaa5e277bd9a0d9b4ce29eb90d9993ce135561c05bf369"} Nov 29 07:10:00 crc kubenswrapper[4731]: I1129 07:10:00.017290 4731 generic.go:334] "Generic (PLEG): container finished" podID="84b54257-ab5f-4f89-8ff2-5f725c4b8662" containerID="4396488b625703a3fc37e41ce2773aa75a6a5269f97e775b0bc814439cd50171" exitCode=0 Nov 29 
07:10:00 crc kubenswrapper[4731]: I1129 07:10:00.017393 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c9bpb" event={"ID":"84b54257-ab5f-4f89-8ff2-5f725c4b8662","Type":"ContainerDied","Data":"4396488b625703a3fc37e41ce2773aa75a6a5269f97e775b0bc814439cd50171"} Nov 29 07:10:00 crc kubenswrapper[4731]: I1129 07:10:00.022315 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gv68n" event={"ID":"ee11152f-267c-4a04-bd4b-84eec0eff00e","Type":"ContainerStarted","Data":"4faede216427633422a99a69172064107f9432bf82d4f97e323e125f262b33ed"} Nov 29 07:10:00 crc kubenswrapper[4731]: I1129 07:10:00.026782 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n9x6g" event={"ID":"8519b0da-9e0e-4c34-98b0-cbcb4030af39","Type":"ContainerStarted","Data":"18254a0c7f20919524c3f4d26fbc4870e9904338f84aa29836fe16fad3d80c18"} Nov 29 07:10:00 crc kubenswrapper[4731]: I1129 07:10:00.033910 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kp4gj" event={"ID":"90d637c3-be0e-49b6-ac5a-5cb721948345","Type":"ContainerStarted","Data":"d3626ac2f78826d560a0e7bfd56b4e473ff34826ae871cb1c251ca4790a3949a"} Nov 29 07:10:00 crc kubenswrapper[4731]: I1129 07:10:00.036999 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kjmcw" event={"ID":"041c9fb8-1657-4070-8649-0297bbba2df1","Type":"ContainerStarted","Data":"dd7178cd50e720a23c3f11679435194f806c3194edb1b6795eda486a79cdaf16"} Nov 29 07:10:00 crc kubenswrapper[4731]: I1129 07:10:00.039138 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hv85m" event={"ID":"d246bdda-5a16-4924-a12a-b29095474226","Type":"ContainerStarted","Data":"346ce47a61c016901108d17b13d5d00f9c8cfe8dcfb8a19a4edc7f79ec44a7e6"} Nov 29 07:10:00 crc kubenswrapper[4731]: I1129 07:10:00.054789 4731 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-stkcw" podStartSLOduration=3.21191025 podStartE2EDuration="1m14.054773475s" podCreationTimestamp="2025-11-29 07:08:46 +0000 UTC" firstStartedPulling="2025-11-29 07:08:48.201329583 +0000 UTC m=+167.091690686" lastFinishedPulling="2025-11-29 07:09:59.044192808 +0000 UTC m=+237.934553911" observedRunningTime="2025-11-29 07:10:00.050683883 +0000 UTC m=+238.941044996" watchObservedRunningTime="2025-11-29 07:10:00.054773475 +0000 UTC m=+238.945134578" Nov 29 07:10:00 crc kubenswrapper[4731]: I1129 07:10:00.093032 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-n9x6g" podStartSLOduration=4.259897175 podStartE2EDuration="1m15.093002893s" podCreationTimestamp="2025-11-29 07:08:45 +0000 UTC" firstStartedPulling="2025-11-29 07:08:48.211055819 +0000 UTC m=+167.101416922" lastFinishedPulling="2025-11-29 07:09:59.044161537 +0000 UTC m=+237.934522640" observedRunningTime="2025-11-29 07:10:00.078860595 +0000 UTC m=+238.969221708" watchObservedRunningTime="2025-11-29 07:10:00.093002893 +0000 UTC m=+238.983363996" Nov 29 07:10:00 crc kubenswrapper[4731]: I1129 07:10:00.109548 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-kjmcw" podStartSLOduration=2.506915589 podStartE2EDuration="1m12.109520925s" podCreationTimestamp="2025-11-29 07:08:48 +0000 UTC" firstStartedPulling="2025-11-29 07:08:49.242919879 +0000 UTC m=+168.133280982" lastFinishedPulling="2025-11-29 07:09:58.845525215 +0000 UTC m=+237.735886318" observedRunningTime="2025-11-29 07:10:00.105377272 +0000 UTC m=+238.995738375" watchObservedRunningTime="2025-11-29 07:10:00.109520925 +0000 UTC m=+238.999882028" Nov 29 07:10:00 crc kubenswrapper[4731]: I1129 07:10:00.194421 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/certified-operators-kp4gj" podStartSLOduration=3.382468412 podStartE2EDuration="1m14.19439044s" podCreationTimestamp="2025-11-29 07:08:46 +0000 UTC" firstStartedPulling="2025-11-29 07:08:48.215072329 +0000 UTC m=+167.105433432" lastFinishedPulling="2025-11-29 07:09:59.026994357 +0000 UTC m=+237.917355460" observedRunningTime="2025-11-29 07:10:00.17282128 +0000 UTC m=+239.063182383" watchObservedRunningTime="2025-11-29 07:10:00.19439044 +0000 UTC m=+239.084751543" Nov 29 07:10:01 crc kubenswrapper[4731]: I1129 07:10:01.048821 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c9bpb" event={"ID":"84b54257-ab5f-4f89-8ff2-5f725c4b8662","Type":"ContainerStarted","Data":"c7acbc92316ef74b120d1ae96514baa3334c713b5beb2dcb3e6dd3a38c54695b"} Nov 29 07:10:01 crc kubenswrapper[4731]: I1129 07:10:01.051093 4731 generic.go:334] "Generic (PLEG): container finished" podID="ee11152f-267c-4a04-bd4b-84eec0eff00e" containerID="4faede216427633422a99a69172064107f9432bf82d4f97e323e125f262b33ed" exitCode=0 Nov 29 07:10:01 crc kubenswrapper[4731]: I1129 07:10:01.051131 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gv68n" event={"ID":"ee11152f-267c-4a04-bd4b-84eec0eff00e","Type":"ContainerDied","Data":"4faede216427633422a99a69172064107f9432bf82d4f97e323e125f262b33ed"} Nov 29 07:10:01 crc kubenswrapper[4731]: I1129 07:10:01.055260 4731 generic.go:334] "Generic (PLEG): container finished" podID="d246bdda-5a16-4924-a12a-b29095474226" containerID="346ce47a61c016901108d17b13d5d00f9c8cfe8dcfb8a19a4edc7f79ec44a7e6" exitCode=0 Nov 29 07:10:01 crc kubenswrapper[4731]: I1129 07:10:01.055364 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hv85m" event={"ID":"d246bdda-5a16-4924-a12a-b29095474226","Type":"ContainerDied","Data":"346ce47a61c016901108d17b13d5d00f9c8cfe8dcfb8a19a4edc7f79ec44a7e6"} Nov 29 07:10:01 crc 
kubenswrapper[4731]: I1129 07:10:01.080541 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-c9bpb" podStartSLOduration=2.726329677 podStartE2EDuration="1m15.080505018s" podCreationTimestamp="2025-11-29 07:08:46 +0000 UTC" firstStartedPulling="2025-11-29 07:08:48.204117519 +0000 UTC m=+167.094478632" lastFinishedPulling="2025-11-29 07:10:00.55829287 +0000 UTC m=+239.448653973" observedRunningTime="2025-11-29 07:10:01.078145703 +0000 UTC m=+239.968506806" watchObservedRunningTime="2025-11-29 07:10:01.080505018 +0000 UTC m=+239.970866131" Nov 29 07:10:02 crc kubenswrapper[4731]: I1129 07:10:02.067596 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gv68n" event={"ID":"ee11152f-267c-4a04-bd4b-84eec0eff00e","Type":"ContainerStarted","Data":"e9476009f63efc74acb040cdbc3c0876ae694febdc7c2d749a6da397bc5cea6d"} Nov 29 07:10:03 crc kubenswrapper[4731]: I1129 07:10:03.096337 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-gv68n" podStartSLOduration=3.874984696 podStartE2EDuration="1m14.096315994s" podCreationTimestamp="2025-11-29 07:08:49 +0000 UTC" firstStartedPulling="2025-11-29 07:08:51.369696436 +0000 UTC m=+170.260057549" lastFinishedPulling="2025-11-29 07:10:01.591027744 +0000 UTC m=+240.481388847" observedRunningTime="2025-11-29 07:10:03.095140602 +0000 UTC m=+241.985501705" watchObservedRunningTime="2025-11-29 07:10:03.096315994 +0000 UTC m=+241.986677097" Nov 29 07:10:04 crc kubenswrapper[4731]: I1129 07:10:04.085087 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hv85m" event={"ID":"d246bdda-5a16-4924-a12a-b29095474226","Type":"ContainerStarted","Data":"c516a9e4500175b36ea9024d71e7ef1390844dfa3caaf1afac55231644de616f"} Nov 29 07:10:04 crc kubenswrapper[4731]: I1129 07:10:04.109782 4731 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openshift-marketplace/redhat-operators-hv85m" podStartSLOduration=4.125394938 podStartE2EDuration="1m15.10976224s" podCreationTimestamp="2025-11-29 07:08:49 +0000 UTC" firstStartedPulling="2025-11-29 07:08:51.288031969 +0000 UTC m=+170.178393072" lastFinishedPulling="2025-11-29 07:10:02.272399271 +0000 UTC m=+241.162760374" observedRunningTime="2025-11-29 07:10:04.106926832 +0000 UTC m=+242.997287955" watchObservedRunningTime="2025-11-29 07:10:04.10976224 +0000 UTC m=+243.000123343" Nov 29 07:10:06 crc kubenswrapper[4731]: I1129 07:10:06.311811 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-n9x6g" Nov 29 07:10:06 crc kubenswrapper[4731]: I1129 07:10:06.312906 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-n9x6g" Nov 29 07:10:06 crc kubenswrapper[4731]: I1129 07:10:06.515415 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-kp4gj" Nov 29 07:10:06 crc kubenswrapper[4731]: I1129 07:10:06.515512 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-kp4gj" Nov 29 07:10:06 crc kubenswrapper[4731]: I1129 07:10:06.558241 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-kp4gj" Nov 29 07:10:06 crc kubenswrapper[4731]: I1129 07:10:06.560823 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-n9x6g" Nov 29 07:10:06 crc kubenswrapper[4731]: I1129 07:10:06.741340 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-c9bpb" Nov 29 07:10:06 crc kubenswrapper[4731]: I1129 07:10:06.741429 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/community-operators-c9bpb" Nov 29 07:10:06 crc kubenswrapper[4731]: I1129 07:10:06.787593 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-c9bpb" Nov 29 07:10:07 crc kubenswrapper[4731]: I1129 07:10:07.130592 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-stkcw" Nov 29 07:10:07 crc kubenswrapper[4731]: I1129 07:10:07.130640 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-stkcw" Nov 29 07:10:07 crc kubenswrapper[4731]: I1129 07:10:07.160807 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-n9x6g" Nov 29 07:10:07 crc kubenswrapper[4731]: I1129 07:10:07.280514 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-c9bpb" Nov 29 07:10:07 crc kubenswrapper[4731]: I1129 07:10:07.280647 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-kp4gj" Nov 29 07:10:07 crc kubenswrapper[4731]: I1129 07:10:07.734090 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-stkcw" Nov 29 07:10:07 crc kubenswrapper[4731]: I1129 07:10:07.791471 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-qg27s"] Nov 29 07:10:08 crc kubenswrapper[4731]: I1129 07:10:08.260647 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-stkcw" Nov 29 07:10:08 crc kubenswrapper[4731]: I1129 07:10:08.471810 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-kjmcw" Nov 29 07:10:08 crc kubenswrapper[4731]: I1129 
07:10:08.471890 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-kjmcw" Nov 29 07:10:08 crc kubenswrapper[4731]: I1129 07:10:08.518587 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-kjmcw" Nov 29 07:10:08 crc kubenswrapper[4731]: I1129 07:10:08.870633 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-c9bpb"] Nov 29 07:10:09 crc kubenswrapper[4731]: I1129 07:10:09.128619 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-c9bpb" podUID="84b54257-ab5f-4f89-8ff2-5f725c4b8662" containerName="registry-server" containerID="cri-o://c7acbc92316ef74b120d1ae96514baa3334c713b5beb2dcb3e6dd3a38c54695b" gracePeriod=2 Nov 29 07:10:09 crc kubenswrapper[4731]: I1129 07:10:09.190381 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-kjmcw" Nov 29 07:10:09 crc kubenswrapper[4731]: I1129 07:10:09.502955 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hv85m" Nov 29 07:10:09 crc kubenswrapper[4731]: I1129 07:10:09.503483 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hv85m" Nov 29 07:10:09 crc kubenswrapper[4731]: I1129 07:10:09.551341 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-c9bpb" Nov 29 07:10:09 crc kubenswrapper[4731]: I1129 07:10:09.695186 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84b54257-ab5f-4f89-8ff2-5f725c4b8662-utilities\") pod \"84b54257-ab5f-4f89-8ff2-5f725c4b8662\" (UID: \"84b54257-ab5f-4f89-8ff2-5f725c4b8662\") " Nov 29 07:10:09 crc kubenswrapper[4731]: I1129 07:10:09.695341 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ssxhl\" (UniqueName: \"kubernetes.io/projected/84b54257-ab5f-4f89-8ff2-5f725c4b8662-kube-api-access-ssxhl\") pod \"84b54257-ab5f-4f89-8ff2-5f725c4b8662\" (UID: \"84b54257-ab5f-4f89-8ff2-5f725c4b8662\") " Nov 29 07:10:09 crc kubenswrapper[4731]: I1129 07:10:09.695402 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84b54257-ab5f-4f89-8ff2-5f725c4b8662-catalog-content\") pod \"84b54257-ab5f-4f89-8ff2-5f725c4b8662\" (UID: \"84b54257-ab5f-4f89-8ff2-5f725c4b8662\") " Nov 29 07:10:09 crc kubenswrapper[4731]: I1129 07:10:09.696257 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84b54257-ab5f-4f89-8ff2-5f725c4b8662-utilities" (OuterVolumeSpecName: "utilities") pod "84b54257-ab5f-4f89-8ff2-5f725c4b8662" (UID: "84b54257-ab5f-4f89-8ff2-5f725c4b8662"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:10:09 crc kubenswrapper[4731]: I1129 07:10:09.703122 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84b54257-ab5f-4f89-8ff2-5f725c4b8662-kube-api-access-ssxhl" (OuterVolumeSpecName: "kube-api-access-ssxhl") pod "84b54257-ab5f-4f89-8ff2-5f725c4b8662" (UID: "84b54257-ab5f-4f89-8ff2-5f725c4b8662"). InnerVolumeSpecName "kube-api-access-ssxhl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:10:09 crc kubenswrapper[4731]: I1129 07:10:09.764974 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84b54257-ab5f-4f89-8ff2-5f725c4b8662-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "84b54257-ab5f-4f89-8ff2-5f725c4b8662" (UID: "84b54257-ab5f-4f89-8ff2-5f725c4b8662"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:10:09 crc kubenswrapper[4731]: I1129 07:10:09.797051 4731 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84b54257-ab5f-4f89-8ff2-5f725c4b8662-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 07:10:09 crc kubenswrapper[4731]: I1129 07:10:09.797122 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ssxhl\" (UniqueName: \"kubernetes.io/projected/84b54257-ab5f-4f89-8ff2-5f725c4b8662-kube-api-access-ssxhl\") on node \"crc\" DevicePath \"\"" Nov 29 07:10:09 crc kubenswrapper[4731]: I1129 07:10:09.797137 4731 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84b54257-ab5f-4f89-8ff2-5f725c4b8662-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 07:10:09 crc kubenswrapper[4731]: I1129 07:10:09.869465 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-stkcw"] Nov 29 07:10:10 crc kubenswrapper[4731]: I1129 07:10:10.056417 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-gv68n" Nov 29 07:10:10 crc kubenswrapper[4731]: I1129 07:10:10.057086 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gv68n" Nov 29 07:10:10 crc kubenswrapper[4731]: I1129 07:10:10.083370 4731 kubelet.go:2421] "SyncLoop ADD" source="file" 
pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Nov 29 07:10:10 crc kubenswrapper[4731]: E1129 07:10:10.083711 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84b54257-ab5f-4f89-8ff2-5f725c4b8662" containerName="registry-server" Nov 29 07:10:10 crc kubenswrapper[4731]: I1129 07:10:10.083743 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="84b54257-ab5f-4f89-8ff2-5f725c4b8662" containerName="registry-server" Nov 29 07:10:10 crc kubenswrapper[4731]: E1129 07:10:10.083759 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84b54257-ab5f-4f89-8ff2-5f725c4b8662" containerName="extract-content" Nov 29 07:10:10 crc kubenswrapper[4731]: I1129 07:10:10.083765 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="84b54257-ab5f-4f89-8ff2-5f725c4b8662" containerName="extract-content" Nov 29 07:10:10 crc kubenswrapper[4731]: E1129 07:10:10.083775 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddd91825-ce67-48e7-8c8c-fcd73c025703" containerName="registry-server" Nov 29 07:10:10 crc kubenswrapper[4731]: I1129 07:10:10.083781 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddd91825-ce67-48e7-8c8c-fcd73c025703" containerName="registry-server" Nov 29 07:10:10 crc kubenswrapper[4731]: E1129 07:10:10.083790 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84b54257-ab5f-4f89-8ff2-5f725c4b8662" containerName="extract-utilities" Nov 29 07:10:10 crc kubenswrapper[4731]: I1129 07:10:10.083796 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="84b54257-ab5f-4f89-8ff2-5f725c4b8662" containerName="extract-utilities" Nov 29 07:10:10 crc kubenswrapper[4731]: E1129 07:10:10.083826 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddd91825-ce67-48e7-8c8c-fcd73c025703" containerName="extract-content" Nov 29 07:10:10 crc kubenswrapper[4731]: I1129 07:10:10.083834 4731 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="ddd91825-ce67-48e7-8c8c-fcd73c025703" containerName="extract-content" Nov 29 07:10:10 crc kubenswrapper[4731]: E1129 07:10:10.083842 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddd91825-ce67-48e7-8c8c-fcd73c025703" containerName="extract-utilities" Nov 29 07:10:10 crc kubenswrapper[4731]: I1129 07:10:10.083848 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddd91825-ce67-48e7-8c8c-fcd73c025703" containerName="extract-utilities" Nov 29 07:10:10 crc kubenswrapper[4731]: E1129 07:10:10.083855 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ee8c954-2d17-4f01-9588-2849b4bb7bf0" containerName="pruner" Nov 29 07:10:10 crc kubenswrapper[4731]: I1129 07:10:10.083864 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ee8c954-2d17-4f01-9588-2849b4bb7bf0" containerName="pruner" Nov 29 07:10:10 crc kubenswrapper[4731]: I1129 07:10:10.083998 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ee8c954-2d17-4f01-9588-2849b4bb7bf0" containerName="pruner" Nov 29 07:10:10 crc kubenswrapper[4731]: I1129 07:10:10.084015 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="ddd91825-ce67-48e7-8c8c-fcd73c025703" containerName="registry-server" Nov 29 07:10:10 crc kubenswrapper[4731]: I1129 07:10:10.084028 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="84b54257-ab5f-4f89-8ff2-5f725c4b8662" containerName="registry-server" Nov 29 07:10:10 crc kubenswrapper[4731]: I1129 07:10:10.084483 4731 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 29 07:10:10 crc kubenswrapper[4731]: I1129 07:10:10.084738 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Nov 29 07:10:10 crc kubenswrapper[4731]: I1129 07:10:10.084852 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://c0b6a87bb787bc87ec1a2b21918e754bda6b0f197bfdae20b010e8464cb9e70d" gracePeriod=15
Nov 29 07:10:10 crc kubenswrapper[4731]: I1129 07:10:10.084878 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://ea4dcd3b0a9fda14855ec9e54caa59fbbd8953c958ad8d0abc6ad50d8a9143e5" gracePeriod=15
Nov 29 07:10:10 crc kubenswrapper[4731]: I1129 07:10:10.085012 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://1277bda1e922e493f1c36b64a4584458ab41cee4b1930d2d6409a585be9e3b38" gracePeriod=15
Nov 29 07:10:10 crc kubenswrapper[4731]: I1129 07:10:10.084970 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://1938ff140a7af83ef53896ed4482dd3cf0a5429675ca28291de0915b8da27e81" gracePeriod=15
Nov 29 07:10:10 crc kubenswrapper[4731]: I1129 07:10:10.085013 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://b19f3d5d17a1f3552b659627294f3a47d667e2da2d3ee71e44fc876614f2f5a9" gracePeriod=15
Nov 29 07:10:10 crc kubenswrapper[4731]: I1129 07:10:10.086003 4731 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Nov 29 07:10:10 crc kubenswrapper[4731]: E1129 07:10:10.086314 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Nov 29 07:10:10 crc kubenswrapper[4731]: I1129 07:10:10.086341 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Nov 29 07:10:10 crc kubenswrapper[4731]: E1129 07:10:10.086355 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Nov 29 07:10:10 crc kubenswrapper[4731]: I1129 07:10:10.086365 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Nov 29 07:10:10 crc kubenswrapper[4731]: E1129 07:10:10.086382 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Nov 29 07:10:10 crc kubenswrapper[4731]: I1129 07:10:10.086390 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Nov 29 07:10:10 crc kubenswrapper[4731]: E1129 07:10:10.086409 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Nov 29 07:10:10 crc kubenswrapper[4731]: I1129 07:10:10.086419 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Nov 29 07:10:10 crc kubenswrapper[4731]: E1129 07:10:10.086435 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Nov 29 07:10:10 crc kubenswrapper[4731]: I1129 07:10:10.086445 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Nov 29 07:10:10 crc kubenswrapper[4731]: E1129 07:10:10.086460 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup"
Nov 29 07:10:10 crc kubenswrapper[4731]: I1129 07:10:10.086468 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup"
Nov 29 07:10:10 crc kubenswrapper[4731]: I1129 07:10:10.086628 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Nov 29 07:10:10 crc kubenswrapper[4731]: I1129 07:10:10.086645 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Nov 29 07:10:10 crc kubenswrapper[4731]: I1129 07:10:10.086656 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Nov 29 07:10:10 crc kubenswrapper[4731]: I1129 07:10:10.086670 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Nov 29 07:10:10 crc kubenswrapper[4731]: I1129 07:10:10.086683 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Nov 29 07:10:10 crc kubenswrapper[4731]: E1129 07:10:10.086815 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Nov 29 07:10:10 crc kubenswrapper[4731]: I1129 07:10:10.086826 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Nov 29 07:10:10 crc kubenswrapper[4731]: I1129 07:10:10.086935 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Nov 29 07:10:10 crc kubenswrapper[4731]: I1129 07:10:10.106415 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-gv68n"
Nov 29 07:10:10 crc kubenswrapper[4731]: I1129 07:10:10.114013 4731 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13"
Nov 29 07:10:10 crc kubenswrapper[4731]: I1129 07:10:10.122758 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Nov 29 07:10:10 crc kubenswrapper[4731]: I1129 07:10:10.137071 4731 generic.go:334] "Generic (PLEG): container finished" podID="84b54257-ab5f-4f89-8ff2-5f725c4b8662" containerID="c7acbc92316ef74b120d1ae96514baa3334c713b5beb2dcb3e6dd3a38c54695b" exitCode=0
Nov 29 07:10:10 crc kubenswrapper[4731]: I1129 07:10:10.137171 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c9bpb"
Nov 29 07:10:10 crc kubenswrapper[4731]: I1129 07:10:10.137183 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c9bpb" event={"ID":"84b54257-ab5f-4f89-8ff2-5f725c4b8662","Type":"ContainerDied","Data":"c7acbc92316ef74b120d1ae96514baa3334c713b5beb2dcb3e6dd3a38c54695b"}
Nov 29 07:10:10 crc kubenswrapper[4731]: I1129 07:10:10.137254 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c9bpb" event={"ID":"84b54257-ab5f-4f89-8ff2-5f725c4b8662","Type":"ContainerDied","Data":"417e308722bf9d56f5e1722706af81db5ebab876736ba8407c3e6286681f05fd"}
Nov 29 07:10:10 crc kubenswrapper[4731]: I1129 07:10:10.137279 4731 scope.go:117] "RemoveContainer" containerID="c7acbc92316ef74b120d1ae96514baa3334c713b5beb2dcb3e6dd3a38c54695b"
Nov 29 07:10:10 crc kubenswrapper[4731]: I1129 07:10:10.138101 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-stkcw" podUID="7cde9a9c-1d79-4400-8830-69f304229886" containerName="registry-server" containerID="cri-o://fc72c94a75ec57cc2ffaa5e277bd9a0d9b4ce29eb90d9993ce135561c05bf369" gracePeriod=2
Nov 29 07:10:10 crc kubenswrapper[4731]: E1129 07:10:10.139077 4731 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.129.56.57:6443: connect: connection refused" event="&Event{ObjectMeta:{certified-operators-stkcw.187c68a496e6f848 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:certified-operators-stkcw,UID:7cde9a9c-1d79-4400-8830-69f304229886,APIVersion:v1,ResourceVersion:28371,FieldPath:spec.containers{registry-server},},Reason:Killing,Message:Stopping container registry-server,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-29 07:10:10.138036296 +0000 UTC m=+249.028397399,LastTimestamp:2025-11-29 07:10:10.138036296 +0000 UTC m=+249.028397399,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Nov 29 07:10:10 crc kubenswrapper[4731]: I1129 07:10:10.156390 4731 scope.go:117] "RemoveContainer" containerID="4396488b625703a3fc37e41ce2773aa75a6a5269f97e775b0bc814439cd50171"
Nov 29 07:10:10 crc kubenswrapper[4731]: I1129 07:10:10.187755 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-gv68n"
Nov 29 07:10:10 crc kubenswrapper[4731]: I1129 07:10:10.204991 4731 scope.go:117] "RemoveContainer" containerID="88921294b457c4f2b476eaa08fdb2f7d2470e964e4fc35409348b2131de46ca7"
Nov 29 07:10:10 crc kubenswrapper[4731]: I1129 07:10:10.216410 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 29 07:10:10 crc kubenswrapper[4731]: I1129 07:10:10.216483 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 29 07:10:10 crc kubenswrapper[4731]: I1129 07:10:10.216661 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Nov 29 07:10:10 crc kubenswrapper[4731]: I1129 07:10:10.216878 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Nov 29 07:10:10 crc kubenswrapper[4731]: I1129 07:10:10.216970 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Nov 29 07:10:10 crc kubenswrapper[4731]: I1129 07:10:10.217042 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 29 07:10:10 crc kubenswrapper[4731]: I1129 07:10:10.217219 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Nov 29 07:10:10 crc kubenswrapper[4731]: I1129 07:10:10.217280 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Nov 29 07:10:10 crc kubenswrapper[4731]: I1129 07:10:10.243951 4731 scope.go:117] "RemoveContainer" containerID="c7acbc92316ef74b120d1ae96514baa3334c713b5beb2dcb3e6dd3a38c54695b"
Nov 29 07:10:10 crc kubenswrapper[4731]: E1129 07:10:10.244777 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7acbc92316ef74b120d1ae96514baa3334c713b5beb2dcb3e6dd3a38c54695b\": container with ID starting with c7acbc92316ef74b120d1ae96514baa3334c713b5beb2dcb3e6dd3a38c54695b not found: ID does not exist" containerID="c7acbc92316ef74b120d1ae96514baa3334c713b5beb2dcb3e6dd3a38c54695b"
Nov 29 07:10:10 crc kubenswrapper[4731]: I1129 07:10:10.244841 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7acbc92316ef74b120d1ae96514baa3334c713b5beb2dcb3e6dd3a38c54695b"} err="failed to get container status \"c7acbc92316ef74b120d1ae96514baa3334c713b5beb2dcb3e6dd3a38c54695b\": rpc error: code = NotFound desc = could not find container \"c7acbc92316ef74b120d1ae96514baa3334c713b5beb2dcb3e6dd3a38c54695b\": container with ID starting with c7acbc92316ef74b120d1ae96514baa3334c713b5beb2dcb3e6dd3a38c54695b not found: ID does not exist"
Nov 29 07:10:10 crc kubenswrapper[4731]: I1129 07:10:10.244877 4731 scope.go:117] "RemoveContainer" containerID="4396488b625703a3fc37e41ce2773aa75a6a5269f97e775b0bc814439cd50171"
Nov 29 07:10:10 crc kubenswrapper[4731]: E1129 07:10:10.245413 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4396488b625703a3fc37e41ce2773aa75a6a5269f97e775b0bc814439cd50171\": container with ID starting with 4396488b625703a3fc37e41ce2773aa75a6a5269f97e775b0bc814439cd50171 not found: ID does not exist" containerID="4396488b625703a3fc37e41ce2773aa75a6a5269f97e775b0bc814439cd50171"
Nov 29 07:10:10 crc kubenswrapper[4731]: I1129 07:10:10.245442 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4396488b625703a3fc37e41ce2773aa75a6a5269f97e775b0bc814439cd50171"} err="failed to get container status \"4396488b625703a3fc37e41ce2773aa75a6a5269f97e775b0bc814439cd50171\": rpc error: code = NotFound desc = could not find container \"4396488b625703a3fc37e41ce2773aa75a6a5269f97e775b0bc814439cd50171\": container with ID starting with 4396488b625703a3fc37e41ce2773aa75a6a5269f97e775b0bc814439cd50171 not found: ID does not exist"
Nov 29 07:10:10 crc kubenswrapper[4731]: I1129 07:10:10.245462 4731 scope.go:117] "RemoveContainer" containerID="88921294b457c4f2b476eaa08fdb2f7d2470e964e4fc35409348b2131de46ca7"
Nov 29 07:10:10 crc kubenswrapper[4731]: E1129 07:10:10.245818 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88921294b457c4f2b476eaa08fdb2f7d2470e964e4fc35409348b2131de46ca7\": container with ID starting with 88921294b457c4f2b476eaa08fdb2f7d2470e964e4fc35409348b2131de46ca7 not found: ID does not exist" containerID="88921294b457c4f2b476eaa08fdb2f7d2470e964e4fc35409348b2131de46ca7"
Nov 29 07:10:10 crc kubenswrapper[4731]: I1129 07:10:10.245848 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88921294b457c4f2b476eaa08fdb2f7d2470e964e4fc35409348b2131de46ca7"} err="failed to get container status \"88921294b457c4f2b476eaa08fdb2f7d2470e964e4fc35409348b2131de46ca7\": rpc error: code = NotFound desc = could not find container \"88921294b457c4f2b476eaa08fdb2f7d2470e964e4fc35409348b2131de46ca7\": container with ID starting with 88921294b457c4f2b476eaa08fdb2f7d2470e964e4fc35409348b2131de46ca7 not found: ID does not exist"
Nov 29 07:10:10 crc kubenswrapper[4731]: I1129 07:10:10.319872 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Nov 29 07:10:10 crc kubenswrapper[4731]: I1129 07:10:10.320049 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Nov 29 07:10:10 crc kubenswrapper[4731]: I1129 07:10:10.320078 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 29 07:10:10 crc kubenswrapper[4731]: I1129 07:10:10.320143 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Nov 29 07:10:10 crc kubenswrapper[4731]: I1129 07:10:10.320163 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Nov 29 07:10:10 crc kubenswrapper[4731]: I1129 07:10:10.320194 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 29 07:10:10 crc kubenswrapper[4731]: I1129 07:10:10.320224 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 29 07:10:10 crc kubenswrapper[4731]: I1129 07:10:10.320256 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Nov 29 07:10:10 crc kubenswrapper[4731]: I1129 07:10:10.320310 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Nov 29 07:10:10 crc kubenswrapper[4731]: I1129 07:10:10.319959 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Nov 29 07:10:10 crc kubenswrapper[4731]: I1129 07:10:10.320367 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Nov 29 07:10:10 crc kubenswrapper[4731]: I1129 07:10:10.320392 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 29 07:10:10 crc kubenswrapper[4731]: I1129 07:10:10.320854 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 29 07:10:10 crc kubenswrapper[4731]: I1129 07:10:10.320894 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Nov 29 07:10:10 crc kubenswrapper[4731]: I1129 07:10:10.320927 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 29 07:10:10 crc kubenswrapper[4731]: I1129 07:10:10.320946 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Nov 29 07:10:10 crc kubenswrapper[4731]: I1129 07:10:10.418942 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Nov 29 07:10:10 crc kubenswrapper[4731]: I1129 07:10:10.547750 4731 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hv85m" podUID="d246bdda-5a16-4924-a12a-b29095474226" containerName="registry-server" probeResult="failure" output=<
Nov 29 07:10:10 crc kubenswrapper[4731]: timeout: failed to connect service ":50051" within 1s
Nov 29 07:10:10 crc kubenswrapper[4731]: >
Nov 29 07:10:11 crc kubenswrapper[4731]: I1129 07:10:11.142510 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"054a2fc89334606136fe4bfd55659aeda4fb0e1ac962c4280269535a0ec21bfd"}
Nov 29 07:10:11 crc kubenswrapper[4731]: I1129 07:10:11.156154 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Nov 29 07:10:11 crc kubenswrapper[4731]: I1129 07:10:11.162184 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Nov 29 07:10:11 crc kubenswrapper[4731]: I1129 07:10:11.163207 4731 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ea4dcd3b0a9fda14855ec9e54caa59fbbd8953c958ad8d0abc6ad50d8a9143e5" exitCode=0
Nov 29 07:10:11 crc kubenswrapper[4731]: I1129 07:10:11.163245 4731 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="1277bda1e922e493f1c36b64a4584458ab41cee4b1930d2d6409a585be9e3b38" exitCode=0
Nov 29 07:10:11 crc kubenswrapper[4731]: I1129 07:10:11.163255 4731 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="1938ff140a7af83ef53896ed4482dd3cf0a5429675ca28291de0915b8da27e81" exitCode=0
Nov 29 07:10:11 crc kubenswrapper[4731]: I1129 07:10:11.163265 4731 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b19f3d5d17a1f3552b659627294f3a47d667e2da2d3ee71e44fc876614f2f5a9" exitCode=2
Nov 29 07:10:11 crc kubenswrapper[4731]: I1129 07:10:11.163352 4731 scope.go:117] "RemoveContainer" containerID="1f488e951e0b022155c26f1b4b73363e8e5b82ab6f14e03f9812fe279ff21d36"
Nov 29 07:10:11 crc kubenswrapper[4731]: I1129 07:10:11.168156 4731 generic.go:334] "Generic (PLEG): container finished" podID="80df65af-cffa-42d9-b609-7e90950979e2" containerID="518bec76116451f6aeb719dfd9a574a594bd5fbfe1a946e3347530166803951f" exitCode=0
Nov 29 07:10:11 crc kubenswrapper[4731]: I1129 07:10:11.168257 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"80df65af-cffa-42d9-b609-7e90950979e2","Type":"ContainerDied","Data":"518bec76116451f6aeb719dfd9a574a594bd5fbfe1a946e3347530166803951f"}
Nov 29 07:10:11 crc kubenswrapper[4731]: I1129 07:10:11.171477 4731 generic.go:334] "Generic (PLEG): container finished" podID="7cde9a9c-1d79-4400-8830-69f304229886" containerID="fc72c94a75ec57cc2ffaa5e277bd9a0d9b4ce29eb90d9993ce135561c05bf369" exitCode=0
Nov 29 07:10:11 crc kubenswrapper[4731]: I1129 07:10:11.171557 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-stkcw" event={"ID":"7cde9a9c-1d79-4400-8830-69f304229886","Type":"ContainerDied","Data":"fc72c94a75ec57cc2ffaa5e277bd9a0d9b4ce29eb90d9993ce135561c05bf369"}
Nov 29 07:10:12 crc kubenswrapper[4731]: I1129 07:10:12.239033 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"0a05aae236484010ce3b2cc3d0b2416d6751f94f6c50351b19102354e69610c3"}
Nov 29 07:10:12 crc kubenswrapper[4731]: I1129 07:10:12.550515 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Nov 29 07:10:12 crc kubenswrapper[4731]: I1129 07:10:12.557655 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-stkcw"
Nov 29 07:10:12 crc kubenswrapper[4731]: I1129 07:10:12.657302 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/80df65af-cffa-42d9-b609-7e90950979e2-var-lock\") pod \"80df65af-cffa-42d9-b609-7e90950979e2\" (UID: \"80df65af-cffa-42d9-b609-7e90950979e2\") "
Nov 29 07:10:12 crc kubenswrapper[4731]: I1129 07:10:12.657798 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7cde9a9c-1d79-4400-8830-69f304229886-utilities\") pod \"7cde9a9c-1d79-4400-8830-69f304229886\" (UID: \"7cde9a9c-1d79-4400-8830-69f304229886\") "
Nov 29 07:10:12 crc kubenswrapper[4731]: I1129 07:10:12.658007 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/80df65af-cffa-42d9-b609-7e90950979e2-kubelet-dir\") pod \"80df65af-cffa-42d9-b609-7e90950979e2\" (UID: \"80df65af-cffa-42d9-b609-7e90950979e2\") "
Nov 29 07:10:12 crc kubenswrapper[4731]: I1129 07:10:12.658139 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/80df65af-cffa-42d9-b609-7e90950979e2-kube-api-access\") pod \"80df65af-cffa-42d9-b609-7e90950979e2\" (UID: \"80df65af-cffa-42d9-b609-7e90950979e2\") "
Nov 29 07:10:12 crc kubenswrapper[4731]: I1129 07:10:12.657503 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80df65af-cffa-42d9-b609-7e90950979e2-var-lock" (OuterVolumeSpecName: "var-lock") pod "80df65af-cffa-42d9-b609-7e90950979e2" (UID: "80df65af-cffa-42d9-b609-7e90950979e2"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 29 07:10:12 crc kubenswrapper[4731]: I1129 07:10:12.658072 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80df65af-cffa-42d9-b609-7e90950979e2-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "80df65af-cffa-42d9-b609-7e90950979e2" (UID: "80df65af-cffa-42d9-b609-7e90950979e2"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 29 07:10:12 crc kubenswrapper[4731]: I1129 07:10:12.658359 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7cde9a9c-1d79-4400-8830-69f304229886-catalog-content\") pod \"7cde9a9c-1d79-4400-8830-69f304229886\" (UID: \"7cde9a9c-1d79-4400-8830-69f304229886\") "
Nov 29 07:10:12 crc kubenswrapper[4731]: I1129 07:10:12.658655 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gtcwv\" (UniqueName: \"kubernetes.io/projected/7cde9a9c-1d79-4400-8830-69f304229886-kube-api-access-gtcwv\") pod \"7cde9a9c-1d79-4400-8830-69f304229886\" (UID: \"7cde9a9c-1d79-4400-8830-69f304229886\") "
Nov 29 07:10:12 crc kubenswrapper[4731]: I1129 07:10:12.659493 4731 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/80df65af-cffa-42d9-b609-7e90950979e2-var-lock\") on node \"crc\" DevicePath \"\""
Nov 29 07:10:12 crc kubenswrapper[4731]: I1129 07:10:12.659049 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7cde9a9c-1d79-4400-8830-69f304229886-utilities" (OuterVolumeSpecName: "utilities") pod "7cde9a9c-1d79-4400-8830-69f304229886" (UID: "7cde9a9c-1d79-4400-8830-69f304229886"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 29 07:10:12 crc kubenswrapper[4731]: I1129 07:10:12.666033 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80df65af-cffa-42d9-b609-7e90950979e2-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "80df65af-cffa-42d9-b609-7e90950979e2" (UID: "80df65af-cffa-42d9-b609-7e90950979e2"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 29 07:10:12 crc kubenswrapper[4731]: I1129 07:10:12.666099 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cde9a9c-1d79-4400-8830-69f304229886-kube-api-access-gtcwv" (OuterVolumeSpecName: "kube-api-access-gtcwv") pod "7cde9a9c-1d79-4400-8830-69f304229886" (UID: "7cde9a9c-1d79-4400-8830-69f304229886"). InnerVolumeSpecName "kube-api-access-gtcwv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 29 07:10:12 crc kubenswrapper[4731]: I1129 07:10:12.707982 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7cde9a9c-1d79-4400-8830-69f304229886-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7cde9a9c-1d79-4400-8830-69f304229886" (UID: "7cde9a9c-1d79-4400-8830-69f304229886"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 29 07:10:12 crc kubenswrapper[4731]: I1129 07:10:12.761456 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gtcwv\" (UniqueName: \"kubernetes.io/projected/7cde9a9c-1d79-4400-8830-69f304229886-kube-api-access-gtcwv\") on node \"crc\" DevicePath \"\""
Nov 29 07:10:12 crc kubenswrapper[4731]: I1129 07:10:12.761507 4731 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7cde9a9c-1d79-4400-8830-69f304229886-utilities\") on node \"crc\" DevicePath \"\""
Nov 29 07:10:12 crc kubenswrapper[4731]: I1129 07:10:12.761519 4731 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/80df65af-cffa-42d9-b609-7e90950979e2-kubelet-dir\") on node \"crc\" DevicePath \"\""
Nov 29 07:10:12 crc kubenswrapper[4731]: I1129 07:10:12.761535 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/80df65af-cffa-42d9-b609-7e90950979e2-kube-api-access\") on node \"crc\" DevicePath \"\""
Nov 29 07:10:12 crc kubenswrapper[4731]: I1129 07:10:12.761544 4731 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7cde9a9c-1d79-4400-8830-69f304229886-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 29 07:10:13 crc kubenswrapper[4731]: I1129 07:10:13.245816 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Nov 29 07:10:13 crc kubenswrapper[4731]: I1129 07:10:13.245829 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"80df65af-cffa-42d9-b609-7e90950979e2","Type":"ContainerDied","Data":"b89add525edcbe3587451e63dd0a534f55418494f7a0ab1cc6f93f869bc97f95"}
Nov 29 07:10:13 crc kubenswrapper[4731]: I1129 07:10:13.245884 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b89add525edcbe3587451e63dd0a534f55418494f7a0ab1cc6f93f869bc97f95"
Nov 29 07:10:13 crc kubenswrapper[4731]: I1129 07:10:13.248751 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Nov 29 07:10:13 crc kubenswrapper[4731]: I1129 07:10:13.252409 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-stkcw"
Nov 29 07:10:13 crc kubenswrapper[4731]: I1129 07:10:13.252388 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-stkcw" event={"ID":"7cde9a9c-1d79-4400-8830-69f304229886","Type":"ContainerDied","Data":"b4b478c5197cc2b20dd6f24a356c906b6ef2a2a9cc77f00c57b7cb3174d923ce"}
Nov 29 07:10:13 crc kubenswrapper[4731]: I1129 07:10:13.252797 4731 scope.go:117] "RemoveContainer" containerID="fc72c94a75ec57cc2ffaa5e277bd9a0d9b4ce29eb90d9993ce135561c05bf369"
Nov 29 07:10:13 crc kubenswrapper[4731]: I1129 07:10:13.282488 4731 scope.go:117] "RemoveContainer" containerID="8bcc399cadf9d787e87bf180dfd55d9cf1d12fa61cb04a2207941d4eb253f040"
Nov 29 07:10:13 crc kubenswrapper[4731]: I1129 07:10:13.301146 4731 scope.go:117] "RemoveContainer" containerID="653ae56570d00c98e604b38f4bdb404043ed72c23b370a470128b5c6da68617a"
Nov 29 07:10:14 crc kubenswrapper[4731]: E1129 07:10:14.209749 4731 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.129.56.57:6443: connect: connection refused" event="&Event{ObjectMeta:{certified-operators-stkcw.187c68a496e6f848 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:certified-operators-stkcw,UID:7cde9a9c-1d79-4400-8830-69f304229886,APIVersion:v1,ResourceVersion:28371,FieldPath:spec.containers{registry-server},},Reason:Killing,Message:Stopping container registry-server,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-29 07:10:10.138036296 +0000 UTC m=+249.028397399,LastTimestamp:2025-11-29 07:10:10.138036296 +0000 UTC m=+249.028397399,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Nov 29 07:10:14 crc kubenswrapper[4731]: I1129 07:10:14.262161 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Nov 29 07:10:14 crc kubenswrapper[4731]: I1129 07:10:14.263366 4731 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c0b6a87bb787bc87ec1a2b21918e754bda6b0f197bfdae20b010e8464cb9e70d" exitCode=0
Nov 29 07:10:15 crc kubenswrapper[4731]: I1129 07:10:15.127513 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-gv68n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee11152f-267c-4a04-bd4b-84eec0eff00e\\\"},\\\"status\\\":{\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9476009f63efc74acb040cdbc3c0876ae694febdc7c2d749a6da397bc5cea6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:10:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/extracted-catalog\\\",\\\"name\\\":\\\"catalog-content\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzkzg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-marketplace\"/\"redhat-operators-gv68n\": Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gv68n/status\": dial tcp 38.129.56.57:6443: connect: connection refused"
Nov 29 07:10:15 crc kubenswrapper[4731]: I1129 07:10:15.128083 4731 status_manager.go:851] "Failed to get status for pod" podUID="84b54257-ab5f-4f89-8ff2-5f725c4b8662" pod="openshift-marketplace/community-operators-c9bpb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-c9bpb\": dial tcp 38.129.56.57:6443: connect: connection refused"
Nov 29 07:10:15 crc kubenswrapper[4731]: I1129 07:10:15.128397 4731 status_manager.go:851] "Failed to get status for pod" podUID="ee11152f-267c-4a04-bd4b-84eec0eff00e" pod="openshift-marketplace/redhat-operators-gv68n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gv68n\": dial tcp 
38.129.56.57:6443: connect: connection refused" Nov 29 07:10:15 crc kubenswrapper[4731]: I1129 07:10:15.128736 4731 status_manager.go:851] "Failed to get status for pod" podUID="7cde9a9c-1d79-4400-8830-69f304229886" pod="openshift-marketplace/certified-operators-stkcw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-stkcw\": dial tcp 38.129.56.57:6443: connect: connection refused" Nov 29 07:10:15 crc kubenswrapper[4731]: I1129 07:10:15.128985 4731 status_manager.go:851] "Failed to get status for pod" podUID="80df65af-cffa-42d9-b609-7e90950979e2" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.57:6443: connect: connection refused" Nov 29 07:10:15 crc kubenswrapper[4731]: I1129 07:10:15.129266 4731 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.57:6443: connect: connection refused" Nov 29 07:10:15 crc kubenswrapper[4731]: I1129 07:10:15.136624 4731 status_manager.go:851] "Failed to get status for pod" podUID="7cde9a9c-1d79-4400-8830-69f304229886" pod="openshift-marketplace/certified-operators-stkcw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-stkcw\": dial tcp 38.129.56.57:6443: connect: connection refused" Nov 29 07:10:15 crc kubenswrapper[4731]: I1129 07:10:15.137874 4731 status_manager.go:851] "Failed to get status for pod" podUID="80df65af-cffa-42d9-b609-7e90950979e2" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.57:6443: 
connect: connection refused" Nov 29 07:10:15 crc kubenswrapper[4731]: I1129 07:10:15.138117 4731 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.57:6443: connect: connection refused" Nov 29 07:10:15 crc kubenswrapper[4731]: I1129 07:10:15.138492 4731 status_manager.go:851] "Failed to get status for pod" podUID="84b54257-ab5f-4f89-8ff2-5f725c4b8662" pod="openshift-marketplace/community-operators-c9bpb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-c9bpb\": dial tcp 38.129.56.57:6443: connect: connection refused" Nov 29 07:10:15 crc kubenswrapper[4731]: I1129 07:10:15.138988 4731 status_manager.go:851] "Failed to get status for pod" podUID="ee11152f-267c-4a04-bd4b-84eec0eff00e" pod="openshift-marketplace/redhat-operators-gv68n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gv68n\": dial tcp 38.129.56.57:6443: connect: connection refused" Nov 29 07:10:15 crc kubenswrapper[4731]: I1129 07:10:15.199461 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Nov 29 07:10:15 crc kubenswrapper[4731]: I1129 07:10:15.200763 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:10:15 crc kubenswrapper[4731]: I1129 07:10:15.201313 4731 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.57:6443: connect: connection refused" Nov 29 07:10:15 crc kubenswrapper[4731]: I1129 07:10:15.201533 4731 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.57:6443: connect: connection refused" Nov 29 07:10:15 crc kubenswrapper[4731]: I1129 07:10:15.201969 4731 status_manager.go:851] "Failed to get status for pod" podUID="84b54257-ab5f-4f89-8ff2-5f725c4b8662" pod="openshift-marketplace/community-operators-c9bpb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-c9bpb\": dial tcp 38.129.56.57:6443: connect: connection refused" Nov 29 07:10:15 crc kubenswrapper[4731]: I1129 07:10:15.202900 4731 status_manager.go:851] "Failed to get status for pod" podUID="ee11152f-267c-4a04-bd4b-84eec0eff00e" pod="openshift-marketplace/redhat-operators-gv68n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gv68n\": dial tcp 38.129.56.57:6443: connect: connection refused" Nov 29 07:10:15 crc kubenswrapper[4731]: I1129 07:10:15.203440 4731 status_manager.go:851] "Failed to get status for pod" podUID="7cde9a9c-1d79-4400-8830-69f304229886" pod="openshift-marketplace/certified-operators-stkcw" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-stkcw\": dial tcp 38.129.56.57:6443: connect: connection refused" Nov 29 07:10:15 crc kubenswrapper[4731]: I1129 07:10:15.204060 4731 status_manager.go:851] "Failed to get status for pod" podUID="80df65af-cffa-42d9-b609-7e90950979e2" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.57:6443: connect: connection refused" Nov 29 07:10:15 crc kubenswrapper[4731]: I1129 07:10:15.276306 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Nov 29 07:10:15 crc kubenswrapper[4731]: I1129 07:10:15.277178 4731 scope.go:117] "RemoveContainer" containerID="ea4dcd3b0a9fda14855ec9e54caa59fbbd8953c958ad8d0abc6ad50d8a9143e5" Nov 29 07:10:15 crc kubenswrapper[4731]: I1129 07:10:15.277740 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:10:15 crc kubenswrapper[4731]: I1129 07:10:15.297695 4731 scope.go:117] "RemoveContainer" containerID="1277bda1e922e493f1c36b64a4584458ab41cee4b1930d2d6409a585be9e3b38" Nov 29 07:10:15 crc kubenswrapper[4731]: I1129 07:10:15.318529 4731 scope.go:117] "RemoveContainer" containerID="1938ff140a7af83ef53896ed4482dd3cf0a5429675ca28291de0915b8da27e81" Nov 29 07:10:15 crc kubenswrapper[4731]: I1129 07:10:15.334707 4731 scope.go:117] "RemoveContainer" containerID="b19f3d5d17a1f3552b659627294f3a47d667e2da2d3ee71e44fc876614f2f5a9" Nov 29 07:10:15 crc kubenswrapper[4731]: I1129 07:10:15.348846 4731 scope.go:117] "RemoveContainer" containerID="c0b6a87bb787bc87ec1a2b21918e754bda6b0f197bfdae20b010e8464cb9e70d" Nov 29 07:10:15 crc kubenswrapper[4731]: I1129 07:10:15.367189 4731 scope.go:117] "RemoveContainer" containerID="34b7d71882ecd45557e02dfe61f572da923732714cd4480848340fbe72dabcd8" Nov 29 07:10:15 crc kubenswrapper[4731]: I1129 07:10:15.399925 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Nov 29 07:10:15 crc kubenswrapper[4731]: I1129 07:10:15.400075 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:10:15 crc kubenswrapper[4731]: I1129 07:10:15.400466 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Nov 29 07:10:15 crc kubenswrapper[4731]: I1129 07:10:15.400494 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:10:15 crc kubenswrapper[4731]: I1129 07:10:15.400687 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Nov 29 07:10:15 crc kubenswrapper[4731]: I1129 07:10:15.400766 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:10:15 crc kubenswrapper[4731]: I1129 07:10:15.401128 4731 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Nov 29 07:10:15 crc kubenswrapper[4731]: I1129 07:10:15.401207 4731 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Nov 29 07:10:15 crc kubenswrapper[4731]: I1129 07:10:15.401269 4731 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Nov 29 07:10:15 crc kubenswrapper[4731]: I1129 07:10:15.593741 4731 status_manager.go:851] "Failed to get status for pod" podUID="7cde9a9c-1d79-4400-8830-69f304229886" pod="openshift-marketplace/certified-operators-stkcw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-stkcw\": dial tcp 38.129.56.57:6443: connect: connection refused" Nov 29 07:10:15 crc kubenswrapper[4731]: I1129 07:10:15.594181 4731 status_manager.go:851] "Failed to get status for pod" podUID="80df65af-cffa-42d9-b609-7e90950979e2" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.57:6443: connect: connection refused" Nov 29 07:10:15 crc kubenswrapper[4731]: I1129 07:10:15.594423 4731 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.57:6443: connect: 
connection refused" Nov 29 07:10:15 crc kubenswrapper[4731]: I1129 07:10:15.594802 4731 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.57:6443: connect: connection refused" Nov 29 07:10:15 crc kubenswrapper[4731]: I1129 07:10:15.595240 4731 status_manager.go:851] "Failed to get status for pod" podUID="84b54257-ab5f-4f89-8ff2-5f725c4b8662" pod="openshift-marketplace/community-operators-c9bpb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-c9bpb\": dial tcp 38.129.56.57:6443: connect: connection refused" Nov 29 07:10:15 crc kubenswrapper[4731]: I1129 07:10:15.595618 4731 status_manager.go:851] "Failed to get status for pod" podUID="ee11152f-267c-4a04-bd4b-84eec0eff00e" pod="openshift-marketplace/redhat-operators-gv68n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gv68n\": dial tcp 38.129.56.57:6443: connect: connection refused" Nov 29 07:10:15 crc kubenswrapper[4731]: I1129 07:10:15.814461 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Nov 29 07:10:19 crc kubenswrapper[4731]: I1129 07:10:19.567357 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hv85m" Nov 29 07:10:19 crc kubenswrapper[4731]: I1129 07:10:19.568369 4731 status_manager.go:851] "Failed to get status for pod" podUID="7cde9a9c-1d79-4400-8830-69f304229886" pod="openshift-marketplace/certified-operators-stkcw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-stkcw\": dial tcp 38.129.56.57:6443: connect: 
connection refused" Nov 29 07:10:19 crc kubenswrapper[4731]: I1129 07:10:19.569029 4731 status_manager.go:851] "Failed to get status for pod" podUID="80df65af-cffa-42d9-b609-7e90950979e2" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.57:6443: connect: connection refused" Nov 29 07:10:19 crc kubenswrapper[4731]: I1129 07:10:19.569439 4731 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.57:6443: connect: connection refused" Nov 29 07:10:19 crc kubenswrapper[4731]: I1129 07:10:19.569780 4731 status_manager.go:851] "Failed to get status for pod" podUID="84b54257-ab5f-4f89-8ff2-5f725c4b8662" pod="openshift-marketplace/community-operators-c9bpb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-c9bpb\": dial tcp 38.129.56.57:6443: connect: connection refused" Nov 29 07:10:19 crc kubenswrapper[4731]: I1129 07:10:19.570061 4731 status_manager.go:851] "Failed to get status for pod" podUID="ee11152f-267c-4a04-bd4b-84eec0eff00e" pod="openshift-marketplace/redhat-operators-gv68n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gv68n\": dial tcp 38.129.56.57:6443: connect: connection refused" Nov 29 07:10:19 crc kubenswrapper[4731]: I1129 07:10:19.570344 4731 status_manager.go:851] "Failed to get status for pod" podUID="d246bdda-5a16-4924-a12a-b29095474226" pod="openshift-marketplace/redhat-operators-hv85m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-hv85m\": dial tcp 38.129.56.57:6443: connect: connection refused" 
Nov 29 07:10:19 crc kubenswrapper[4731]: E1129 07:10:19.577670 4731 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.57:6443: connect: connection refused" Nov 29 07:10:19 crc kubenswrapper[4731]: E1129 07:10:19.578192 4731 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.57:6443: connect: connection refused" Nov 29 07:10:19 crc kubenswrapper[4731]: E1129 07:10:19.578861 4731 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.57:6443: connect: connection refused" Nov 29 07:10:19 crc kubenswrapper[4731]: E1129 07:10:19.579699 4731 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.57:6443: connect: connection refused" Nov 29 07:10:19 crc kubenswrapper[4731]: E1129 07:10:19.580312 4731 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.57:6443: connect: connection refused" Nov 29 07:10:19 crc kubenswrapper[4731]: I1129 07:10:19.580349 4731 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Nov 29 07:10:19 crc kubenswrapper[4731]: E1129 07:10:19.580814 4731 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.57:6443: connect: connection refused" interval="200ms" 
Nov 29 07:10:19 crc kubenswrapper[4731]: I1129 07:10:19.611808 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hv85m" Nov 29 07:10:19 crc kubenswrapper[4731]: I1129 07:10:19.612816 4731 status_manager.go:851] "Failed to get status for pod" podUID="84b54257-ab5f-4f89-8ff2-5f725c4b8662" pod="openshift-marketplace/community-operators-c9bpb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-c9bpb\": dial tcp 38.129.56.57:6443: connect: connection refused" Nov 29 07:10:19 crc kubenswrapper[4731]: I1129 07:10:19.613751 4731 status_manager.go:851] "Failed to get status for pod" podUID="ee11152f-267c-4a04-bd4b-84eec0eff00e" pod="openshift-marketplace/redhat-operators-gv68n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gv68n\": dial tcp 38.129.56.57:6443: connect: connection refused" Nov 29 07:10:19 crc kubenswrapper[4731]: I1129 07:10:19.614387 4731 status_manager.go:851] "Failed to get status for pod" podUID="d246bdda-5a16-4924-a12a-b29095474226" pod="openshift-marketplace/redhat-operators-hv85m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-hv85m\": dial tcp 38.129.56.57:6443: connect: connection refused" Nov 29 07:10:19 crc kubenswrapper[4731]: I1129 07:10:19.614795 4731 status_manager.go:851] "Failed to get status for pod" podUID="7cde9a9c-1d79-4400-8830-69f304229886" pod="openshift-marketplace/certified-operators-stkcw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-stkcw\": dial tcp 38.129.56.57:6443: connect: connection refused" Nov 29 07:10:19 crc kubenswrapper[4731]: I1129 07:10:19.615054 4731 status_manager.go:851] "Failed to get status for pod" podUID="80df65af-cffa-42d9-b609-7e90950979e2" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.57:6443: connect: connection refused" Nov 29 07:10:19 crc kubenswrapper[4731]: I1129 07:10:19.615280 4731 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.57:6443: connect: connection refused" Nov 29 07:10:19 crc kubenswrapper[4731]: E1129 07:10:19.781492 4731 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.57:6443: connect: connection refused" interval="400ms" Nov 29 07:10:20 crc kubenswrapper[4731]: E1129 07:10:20.182329 4731 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.57:6443: connect: connection refused" interval="800ms" Nov 29 07:10:20 crc kubenswrapper[4731]: E1129 07:10:20.984274 4731 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.57:6443: connect: connection refused" interval="1.6s" Nov 29 07:10:21 crc kubenswrapper[4731]: I1129 07:10:21.808738 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:10:21 crc kubenswrapper[4731]: I1129 07:10:21.809019 4731 status_manager.go:851] "Failed to get status for pod" podUID="84b54257-ab5f-4f89-8ff2-5f725c4b8662" pod="openshift-marketplace/community-operators-c9bpb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-c9bpb\": dial tcp 38.129.56.57:6443: connect: connection refused" Nov 29 07:10:21 crc kubenswrapper[4731]: I1129 07:10:21.810384 4731 status_manager.go:851] "Failed to get status for pod" podUID="ee11152f-267c-4a04-bd4b-84eec0eff00e" pod="openshift-marketplace/redhat-operators-gv68n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gv68n\": dial tcp 38.129.56.57:6443: connect: connection refused" Nov 29 07:10:21 crc kubenswrapper[4731]: I1129 07:10:21.812016 4731 status_manager.go:851] "Failed to get status for pod" podUID="d246bdda-5a16-4924-a12a-b29095474226" pod="openshift-marketplace/redhat-operators-hv85m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-hv85m\": dial tcp 38.129.56.57:6443: connect: connection refused" Nov 29 07:10:21 crc kubenswrapper[4731]: I1129 07:10:21.812426 4731 status_manager.go:851] "Failed to get status for pod" podUID="7cde9a9c-1d79-4400-8830-69f304229886" pod="openshift-marketplace/certified-operators-stkcw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-stkcw\": dial tcp 38.129.56.57:6443: connect: connection refused" Nov 29 07:10:21 crc kubenswrapper[4731]: I1129 07:10:21.812875 4731 status_manager.go:851] "Failed to get status for pod" podUID="80df65af-cffa-42d9-b609-7e90950979e2" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 
38.129.56.57:6443: connect: connection refused" Nov 29 07:10:21 crc kubenswrapper[4731]: I1129 07:10:21.813298 4731 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.57:6443: connect: connection refused" Nov 29 07:10:21 crc kubenswrapper[4731]: I1129 07:10:21.813794 4731 status_manager.go:851] "Failed to get status for pod" podUID="d246bdda-5a16-4924-a12a-b29095474226" pod="openshift-marketplace/redhat-operators-hv85m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-hv85m\": dial tcp 38.129.56.57:6443: connect: connection refused" Nov 29 07:10:21 crc kubenswrapper[4731]: I1129 07:10:21.814210 4731 status_manager.go:851] "Failed to get status for pod" podUID="7cde9a9c-1d79-4400-8830-69f304229886" pod="openshift-marketplace/certified-operators-stkcw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-stkcw\": dial tcp 38.129.56.57:6443: connect: connection refused" Nov 29 07:10:21 crc kubenswrapper[4731]: I1129 07:10:21.814598 4731 status_manager.go:851] "Failed to get status for pod" podUID="80df65af-cffa-42d9-b609-7e90950979e2" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.57:6443: connect: connection refused" Nov 29 07:10:21 crc kubenswrapper[4731]: I1129 07:10:21.814943 4731 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": 
dial tcp 38.129.56.57:6443: connect: connection refused" Nov 29 07:10:21 crc kubenswrapper[4731]: I1129 07:10:21.815173 4731 status_manager.go:851] "Failed to get status for pod" podUID="ee11152f-267c-4a04-bd4b-84eec0eff00e" pod="openshift-marketplace/redhat-operators-gv68n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gv68n\": dial tcp 38.129.56.57:6443: connect: connection refused" Nov 29 07:10:21 crc kubenswrapper[4731]: I1129 07:10:21.815830 4731 status_manager.go:851] "Failed to get status for pod" podUID="84b54257-ab5f-4f89-8ff2-5f725c4b8662" pod="openshift-marketplace/community-operators-c9bpb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-c9bpb\": dial tcp 38.129.56.57:6443: connect: connection refused" Nov 29 07:10:21 crc kubenswrapper[4731]: I1129 07:10:21.828466 4731 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a5198aca-b43e-4b59-8ec1-15b9d1cd4f4c" Nov 29 07:10:21 crc kubenswrapper[4731]: I1129 07:10:21.828512 4731 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a5198aca-b43e-4b59-8ec1-15b9d1cd4f4c" Nov 29 07:10:21 crc kubenswrapper[4731]: E1129 07:10:21.829237 4731 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.57:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:10:21 crc kubenswrapper[4731]: I1129 07:10:21.830011 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:10:21 crc kubenswrapper[4731]: W1129 07:10:21.867225 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-b45769f43dae1e7fa6342f6b84ac11b2a94d3e25df8d77c3c9a62e53d915c899 WatchSource:0}: Error finding container b45769f43dae1e7fa6342f6b84ac11b2a94d3e25df8d77c3c9a62e53d915c899: Status 404 returned error can't find the container with id b45769f43dae1e7fa6342f6b84ac11b2a94d3e25df8d77c3c9a62e53d915c899 Nov 29 07:10:22 crc kubenswrapper[4731]: I1129 07:10:22.323098 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"b45769f43dae1e7fa6342f6b84ac11b2a94d3e25df8d77c3c9a62e53d915c899"} Nov 29 07:10:22 crc kubenswrapper[4731]: E1129 07:10:22.585671 4731 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.57:6443: connect: connection refused" interval="3.2s" Nov 29 07:10:22 crc kubenswrapper[4731]: I1129 07:10:22.693158 4731 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Nov 29 07:10:22 crc kubenswrapper[4731]: I1129 07:10:22.693300 4731 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 
192.168.126.11:10257: connect: connection refused" Nov 29 07:10:23 crc kubenswrapper[4731]: I1129 07:10:23.332432 4731 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="f12d93aeec187357fd9c33f2a8be312d9b3dd8d85bd5d4765ac18b71b2cee966" exitCode=0 Nov 29 07:10:23 crc kubenswrapper[4731]: I1129 07:10:23.332496 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"f12d93aeec187357fd9c33f2a8be312d9b3dd8d85bd5d4765ac18b71b2cee966"} Nov 29 07:10:23 crc kubenswrapper[4731]: I1129 07:10:23.332895 4731 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a5198aca-b43e-4b59-8ec1-15b9d1cd4f4c" Nov 29 07:10:23 crc kubenswrapper[4731]: I1129 07:10:23.332930 4731 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a5198aca-b43e-4b59-8ec1-15b9d1cd4f4c" Nov 29 07:10:23 crc kubenswrapper[4731]: I1129 07:10:23.333488 4731 status_manager.go:851] "Failed to get status for pod" podUID="d246bdda-5a16-4924-a12a-b29095474226" pod="openshift-marketplace/redhat-operators-hv85m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-hv85m\": dial tcp 38.129.56.57:6443: connect: connection refused" Nov 29 07:10:23 crc kubenswrapper[4731]: E1129 07:10:23.333511 4731 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.57:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:10:23 crc kubenswrapper[4731]: I1129 07:10:23.334170 4731 status_manager.go:851] "Failed to get status for pod" podUID="7cde9a9c-1d79-4400-8830-69f304229886" pod="openshift-marketplace/certified-operators-stkcw" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-stkcw\": dial tcp 38.129.56.57:6443: connect: connection refused" Nov 29 07:10:23 crc kubenswrapper[4731]: I1129 07:10:23.334751 4731 status_manager.go:851] "Failed to get status for pod" podUID="80df65af-cffa-42d9-b609-7e90950979e2" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.57:6443: connect: connection refused" Nov 29 07:10:23 crc kubenswrapper[4731]: I1129 07:10:23.335067 4731 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.57:6443: connect: connection refused" Nov 29 07:10:23 crc kubenswrapper[4731]: I1129 07:10:23.335426 4731 status_manager.go:851] "Failed to get status for pod" podUID="84b54257-ab5f-4f89-8ff2-5f725c4b8662" pod="openshift-marketplace/community-operators-c9bpb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-c9bpb\": dial tcp 38.129.56.57:6443: connect: connection refused" Nov 29 07:10:23 crc kubenswrapper[4731]: I1129 07:10:23.335774 4731 status_manager.go:851] "Failed to get status for pod" podUID="ee11152f-267c-4a04-bd4b-84eec0eff00e" pod="openshift-marketplace/redhat-operators-gv68n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gv68n\": dial tcp 38.129.56.57:6443: connect: connection refused" Nov 29 07:10:23 crc kubenswrapper[4731]: I1129 07:10:23.338706 4731 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Nov 29 07:10:23 crc kubenswrapper[4731]: I1129 07:10:23.338768 4731 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="6101f1cec3dc38ec587fb109d42b09c353cf5ed6d89c9a2b799eb456b821cf58" exitCode=1 Nov 29 07:10:23 crc kubenswrapper[4731]: I1129 07:10:23.338833 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"6101f1cec3dc38ec587fb109d42b09c353cf5ed6d89c9a2b799eb456b821cf58"} Nov 29 07:10:23 crc kubenswrapper[4731]: I1129 07:10:23.339591 4731 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.57:6443: connect: connection refused" Nov 29 07:10:23 crc kubenswrapper[4731]: I1129 07:10:23.339887 4731 status_manager.go:851] "Failed to get status for pod" podUID="7cde9a9c-1d79-4400-8830-69f304229886" pod="openshift-marketplace/certified-operators-stkcw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-stkcw\": dial tcp 38.129.56.57:6443: connect: connection refused" Nov 29 07:10:23 crc kubenswrapper[4731]: I1129 07:10:23.339896 4731 scope.go:117] "RemoveContainer" containerID="6101f1cec3dc38ec587fb109d42b09c353cf5ed6d89c9a2b799eb456b821cf58" Nov 29 07:10:23 crc kubenswrapper[4731]: I1129 07:10:23.340287 4731 status_manager.go:851] "Failed to get status for pod" podUID="80df65af-cffa-42d9-b609-7e90950979e2" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.57:6443: connect: connection refused" Nov 29 07:10:23 crc kubenswrapper[4731]: I1129 07:10:23.340598 4731 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.57:6443: connect: connection refused" Nov 29 07:10:23 crc kubenswrapper[4731]: I1129 07:10:23.340893 4731 status_manager.go:851] "Failed to get status for pod" podUID="84b54257-ab5f-4f89-8ff2-5f725c4b8662" pod="openshift-marketplace/community-operators-c9bpb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-c9bpb\": dial tcp 38.129.56.57:6443: connect: connection refused" Nov 29 07:10:23 crc kubenswrapper[4731]: I1129 07:10:23.341346 4731 status_manager.go:851] "Failed to get status for pod" podUID="ee11152f-267c-4a04-bd4b-84eec0eff00e" pod="openshift-marketplace/redhat-operators-gv68n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gv68n\": dial tcp 38.129.56.57:6443: connect: connection refused" Nov 29 07:10:23 crc kubenswrapper[4731]: I1129 07:10:23.341847 4731 status_manager.go:851] "Failed to get status for pod" podUID="d246bdda-5a16-4924-a12a-b29095474226" pod="openshift-marketplace/redhat-operators-hv85m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-hv85m\": dial tcp 38.129.56.57:6443: connect: connection refused" Nov 29 07:10:24 crc kubenswrapper[4731]: E1129 07:10:24.211723 4731 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 
38.129.56.57:6443: connect: connection refused" event="&Event{ObjectMeta:{certified-operators-stkcw.187c68a496e6f848 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:certified-operators-stkcw,UID:7cde9a9c-1d79-4400-8830-69f304229886,APIVersion:v1,ResourceVersion:28371,FieldPath:spec.containers{registry-server},},Reason:Killing,Message:Stopping container registry-server,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-29 07:10:10.138036296 +0000 UTC m=+249.028397399,LastTimestamp:2025-11-29 07:10:10.138036296 +0000 UTC m=+249.028397399,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 29 07:10:24 crc kubenswrapper[4731]: I1129 07:10:24.349358 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"134914b5c4297a7226c01ddd03d5f9bff779e43c31b0b0637d7b600cb58103e7"} Nov 29 07:10:24 crc kubenswrapper[4731]: I1129 07:10:24.352625 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Nov 29 07:10:24 crc kubenswrapper[4731]: I1129 07:10:24.352700 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"74dea9a83a7412060bffc5f970458e9ff9e3d4f409e15fdce2e159da38e5ceb8"} Nov 29 07:10:25 crc kubenswrapper[4731]: I1129 07:10:25.365362 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"d20ce379edd69865261f02ed53475a3bb82f6a27ddc35db3b0720bec29fa319b"} Nov 29 07:10:25 crc kubenswrapper[4731]: I1129 07:10:25.365435 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"85f6b47af9603422b91757fe31f916d65e1368b38a6edb547df9303ff8fd2d5e"} Nov 29 07:10:25 crc kubenswrapper[4731]: I1129 07:10:25.365451 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"d8fef183af3966d2f5a2118f97a3fc724bc29892fd551f6251b32e3adcba4783"} Nov 29 07:10:25 crc kubenswrapper[4731]: I1129 07:10:25.365465 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"1a7721a7ef6a1f46659ef0c553451875dffc5c833e9a9d7640ecafe10ce67efe"} Nov 29 07:10:25 crc kubenswrapper[4731]: I1129 07:10:25.365616 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:10:25 crc kubenswrapper[4731]: I1129 07:10:25.365818 4731 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a5198aca-b43e-4b59-8ec1-15b9d1cd4f4c" Nov 29 07:10:25 crc kubenswrapper[4731]: I1129 07:10:25.365855 4731 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a5198aca-b43e-4b59-8ec1-15b9d1cd4f4c" Nov 29 07:10:26 crc kubenswrapper[4731]: I1129 07:10:26.831233 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:10:26 crc kubenswrapper[4731]: I1129 07:10:26.831683 4731 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:10:26 crc kubenswrapper[4731]: I1129 07:10:26.839037 4731 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Nov 29 07:10:26 crc kubenswrapper[4731]: [+]log ok Nov 29 07:10:26 crc kubenswrapper[4731]: [+]etcd ok Nov 29 07:10:26 crc kubenswrapper[4731]: [+]poststarthook/openshift.io-startkubeinformers ok Nov 29 07:10:26 crc kubenswrapper[4731]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Nov 29 07:10:26 crc kubenswrapper[4731]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Nov 29 07:10:26 crc kubenswrapper[4731]: [+]poststarthook/start-apiserver-admission-initializer ok Nov 29 07:10:26 crc kubenswrapper[4731]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Nov 29 07:10:26 crc kubenswrapper[4731]: [+]poststarthook/openshift.io-api-request-count-filter ok Nov 29 07:10:26 crc kubenswrapper[4731]: [+]poststarthook/generic-apiserver-start-informers ok Nov 29 07:10:26 crc kubenswrapper[4731]: [+]poststarthook/priority-and-fairness-config-consumer ok Nov 29 07:10:26 crc kubenswrapper[4731]: [+]poststarthook/priority-and-fairness-filter ok Nov 29 07:10:26 crc kubenswrapper[4731]: [+]poststarthook/storage-object-count-tracker-hook ok Nov 29 07:10:26 crc kubenswrapper[4731]: [+]poststarthook/start-apiextensions-informers ok Nov 29 07:10:26 crc kubenswrapper[4731]: [+]poststarthook/start-apiextensions-controllers ok Nov 29 07:10:26 crc kubenswrapper[4731]: [+]poststarthook/crd-informer-synced ok Nov 29 07:10:26 crc kubenswrapper[4731]: [+]poststarthook/start-system-namespaces-controller ok Nov 29 07:10:26 crc kubenswrapper[4731]: [+]poststarthook/start-cluster-authentication-info-controller ok Nov 29 07:10:26 crc kubenswrapper[4731]: 
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok Nov 29 07:10:26 crc kubenswrapper[4731]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Nov 29 07:10:26 crc kubenswrapper[4731]: [+]poststarthook/start-legacy-token-tracking-controller ok Nov 29 07:10:26 crc kubenswrapper[4731]: [+]poststarthook/start-service-ip-repair-controllers ok Nov 29 07:10:26 crc kubenswrapper[4731]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Nov 29 07:10:26 crc kubenswrapper[4731]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Nov 29 07:10:26 crc kubenswrapper[4731]: [+]poststarthook/priority-and-fairness-config-producer ok Nov 29 07:10:26 crc kubenswrapper[4731]: [+]poststarthook/bootstrap-controller ok Nov 29 07:10:26 crc kubenswrapper[4731]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Nov 29 07:10:26 crc kubenswrapper[4731]: [+]poststarthook/start-kube-aggregator-informers ok Nov 29 07:10:26 crc kubenswrapper[4731]: [+]poststarthook/apiservice-status-local-available-controller ok Nov 29 07:10:26 crc kubenswrapper[4731]: [+]poststarthook/apiservice-status-remote-available-controller ok Nov 29 07:10:26 crc kubenswrapper[4731]: [+]poststarthook/apiservice-registration-controller ok Nov 29 07:10:26 crc kubenswrapper[4731]: [+]poststarthook/apiservice-wait-for-first-sync ok Nov 29 07:10:26 crc kubenswrapper[4731]: [+]poststarthook/apiservice-discovery-controller ok Nov 29 07:10:26 crc kubenswrapper[4731]: [+]poststarthook/kube-apiserver-autoregistration ok Nov 29 07:10:26 crc kubenswrapper[4731]: [+]autoregister-completion ok Nov 29 07:10:26 crc kubenswrapper[4731]: [+]poststarthook/apiservice-openapi-controller ok Nov 29 07:10:26 crc kubenswrapper[4731]: [+]poststarthook/apiservice-openapiv3-controller ok Nov 29 07:10:26 crc kubenswrapper[4731]: livez check failed Nov 29 07:10:26 crc kubenswrapper[4731]: I1129 07:10:26.839121 4731 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 29 07:10:28 crc kubenswrapper[4731]: I1129 07:10:28.335601 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 29 07:10:28 crc kubenswrapper[4731]: I1129 07:10:28.336064 4731 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Nov 29 07:10:28 crc kubenswrapper[4731]: I1129 07:10:28.336119 4731 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Nov 29 07:10:30 crc kubenswrapper[4731]: I1129 07:10:30.643879 4731 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:10:31 crc kubenswrapper[4731]: I1129 07:10:31.403508 4731 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a5198aca-b43e-4b59-8ec1-15b9d1cd4f4c" Nov 29 07:10:31 crc kubenswrapper[4731]: I1129 07:10:31.404114 4731 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a5198aca-b43e-4b59-8ec1-15b9d1cd4f4c" Nov 29 07:10:31 crc kubenswrapper[4731]: I1129 07:10:31.704711 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 29 07:10:31 crc 
kubenswrapper[4731]: I1129 07:10:31.835639 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:10:31 crc kubenswrapper[4731]: I1129 07:10:31.839920 4731 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="6a70fde8-433c-4bef-975f-0227671b0a76" Nov 29 07:10:32 crc kubenswrapper[4731]: I1129 07:10:32.408779 4731 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a5198aca-b43e-4b59-8ec1-15b9d1cd4f4c" Nov 29 07:10:32 crc kubenswrapper[4731]: I1129 07:10:32.408818 4731 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a5198aca-b43e-4b59-8ec1-15b9d1cd4f4c" Nov 29 07:10:32 crc kubenswrapper[4731]: I1129 07:10:32.414103 4731 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="6a70fde8-433c-4bef-975f-0227671b0a76" Nov 29 07:10:32 crc kubenswrapper[4731]: I1129 07:10:32.829296 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-qg27s" podUID="d639491c-0fbd-44a6-b273-37dcc1e5681d" containerName="oauth-openshift" containerID="cri-o://0ca7fae142a04892114f7bdf9ffb8a35c1f6f8ccb4f5a3fc3a570fa28b2b25c6" gracePeriod=15 Nov 29 07:10:33 crc kubenswrapper[4731]: I1129 07:10:33.244783 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-qg27s" Nov 29 07:10:33 crc kubenswrapper[4731]: I1129 07:10:33.384671 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d639491c-0fbd-44a6-b273-37dcc1e5681d-v4-0-config-system-router-certs\") pod \"d639491c-0fbd-44a6-b273-37dcc1e5681d\" (UID: \"d639491c-0fbd-44a6-b273-37dcc1e5681d\") " Nov 29 07:10:33 crc kubenswrapper[4731]: I1129 07:10:33.384748 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d639491c-0fbd-44a6-b273-37dcc1e5681d-audit-policies\") pod \"d639491c-0fbd-44a6-b273-37dcc1e5681d\" (UID: \"d639491c-0fbd-44a6-b273-37dcc1e5681d\") " Nov 29 07:10:33 crc kubenswrapper[4731]: I1129 07:10:33.384784 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6x8wg\" (UniqueName: \"kubernetes.io/projected/d639491c-0fbd-44a6-b273-37dcc1e5681d-kube-api-access-6x8wg\") pod \"d639491c-0fbd-44a6-b273-37dcc1e5681d\" (UID: \"d639491c-0fbd-44a6-b273-37dcc1e5681d\") " Nov 29 07:10:33 crc kubenswrapper[4731]: I1129 07:10:33.384941 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d639491c-0fbd-44a6-b273-37dcc1e5681d-v4-0-config-user-template-provider-selection\") pod \"d639491c-0fbd-44a6-b273-37dcc1e5681d\" (UID: \"d639491c-0fbd-44a6-b273-37dcc1e5681d\") " Nov 29 07:10:33 crc kubenswrapper[4731]: I1129 07:10:33.385061 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d639491c-0fbd-44a6-b273-37dcc1e5681d-audit-dir\") pod \"d639491c-0fbd-44a6-b273-37dcc1e5681d\" (UID: \"d639491c-0fbd-44a6-b273-37dcc1e5681d\") " Nov 29 07:10:33 crc kubenswrapper[4731]: I1129 
07:10:33.385122 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d639491c-0fbd-44a6-b273-37dcc1e5681d-v4-0-config-user-idp-0-file-data\") pod \"d639491c-0fbd-44a6-b273-37dcc1e5681d\" (UID: \"d639491c-0fbd-44a6-b273-37dcc1e5681d\") " Nov 29 07:10:33 crc kubenswrapper[4731]: I1129 07:10:33.385176 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d639491c-0fbd-44a6-b273-37dcc1e5681d-v4-0-config-system-ocp-branding-template\") pod \"d639491c-0fbd-44a6-b273-37dcc1e5681d\" (UID: \"d639491c-0fbd-44a6-b273-37dcc1e5681d\") " Nov 29 07:10:33 crc kubenswrapper[4731]: I1129 07:10:33.385187 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d639491c-0fbd-44a6-b273-37dcc1e5681d-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "d639491c-0fbd-44a6-b273-37dcc1e5681d" (UID: "d639491c-0fbd-44a6-b273-37dcc1e5681d"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:10:33 crc kubenswrapper[4731]: I1129 07:10:33.385245 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d639491c-0fbd-44a6-b273-37dcc1e5681d-v4-0-config-system-service-ca\") pod \"d639491c-0fbd-44a6-b273-37dcc1e5681d\" (UID: \"d639491c-0fbd-44a6-b273-37dcc1e5681d\") " Nov 29 07:10:33 crc kubenswrapper[4731]: I1129 07:10:33.385284 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d639491c-0fbd-44a6-b273-37dcc1e5681d-v4-0-config-user-template-error\") pod \"d639491c-0fbd-44a6-b273-37dcc1e5681d\" (UID: \"d639491c-0fbd-44a6-b273-37dcc1e5681d\") " Nov 29 07:10:33 crc kubenswrapper[4731]: I1129 07:10:33.385307 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d639491c-0fbd-44a6-b273-37dcc1e5681d-v4-0-config-system-session\") pod \"d639491c-0fbd-44a6-b273-37dcc1e5681d\" (UID: \"d639491c-0fbd-44a6-b273-37dcc1e5681d\") " Nov 29 07:10:33 crc kubenswrapper[4731]: I1129 07:10:33.385331 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d639491c-0fbd-44a6-b273-37dcc1e5681d-v4-0-config-system-trusted-ca-bundle\") pod \"d639491c-0fbd-44a6-b273-37dcc1e5681d\" (UID: \"d639491c-0fbd-44a6-b273-37dcc1e5681d\") " Nov 29 07:10:33 crc kubenswrapper[4731]: I1129 07:10:33.385360 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d639491c-0fbd-44a6-b273-37dcc1e5681d-v4-0-config-system-serving-cert\") pod \"d639491c-0fbd-44a6-b273-37dcc1e5681d\" (UID: \"d639491c-0fbd-44a6-b273-37dcc1e5681d\") " Nov 29 07:10:33 
crc kubenswrapper[4731]: I1129 07:10:33.385418 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d639491c-0fbd-44a6-b273-37dcc1e5681d-v4-0-config-system-cliconfig\") pod \"d639491c-0fbd-44a6-b273-37dcc1e5681d\" (UID: \"d639491c-0fbd-44a6-b273-37dcc1e5681d\") " Nov 29 07:10:33 crc kubenswrapper[4731]: I1129 07:10:33.385456 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d639491c-0fbd-44a6-b273-37dcc1e5681d-v4-0-config-user-template-login\") pod \"d639491c-0fbd-44a6-b273-37dcc1e5681d\" (UID: \"d639491c-0fbd-44a6-b273-37dcc1e5681d\") " Nov 29 07:10:33 crc kubenswrapper[4731]: I1129 07:10:33.385963 4731 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d639491c-0fbd-44a6-b273-37dcc1e5681d-audit-dir\") on node \"crc\" DevicePath \"\"" Nov 29 07:10:33 crc kubenswrapper[4731]: I1129 07:10:33.386804 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d639491c-0fbd-44a6-b273-37dcc1e5681d-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "d639491c-0fbd-44a6-b273-37dcc1e5681d" (UID: "d639491c-0fbd-44a6-b273-37dcc1e5681d"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:10:33 crc kubenswrapper[4731]: I1129 07:10:33.386825 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d639491c-0fbd-44a6-b273-37dcc1e5681d-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "d639491c-0fbd-44a6-b273-37dcc1e5681d" (UID: "d639491c-0fbd-44a6-b273-37dcc1e5681d"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:10:33 crc kubenswrapper[4731]: I1129 07:10:33.386800 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d639491c-0fbd-44a6-b273-37dcc1e5681d-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "d639491c-0fbd-44a6-b273-37dcc1e5681d" (UID: "d639491c-0fbd-44a6-b273-37dcc1e5681d"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:10:33 crc kubenswrapper[4731]: I1129 07:10:33.387283 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d639491c-0fbd-44a6-b273-37dcc1e5681d-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "d639491c-0fbd-44a6-b273-37dcc1e5681d" (UID: "d639491c-0fbd-44a6-b273-37dcc1e5681d"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:10:33 crc kubenswrapper[4731]: I1129 07:10:33.394275 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d639491c-0fbd-44a6-b273-37dcc1e5681d-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "d639491c-0fbd-44a6-b273-37dcc1e5681d" (UID: "d639491c-0fbd-44a6-b273-37dcc1e5681d"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:10:33 crc kubenswrapper[4731]: I1129 07:10:33.394502 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d639491c-0fbd-44a6-b273-37dcc1e5681d-kube-api-access-6x8wg" (OuterVolumeSpecName: "kube-api-access-6x8wg") pod "d639491c-0fbd-44a6-b273-37dcc1e5681d" (UID: "d639491c-0fbd-44a6-b273-37dcc1e5681d"). InnerVolumeSpecName "kube-api-access-6x8wg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:10:33 crc kubenswrapper[4731]: I1129 07:10:33.394847 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d639491c-0fbd-44a6-b273-37dcc1e5681d-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "d639491c-0fbd-44a6-b273-37dcc1e5681d" (UID: "d639491c-0fbd-44a6-b273-37dcc1e5681d"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:10:33 crc kubenswrapper[4731]: I1129 07:10:33.395237 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d639491c-0fbd-44a6-b273-37dcc1e5681d-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "d639491c-0fbd-44a6-b273-37dcc1e5681d" (UID: "d639491c-0fbd-44a6-b273-37dcc1e5681d"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:10:33 crc kubenswrapper[4731]: I1129 07:10:33.395471 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d639491c-0fbd-44a6-b273-37dcc1e5681d-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "d639491c-0fbd-44a6-b273-37dcc1e5681d" (UID: "d639491c-0fbd-44a6-b273-37dcc1e5681d"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:10:33 crc kubenswrapper[4731]: I1129 07:10:33.395909 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d639491c-0fbd-44a6-b273-37dcc1e5681d-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "d639491c-0fbd-44a6-b273-37dcc1e5681d" (UID: "d639491c-0fbd-44a6-b273-37dcc1e5681d"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:10:33 crc kubenswrapper[4731]: I1129 07:10:33.396056 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d639491c-0fbd-44a6-b273-37dcc1e5681d-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "d639491c-0fbd-44a6-b273-37dcc1e5681d" (UID: "d639491c-0fbd-44a6-b273-37dcc1e5681d"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:10:33 crc kubenswrapper[4731]: I1129 07:10:33.396213 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d639491c-0fbd-44a6-b273-37dcc1e5681d-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "d639491c-0fbd-44a6-b273-37dcc1e5681d" (UID: "d639491c-0fbd-44a6-b273-37dcc1e5681d"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:10:33 crc kubenswrapper[4731]: I1129 07:10:33.396738 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d639491c-0fbd-44a6-b273-37dcc1e5681d-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "d639491c-0fbd-44a6-b273-37dcc1e5681d" (UID: "d639491c-0fbd-44a6-b273-37dcc1e5681d"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:10:33 crc kubenswrapper[4731]: I1129 07:10:33.417305 4731 generic.go:334] "Generic (PLEG): container finished" podID="d639491c-0fbd-44a6-b273-37dcc1e5681d" containerID="0ca7fae142a04892114f7bdf9ffb8a35c1f6f8ccb4f5a3fc3a570fa28b2b25c6" exitCode=0 Nov 29 07:10:33 crc kubenswrapper[4731]: I1129 07:10:33.417386 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-qg27s" event={"ID":"d639491c-0fbd-44a6-b273-37dcc1e5681d","Type":"ContainerDied","Data":"0ca7fae142a04892114f7bdf9ffb8a35c1f6f8ccb4f5a3fc3a570fa28b2b25c6"} Nov 29 07:10:33 crc kubenswrapper[4731]: I1129 07:10:33.417401 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-qg27s" Nov 29 07:10:33 crc kubenswrapper[4731]: I1129 07:10:33.417441 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-qg27s" event={"ID":"d639491c-0fbd-44a6-b273-37dcc1e5681d","Type":"ContainerDied","Data":"7201e6afc963536bcfa89bae37c7d3b2c0f4c5fe77a042b6470f16bf62fc67d1"} Nov 29 07:10:33 crc kubenswrapper[4731]: I1129 07:10:33.417469 4731 scope.go:117] "RemoveContainer" containerID="0ca7fae142a04892114f7bdf9ffb8a35c1f6f8ccb4f5a3fc3a570fa28b2b25c6" Nov 29 07:10:33 crc kubenswrapper[4731]: I1129 07:10:33.452435 4731 scope.go:117] "RemoveContainer" containerID="0ca7fae142a04892114f7bdf9ffb8a35c1f6f8ccb4f5a3fc3a570fa28b2b25c6" Nov 29 07:10:33 crc kubenswrapper[4731]: E1129 07:10:33.452955 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ca7fae142a04892114f7bdf9ffb8a35c1f6f8ccb4f5a3fc3a570fa28b2b25c6\": container with ID starting with 0ca7fae142a04892114f7bdf9ffb8a35c1f6f8ccb4f5a3fc3a570fa28b2b25c6 not found: ID does not exist" containerID="0ca7fae142a04892114f7bdf9ffb8a35c1f6f8ccb4f5a3fc3a570fa28b2b25c6" 
Nov 29 07:10:33 crc kubenswrapper[4731]: I1129 07:10:33.453015 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ca7fae142a04892114f7bdf9ffb8a35c1f6f8ccb4f5a3fc3a570fa28b2b25c6"} err="failed to get container status \"0ca7fae142a04892114f7bdf9ffb8a35c1f6f8ccb4f5a3fc3a570fa28b2b25c6\": rpc error: code = NotFound desc = could not find container \"0ca7fae142a04892114f7bdf9ffb8a35c1f6f8ccb4f5a3fc3a570fa28b2b25c6\": container with ID starting with 0ca7fae142a04892114f7bdf9ffb8a35c1f6f8ccb4f5a3fc3a570fa28b2b25c6 not found: ID does not exist" Nov 29 07:10:33 crc kubenswrapper[4731]: I1129 07:10:33.487590 4731 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d639491c-0fbd-44a6-b273-37dcc1e5681d-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Nov 29 07:10:33 crc kubenswrapper[4731]: I1129 07:10:33.487648 4731 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d639491c-0fbd-44a6-b273-37dcc1e5681d-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:10:33 crc kubenswrapper[4731]: I1129 07:10:33.487662 4731 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d639491c-0fbd-44a6-b273-37dcc1e5681d-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:10:33 crc kubenswrapper[4731]: I1129 07:10:33.487679 4731 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d639491c-0fbd-44a6-b273-37dcc1e5681d-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Nov 29 07:10:33 crc kubenswrapper[4731]: I1129 07:10:33.487689 4731 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/d639491c-0fbd-44a6-b273-37dcc1e5681d-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Nov 29 07:10:33 crc kubenswrapper[4731]: I1129 07:10:33.487699 4731 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d639491c-0fbd-44a6-b273-37dcc1e5681d-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Nov 29 07:10:33 crc kubenswrapper[4731]: I1129 07:10:33.487711 4731 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d639491c-0fbd-44a6-b273-37dcc1e5681d-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 29 07:10:33 crc kubenswrapper[4731]: I1129 07:10:33.487721 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6x8wg\" (UniqueName: \"kubernetes.io/projected/d639491c-0fbd-44a6-b273-37dcc1e5681d-kube-api-access-6x8wg\") on node \"crc\" DevicePath \"\"" Nov 29 07:10:33 crc kubenswrapper[4731]: I1129 07:10:33.487732 4731 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d639491c-0fbd-44a6-b273-37dcc1e5681d-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Nov 29 07:10:33 crc kubenswrapper[4731]: I1129 07:10:33.487741 4731 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d639491c-0fbd-44a6-b273-37dcc1e5681d-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:10:33 crc kubenswrapper[4731]: I1129 07:10:33.487750 4731 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d639491c-0fbd-44a6-b273-37dcc1e5681d-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Nov 29 07:10:33 crc kubenswrapper[4731]: I1129 07:10:33.487759 4731 
reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d639491c-0fbd-44a6-b273-37dcc1e5681d-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Nov 29 07:10:33 crc kubenswrapper[4731]: I1129 07:10:33.487768 4731 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d639491c-0fbd-44a6-b273-37dcc1e5681d-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Nov 29 07:10:38 crc kubenswrapper[4731]: I1129 07:10:38.335509 4731 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Nov 29 07:10:38 crc kubenswrapper[4731]: I1129 07:10:38.337651 4731 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Nov 29 07:10:40 crc kubenswrapper[4731]: I1129 07:10:40.649959 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Nov 29 07:10:40 crc kubenswrapper[4731]: I1129 07:10:40.917034 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Nov 29 07:10:41 crc kubenswrapper[4731]: I1129 07:10:41.417720 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Nov 29 07:10:41 crc kubenswrapper[4731]: I1129 07:10:41.540669 4731 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Nov 29 07:10:41 crc kubenswrapper[4731]: I1129 07:10:41.780623 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Nov 29 07:10:42 crc kubenswrapper[4731]: I1129 07:10:42.220348 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Nov 29 07:10:42 crc kubenswrapper[4731]: I1129 07:10:42.330427 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Nov 29 07:10:42 crc kubenswrapper[4731]: I1129 07:10:42.370063 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Nov 29 07:10:42 crc kubenswrapper[4731]: I1129 07:10:42.379282 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Nov 29 07:10:43 crc kubenswrapper[4731]: I1129 07:10:43.086895 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Nov 29 07:10:43 crc kubenswrapper[4731]: I1129 07:10:43.088947 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 29 07:10:43 crc kubenswrapper[4731]: I1129 07:10:43.224467 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Nov 29 07:10:43 crc kubenswrapper[4731]: I1129 07:10:43.455109 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Nov 29 07:10:43 crc kubenswrapper[4731]: I1129 07:10:43.478394 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Nov 29 07:10:43 crc 
kubenswrapper[4731]: I1129 07:10:43.516255 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Nov 29 07:10:43 crc kubenswrapper[4731]: I1129 07:10:43.540600 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Nov 29 07:10:43 crc kubenswrapper[4731]: I1129 07:10:43.590737 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 29 07:10:43 crc kubenswrapper[4731]: I1129 07:10:43.838600 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Nov 29 07:10:43 crc kubenswrapper[4731]: I1129 07:10:43.876217 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Nov 29 07:10:43 crc kubenswrapper[4731]: I1129 07:10:43.926222 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Nov 29 07:10:44 crc kubenswrapper[4731]: I1129 07:10:44.021590 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Nov 29 07:10:44 crc kubenswrapper[4731]: I1129 07:10:44.159358 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Nov 29 07:10:44 crc kubenswrapper[4731]: I1129 07:10:44.193839 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Nov 29 07:10:44 crc kubenswrapper[4731]: I1129 07:10:44.218304 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Nov 29 07:10:44 crc kubenswrapper[4731]: I1129 07:10:44.333447 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Nov 29 07:10:44 
crc kubenswrapper[4731]: I1129 07:10:44.360049 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Nov 29 07:10:44 crc kubenswrapper[4731]: I1129 07:10:44.413421 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Nov 29 07:10:44 crc kubenswrapper[4731]: I1129 07:10:44.515223 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Nov 29 07:10:44 crc kubenswrapper[4731]: I1129 07:10:44.536539 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 29 07:10:44 crc kubenswrapper[4731]: I1129 07:10:44.713813 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Nov 29 07:10:44 crc kubenswrapper[4731]: I1129 07:10:44.863815 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Nov 29 07:10:44 crc kubenswrapper[4731]: I1129 07:10:44.898321 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Nov 29 07:10:44 crc kubenswrapper[4731]: I1129 07:10:44.935189 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Nov 29 07:10:44 crc kubenswrapper[4731]: I1129 07:10:44.941968 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Nov 29 07:10:44 crc kubenswrapper[4731]: I1129 07:10:44.954409 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Nov 29 07:10:44 crc kubenswrapper[4731]: I1129 07:10:44.976058 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Nov 29 07:10:44 crc kubenswrapper[4731]: 
I1129 07:10:44.996674 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Nov 29 07:10:45 crc kubenswrapper[4731]: I1129 07:10:45.098141 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Nov 29 07:10:45 crc kubenswrapper[4731]: I1129 07:10:45.112904 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Nov 29 07:10:45 crc kubenswrapper[4731]: I1129 07:10:45.138844 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Nov 29 07:10:45 crc kubenswrapper[4731]: I1129 07:10:45.189744 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Nov 29 07:10:45 crc kubenswrapper[4731]: I1129 07:10:45.191715 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Nov 29 07:10:45 crc kubenswrapper[4731]: I1129 07:10:45.197994 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Nov 29 07:10:45 crc kubenswrapper[4731]: I1129 07:10:45.334352 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 29 07:10:45 crc kubenswrapper[4731]: I1129 07:10:45.486652 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Nov 29 07:10:45 crc kubenswrapper[4731]: I1129 07:10:45.714283 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Nov 29 07:10:45 crc kubenswrapper[4731]: I1129 07:10:45.722763 4731 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Nov 29 07:10:45 crc kubenswrapper[4731]: I1129 07:10:45.800878 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Nov 29 07:10:45 crc kubenswrapper[4731]: I1129 07:10:45.870787 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Nov 29 07:10:45 crc kubenswrapper[4731]: I1129 07:10:45.943323 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Nov 29 07:10:45 crc kubenswrapper[4731]: I1129 07:10:45.957621 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Nov 29 07:10:45 crc kubenswrapper[4731]: I1129 07:10:45.977374 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Nov 29 07:10:45 crc kubenswrapper[4731]: I1129 07:10:45.987590 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Nov 29 07:10:46 crc kubenswrapper[4731]: I1129 07:10:46.112599 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Nov 29 07:10:46 crc kubenswrapper[4731]: I1129 07:10:46.146903 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Nov 29 07:10:46 crc kubenswrapper[4731]: I1129 07:10:46.161487 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Nov 29 07:10:46 crc kubenswrapper[4731]: I1129 07:10:46.191321 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Nov 29 07:10:46 crc kubenswrapper[4731]: 
I1129 07:10:46.284292 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Nov 29 07:10:46 crc kubenswrapper[4731]: I1129 07:10:46.381423 4731 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Nov 29 07:10:46 crc kubenswrapper[4731]: I1129 07:10:46.417317 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Nov 29 07:10:46 crc kubenswrapper[4731]: I1129 07:10:46.418074 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Nov 29 07:10:46 crc kubenswrapper[4731]: I1129 07:10:46.446557 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Nov 29 07:10:46 crc kubenswrapper[4731]: I1129 07:10:46.527987 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Nov 29 07:10:46 crc kubenswrapper[4731]: I1129 07:10:46.578017 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Nov 29 07:10:46 crc kubenswrapper[4731]: I1129 07:10:46.578644 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Nov 29 07:10:46 crc kubenswrapper[4731]: I1129 07:10:46.619901 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Nov 29 07:10:46 crc kubenswrapper[4731]: I1129 07:10:46.621303 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Nov 29 07:10:46 crc kubenswrapper[4731]: I1129 07:10:46.643532 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" 
Nov 29 07:10:46 crc kubenswrapper[4731]: I1129 07:10:46.730045 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 29 07:10:46 crc kubenswrapper[4731]: I1129 07:10:46.796275 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 29 07:10:46 crc kubenswrapper[4731]: I1129 07:10:46.840969 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Nov 29 07:10:46 crc kubenswrapper[4731]: I1129 07:10:46.882239 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Nov 29 07:10:46 crc kubenswrapper[4731]: I1129 07:10:46.902024 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Nov 29 07:10:46 crc kubenswrapper[4731]: I1129 07:10:46.911121 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Nov 29 07:10:46 crc kubenswrapper[4731]: I1129 07:10:46.913136 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Nov 29 07:10:46 crc kubenswrapper[4731]: I1129 07:10:46.956674 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Nov 29 07:10:46 crc kubenswrapper[4731]: I1129 07:10:46.957544 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Nov 29 07:10:46 crc kubenswrapper[4731]: I1129 07:10:46.973369 4731 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Nov 29 07:10:47 crc kubenswrapper[4731]: I1129 07:10:47.096947 4731 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Nov 29 07:10:47 crc kubenswrapper[4731]: I1129 07:10:47.111346 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Nov 29 07:10:47 crc kubenswrapper[4731]: I1129 07:10:47.150934 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Nov 29 07:10:47 crc kubenswrapper[4731]: I1129 07:10:47.152855 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Nov 29 07:10:47 crc kubenswrapper[4731]: I1129 07:10:47.194285 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Nov 29 07:10:47 crc kubenswrapper[4731]: I1129 07:10:47.280014 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Nov 29 07:10:47 crc kubenswrapper[4731]: I1129 07:10:47.321342 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Nov 29 07:10:47 crc kubenswrapper[4731]: I1129 07:10:47.458618 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Nov 29 07:10:47 crc kubenswrapper[4731]: I1129 07:10:47.620647 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Nov 29 07:10:47 crc kubenswrapper[4731]: I1129 07:10:47.653987 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Nov 29 07:10:47 crc kubenswrapper[4731]: I1129 07:10:47.658346 4731 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication-operator"/"trusted-ca-bundle" Nov 29 07:10:47 crc kubenswrapper[4731]: I1129 07:10:47.707711 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Nov 29 07:10:47 crc kubenswrapper[4731]: I1129 07:10:47.748896 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Nov 29 07:10:47 crc kubenswrapper[4731]: I1129 07:10:47.959284 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Nov 29 07:10:47 crc kubenswrapper[4731]: I1129 07:10:47.962236 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Nov 29 07:10:47 crc kubenswrapper[4731]: I1129 07:10:47.978327 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Nov 29 07:10:48 crc kubenswrapper[4731]: I1129 07:10:48.034097 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Nov 29 07:10:48 crc kubenswrapper[4731]: I1129 07:10:48.065239 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Nov 29 07:10:48 crc kubenswrapper[4731]: I1129 07:10:48.132228 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Nov 29 07:10:48 crc kubenswrapper[4731]: I1129 07:10:48.160695 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Nov 29 07:10:48 crc kubenswrapper[4731]: I1129 07:10:48.160712 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Nov 29 07:10:48 crc kubenswrapper[4731]: I1129 07:10:48.183547 4731 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Nov 29 07:10:48 crc kubenswrapper[4731]: I1129 07:10:48.242999 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Nov 29 07:10:48 crc kubenswrapper[4731]: I1129 07:10:48.327388 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Nov 29 07:10:48 crc kubenswrapper[4731]: I1129 07:10:48.336175 4731 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Nov 29 07:10:48 crc kubenswrapper[4731]: I1129 07:10:48.336274 4731 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Nov 29 07:10:48 crc kubenswrapper[4731]: I1129 07:10:48.336361 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 29 07:10:48 crc kubenswrapper[4731]: I1129 07:10:48.337259 4731 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"74dea9a83a7412060bffc5f970458e9ff9e3d4f409e15fdce2e159da38e5ceb8"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Nov 29 07:10:48 crc kubenswrapper[4731]: I1129 07:10:48.337402 4731 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" containerID="cri-o://74dea9a83a7412060bffc5f970458e9ff9e3d4f409e15fdce2e159da38e5ceb8" gracePeriod=30 Nov 29 07:10:48 crc kubenswrapper[4731]: I1129 07:10:48.367875 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Nov 29 07:10:48 crc kubenswrapper[4731]: I1129 07:10:48.384201 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Nov 29 07:10:48 crc kubenswrapper[4731]: I1129 07:10:48.398148 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Nov 29 07:10:48 crc kubenswrapper[4731]: I1129 07:10:48.419694 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Nov 29 07:10:48 crc kubenswrapper[4731]: I1129 07:10:48.480739 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Nov 29 07:10:48 crc kubenswrapper[4731]: I1129 07:10:48.544531 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Nov 29 07:10:48 crc kubenswrapper[4731]: I1129 07:10:48.570372 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Nov 29 07:10:48 crc kubenswrapper[4731]: I1129 07:10:48.661128 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Nov 29 07:10:48 crc kubenswrapper[4731]: I1129 07:10:48.778470 4731 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-console-operator"/"serving-cert" Nov 29 07:10:48 crc kubenswrapper[4731]: I1129 07:10:48.800452 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Nov 29 07:10:48 crc kubenswrapper[4731]: I1129 07:10:48.865634 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Nov 29 07:10:48 crc kubenswrapper[4731]: I1129 07:10:48.881616 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Nov 29 07:10:48 crc kubenswrapper[4731]: I1129 07:10:48.958975 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Nov 29 07:10:49 crc kubenswrapper[4731]: I1129 07:10:49.009565 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Nov 29 07:10:49 crc kubenswrapper[4731]: I1129 07:10:49.079713 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Nov 29 07:10:49 crc kubenswrapper[4731]: I1129 07:10:49.104197 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Nov 29 07:10:49 crc kubenswrapper[4731]: I1129 07:10:49.113774 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Nov 29 07:10:49 crc kubenswrapper[4731]: I1129 07:10:49.195925 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Nov 29 07:10:49 crc kubenswrapper[4731]: I1129 07:10:49.366469 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Nov 29 07:10:49 crc kubenswrapper[4731]: I1129 
07:10:49.382598 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Nov 29 07:10:49 crc kubenswrapper[4731]: I1129 07:10:49.448337 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 29 07:10:49 crc kubenswrapper[4731]: I1129 07:10:49.529512 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Nov 29 07:10:49 crc kubenswrapper[4731]: I1129 07:10:49.577918 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Nov 29 07:10:49 crc kubenswrapper[4731]: I1129 07:10:49.643912 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Nov 29 07:10:49 crc kubenswrapper[4731]: I1129 07:10:49.648241 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Nov 29 07:10:49 crc kubenswrapper[4731]: I1129 07:10:49.661321 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Nov 29 07:10:49 crc kubenswrapper[4731]: I1129 07:10:49.785670 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 29 07:10:49 crc kubenswrapper[4731]: I1129 07:10:49.788628 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Nov 29 07:10:49 crc kubenswrapper[4731]: I1129 07:10:49.853958 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 29 07:10:49 crc kubenswrapper[4731]: I1129 07:10:49.864065 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Nov 29 07:10:49 
crc kubenswrapper[4731]: I1129 07:10:49.874202 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Nov 29 07:10:49 crc kubenswrapper[4731]: I1129 07:10:49.984428 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Nov 29 07:10:50 crc kubenswrapper[4731]: I1129 07:10:50.041410 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Nov 29 07:10:50 crc kubenswrapper[4731]: I1129 07:10:50.146285 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Nov 29 07:10:50 crc kubenswrapper[4731]: I1129 07:10:50.259650 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Nov 29 07:10:50 crc kubenswrapper[4731]: I1129 07:10:50.277403 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Nov 29 07:10:50 crc kubenswrapper[4731]: I1129 07:10:50.332478 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Nov 29 07:10:50 crc kubenswrapper[4731]: I1129 07:10:50.351032 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Nov 29 07:10:50 crc kubenswrapper[4731]: I1129 07:10:50.468264 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Nov 29 07:10:50 crc kubenswrapper[4731]: I1129 07:10:50.468813 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Nov 29 07:10:50 crc kubenswrapper[4731]: I1129 07:10:50.846241 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Nov 29 07:10:50 crc kubenswrapper[4731]: I1129 07:10:50.907537 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Nov 29 07:10:50 crc kubenswrapper[4731]: I1129 07:10:50.949779 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Nov 29 07:10:50 crc kubenswrapper[4731]: I1129 07:10:50.950177 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Nov 29 07:10:51 crc kubenswrapper[4731]: I1129 07:10:51.067460 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Nov 29 07:10:51 crc kubenswrapper[4731]: I1129 07:10:51.208116 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Nov 29 07:10:51 crc kubenswrapper[4731]: I1129 07:10:51.272920 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Nov 29 07:10:51 crc kubenswrapper[4731]: I1129 07:10:51.327933 4731 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Nov 29 07:10:51 crc kubenswrapper[4731]: I1129 07:10:51.401639 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Nov 29 07:10:51 crc kubenswrapper[4731]: I1129 07:10:51.437913 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Nov 29 07:10:51 crc kubenswrapper[4731]: I1129 07:10:51.455756 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Nov 29 07:10:51 crc kubenswrapper[4731]: I1129 07:10:51.521578 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Nov 29 07:10:51 crc kubenswrapper[4731]: I1129 07:10:51.561915 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Nov 29 07:10:51 crc kubenswrapper[4731]: I1129 07:10:51.600856 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Nov 29 07:10:51 crc kubenswrapper[4731]: I1129 07:10:51.759001 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Nov 29 07:10:51 crc kubenswrapper[4731]: I1129 07:10:51.777088 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Nov 29 07:10:51 crc kubenswrapper[4731]: I1129 07:10:51.899454 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Nov 29 07:10:51 crc kubenswrapper[4731]: I1129 07:10:51.937523 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Nov 29 07:10:51 crc kubenswrapper[4731]: I1129 07:10:51.972259 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Nov 29 07:10:51 crc kubenswrapper[4731]: I1129 07:10:51.983369 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Nov 29 07:10:52 crc kubenswrapper[4731]: I1129 07:10:52.050307 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Nov 29 07:10:52 crc kubenswrapper[4731]: I1129 07:10:52.112437 4731 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Nov 29 07:10:52 crc kubenswrapper[4731]: I1129 07:10:52.114961 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=42.114940891 podStartE2EDuration="42.114940891s" podCreationTimestamp="2025-11-29 07:10:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:10:30.84024645 +0000 UTC m=+269.730607563" watchObservedRunningTime="2025-11-29 07:10:52.114940891 +0000 UTC m=+291.005301994"
Nov 29 07:10:52 crc kubenswrapper[4731]: I1129 07:10:52.118441 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-qg27s","openshift-kube-apiserver/kube-apiserver-crc","openshift-marketplace/certified-operators-stkcw","openshift-marketplace/community-operators-c9bpb"]
Nov 29 07:10:52 crc kubenswrapper[4731]: I1129 07:10:52.118530 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-5555647bc4-jqrjc","openshift-kube-apiserver/kube-apiserver-crc"]
Nov 29 07:10:52 crc kubenswrapper[4731]: E1129 07:10:52.118853 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d639491c-0fbd-44a6-b273-37dcc1e5681d" containerName="oauth-openshift"
Nov 29 07:10:52 crc kubenswrapper[4731]: I1129 07:10:52.118875 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="d639491c-0fbd-44a6-b273-37dcc1e5681d" containerName="oauth-openshift"
Nov 29 07:10:52 crc kubenswrapper[4731]: E1129 07:10:52.118890 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cde9a9c-1d79-4400-8830-69f304229886" containerName="extract-utilities"
Nov 29 07:10:52 crc kubenswrapper[4731]: I1129 07:10:52.118899 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cde9a9c-1d79-4400-8830-69f304229886" containerName="extract-utilities"
Nov 29 07:10:52 crc kubenswrapper[4731]: E1129 07:10:52.118914 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cde9a9c-1d79-4400-8830-69f304229886" containerName="registry-server"
Nov 29 07:10:52 crc kubenswrapper[4731]: I1129 07:10:52.118922 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cde9a9c-1d79-4400-8830-69f304229886" containerName="registry-server"
Nov 29 07:10:52 crc kubenswrapper[4731]: E1129 07:10:52.118932 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80df65af-cffa-42d9-b609-7e90950979e2" containerName="installer"
Nov 29 07:10:52 crc kubenswrapper[4731]: I1129 07:10:52.118939 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="80df65af-cffa-42d9-b609-7e90950979e2" containerName="installer"
Nov 29 07:10:52 crc kubenswrapper[4731]: E1129 07:10:52.118965 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cde9a9c-1d79-4400-8830-69f304229886" containerName="extract-content"
Nov 29 07:10:52 crc kubenswrapper[4731]: I1129 07:10:52.118972 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cde9a9c-1d79-4400-8830-69f304229886" containerName="extract-content"
Nov 29 07:10:52 crc kubenswrapper[4731]: I1129 07:10:52.119102 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="d639491c-0fbd-44a6-b273-37dcc1e5681d" containerName="oauth-openshift"
Nov 29 07:10:52 crc kubenswrapper[4731]: I1129 07:10:52.119130 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cde9a9c-1d79-4400-8830-69f304229886" containerName="registry-server"
Nov 29 07:10:52 crc kubenswrapper[4731]: I1129 07:10:52.119143 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="80df65af-cffa-42d9-b609-7e90950979e2" containerName="installer"
Nov 29 07:10:52 crc kubenswrapper[4731]: I1129 07:10:52.119355 4731 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a5198aca-b43e-4b59-8ec1-15b9d1cd4f4c"
Nov 29 07:10:52 crc kubenswrapper[4731]: I1129 07:10:52.119398 4731 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a5198aca-b43e-4b59-8ec1-15b9d1cd4f4c"
Nov 29 07:10:52 crc kubenswrapper[4731]: I1129 07:10:52.119626 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-5555647bc4-jqrjc"
Nov 29 07:10:52 crc kubenswrapper[4731]: I1129 07:10:52.123285 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Nov 29 07:10:52 crc kubenswrapper[4731]: I1129 07:10:52.123561 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Nov 29 07:10:52 crc kubenswrapper[4731]: I1129 07:10:52.123950 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Nov 29 07:10:52 crc kubenswrapper[4731]: I1129 07:10:52.124184 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Nov 29 07:10:52 crc kubenswrapper[4731]: I1129 07:10:52.124261 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Nov 29 07:10:52 crc kubenswrapper[4731]: I1129 07:10:52.124633 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Nov 29 07:10:52 crc kubenswrapper[4731]: I1129 07:10:52.124850 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Nov 29 07:10:52 crc kubenswrapper[4731]: I1129 07:10:52.124922 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Nov 29 07:10:52 crc kubenswrapper[4731]: I1129 07:10:52.124939 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Nov 29 07:10:52 crc kubenswrapper[4731]: I1129 07:10:52.124944 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Nov 29 07:10:52 crc kubenswrapper[4731]: I1129 07:10:52.125052 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Nov 29 07:10:52 crc kubenswrapper[4731]: I1129 07:10:52.125482 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Nov 29 07:10:52 crc kubenswrapper[4731]: I1129 07:10:52.125686 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Nov 29 07:10:52 crc kubenswrapper[4731]: I1129 07:10:52.128986 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 29 07:10:52 crc kubenswrapper[4731]: I1129 07:10:52.131400 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 29 07:10:52 crc kubenswrapper[4731]: I1129 07:10:52.140955 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Nov 29 07:10:52 crc kubenswrapper[4731]: I1129 07:10:52.160495 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Nov 29 07:10:52 crc kubenswrapper[4731]: I1129 07:10:52.163121 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Nov 29 07:10:52 crc kubenswrapper[4731]: I1129 07:10:52.174213 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Nov 29 07:10:52 crc kubenswrapper[4731]: I1129 07:10:52.191378 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Nov 29 07:10:52 crc kubenswrapper[4731]: I1129 07:10:52.194235 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=22.194205629 podStartE2EDuration="22.194205629s" podCreationTimestamp="2025-11-29 07:10:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:10:52.185411724 +0000 UTC m=+291.075772847" watchObservedRunningTime="2025-11-29 07:10:52.194205629 +0000 UTC m=+291.084566762"
Nov 29 07:10:52 crc kubenswrapper[4731]: I1129 07:10:52.254225 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Nov 29 07:10:52 crc kubenswrapper[4731]: I1129 07:10:52.268877 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/082d0154-14ee-450b-a771-c873c7552e7d-audit-dir\") pod \"oauth-openshift-5555647bc4-jqrjc\" (UID: \"082d0154-14ee-450b-a771-c873c7552e7d\") " pod="openshift-authentication/oauth-openshift-5555647bc4-jqrjc"
Nov 29 07:10:52 crc kubenswrapper[4731]: I1129 07:10:52.268979 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cq7dq\" (UniqueName: \"kubernetes.io/projected/082d0154-14ee-450b-a771-c873c7552e7d-kube-api-access-cq7dq\") pod \"oauth-openshift-5555647bc4-jqrjc\" (UID: \"082d0154-14ee-450b-a771-c873c7552e7d\") " pod="openshift-authentication/oauth-openshift-5555647bc4-jqrjc"
Nov 29 07:10:52 crc kubenswrapper[4731]: I1129 07:10:52.269033 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/082d0154-14ee-450b-a771-c873c7552e7d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5555647bc4-jqrjc\" (UID: \"082d0154-14ee-450b-a771-c873c7552e7d\") " pod="openshift-authentication/oauth-openshift-5555647bc4-jqrjc"
Nov 29 07:10:52 crc kubenswrapper[4731]: I1129 07:10:52.269060 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/082d0154-14ee-450b-a771-c873c7552e7d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5555647bc4-jqrjc\" (UID: \"082d0154-14ee-450b-a771-c873c7552e7d\") " pod="openshift-authentication/oauth-openshift-5555647bc4-jqrjc"
Nov 29 07:10:52 crc kubenswrapper[4731]: I1129 07:10:52.269103 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/082d0154-14ee-450b-a771-c873c7552e7d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5555647bc4-jqrjc\" (UID: \"082d0154-14ee-450b-a771-c873c7552e7d\") " pod="openshift-authentication/oauth-openshift-5555647bc4-jqrjc"
Nov 29 07:10:52 crc kubenswrapper[4731]: I1129 07:10:52.269129 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/082d0154-14ee-450b-a771-c873c7552e7d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5555647bc4-jqrjc\" (UID: \"082d0154-14ee-450b-a771-c873c7552e7d\") " pod="openshift-authentication/oauth-openshift-5555647bc4-jqrjc"
Nov 29 07:10:52 crc kubenswrapper[4731]: I1129 07:10:52.269159 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/082d0154-14ee-450b-a771-c873c7552e7d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5555647bc4-jqrjc\" (UID: \"082d0154-14ee-450b-a771-c873c7552e7d\") " pod="openshift-authentication/oauth-openshift-5555647bc4-jqrjc"
Nov 29 07:10:52 crc kubenswrapper[4731]: I1129 07:10:52.269197 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/082d0154-14ee-450b-a771-c873c7552e7d-v4-0-config-user-template-login\") pod \"oauth-openshift-5555647bc4-jqrjc\" (UID: \"082d0154-14ee-450b-a771-c873c7552e7d\") " pod="openshift-authentication/oauth-openshift-5555647bc4-jqrjc"
Nov 29 07:10:52 crc kubenswrapper[4731]: I1129 07:10:52.269226 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/082d0154-14ee-450b-a771-c873c7552e7d-audit-policies\") pod \"oauth-openshift-5555647bc4-jqrjc\" (UID: \"082d0154-14ee-450b-a771-c873c7552e7d\") " pod="openshift-authentication/oauth-openshift-5555647bc4-jqrjc"
Nov 29 07:10:52 crc kubenswrapper[4731]: I1129 07:10:52.269259 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/082d0154-14ee-450b-a771-c873c7552e7d-v4-0-config-system-service-ca\") pod \"oauth-openshift-5555647bc4-jqrjc\" (UID: \"082d0154-14ee-450b-a771-c873c7552e7d\") " pod="openshift-authentication/oauth-openshift-5555647bc4-jqrjc"
Nov 29 07:10:52 crc kubenswrapper[4731]: I1129 07:10:52.269295 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/082d0154-14ee-450b-a771-c873c7552e7d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5555647bc4-jqrjc\" (UID: \"082d0154-14ee-450b-a771-c873c7552e7d\") " pod="openshift-authentication/oauth-openshift-5555647bc4-jqrjc"
Nov 29 07:10:52 crc kubenswrapper[4731]: I1129 07:10:52.269324 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/082d0154-14ee-450b-a771-c873c7552e7d-v4-0-config-user-template-error\") pod \"oauth-openshift-5555647bc4-jqrjc\" (UID: \"082d0154-14ee-450b-a771-c873c7552e7d\") " pod="openshift-authentication/oauth-openshift-5555647bc4-jqrjc"
Nov 29 07:10:52 crc kubenswrapper[4731]: I1129 07:10:52.269356 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/082d0154-14ee-450b-a771-c873c7552e7d-v4-0-config-system-session\") pod \"oauth-openshift-5555647bc4-jqrjc\" (UID: \"082d0154-14ee-450b-a771-c873c7552e7d\") " pod="openshift-authentication/oauth-openshift-5555647bc4-jqrjc"
Nov 29 07:10:52 crc kubenswrapper[4731]: I1129 07:10:52.269380 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/082d0154-14ee-450b-a771-c873c7552e7d-v4-0-config-system-router-certs\") pod \"oauth-openshift-5555647bc4-jqrjc\" (UID: \"082d0154-14ee-450b-a771-c873c7552e7d\") " pod="openshift-authentication/oauth-openshift-5555647bc4-jqrjc"
Nov 29 07:10:52 crc kubenswrapper[4731]: I1129 07:10:52.304923 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Nov 29 07:10:52 crc kubenswrapper[4731]: I1129 07:10:52.307351 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Nov 29 07:10:52 crc kubenswrapper[4731]: I1129 07:10:52.370277 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/082d0154-14ee-450b-a771-c873c7552e7d-v4-0-config-user-template-login\") pod \"oauth-openshift-5555647bc4-jqrjc\" (UID: \"082d0154-14ee-450b-a771-c873c7552e7d\") " pod="openshift-authentication/oauth-openshift-5555647bc4-jqrjc"
Nov 29 07:10:52 crc kubenswrapper[4731]: I1129 07:10:52.370345 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/082d0154-14ee-450b-a771-c873c7552e7d-audit-policies\") pod \"oauth-openshift-5555647bc4-jqrjc\" (UID: \"082d0154-14ee-450b-a771-c873c7552e7d\") " pod="openshift-authentication/oauth-openshift-5555647bc4-jqrjc"
Nov 29 07:10:52 crc kubenswrapper[4731]: I1129 07:10:52.370398 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/082d0154-14ee-450b-a771-c873c7552e7d-v4-0-config-system-service-ca\") pod \"oauth-openshift-5555647bc4-jqrjc\" (UID: \"082d0154-14ee-450b-a771-c873c7552e7d\") " pod="openshift-authentication/oauth-openshift-5555647bc4-jqrjc"
Nov 29 07:10:52 crc kubenswrapper[4731]: I1129 07:10:52.370437 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/082d0154-14ee-450b-a771-c873c7552e7d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5555647bc4-jqrjc\" (UID: \"082d0154-14ee-450b-a771-c873c7552e7d\") " pod="openshift-authentication/oauth-openshift-5555647bc4-jqrjc"
Nov 29 07:10:52 crc kubenswrapper[4731]: I1129 07:10:52.370467 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/082d0154-14ee-450b-a771-c873c7552e7d-v4-0-config-user-template-error\") pod \"oauth-openshift-5555647bc4-jqrjc\" (UID: \"082d0154-14ee-450b-a771-c873c7552e7d\") " pod="openshift-authentication/oauth-openshift-5555647bc4-jqrjc"
Nov 29 07:10:52 crc kubenswrapper[4731]: I1129 07:10:52.370501 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/082d0154-14ee-450b-a771-c873c7552e7d-v4-0-config-system-session\") pod \"oauth-openshift-5555647bc4-jqrjc\" (UID: \"082d0154-14ee-450b-a771-c873c7552e7d\") " pod="openshift-authentication/oauth-openshift-5555647bc4-jqrjc"
Nov 29 07:10:52 crc kubenswrapper[4731]: I1129 07:10:52.370530 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/082d0154-14ee-450b-a771-c873c7552e7d-v4-0-config-system-router-certs\") pod \"oauth-openshift-5555647bc4-jqrjc\" (UID: \"082d0154-14ee-450b-a771-c873c7552e7d\") " pod="openshift-authentication/oauth-openshift-5555647bc4-jqrjc"
Nov 29 07:10:52 crc kubenswrapper[4731]: I1129 07:10:52.370554 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/082d0154-14ee-450b-a771-c873c7552e7d-audit-dir\") pod \"oauth-openshift-5555647bc4-jqrjc\" (UID: \"082d0154-14ee-450b-a771-c873c7552e7d\") " pod="openshift-authentication/oauth-openshift-5555647bc4-jqrjc"
Nov 29 07:10:52 crc kubenswrapper[4731]: I1129 07:10:52.370618 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cq7dq\" (UniqueName: \"kubernetes.io/projected/082d0154-14ee-450b-a771-c873c7552e7d-kube-api-access-cq7dq\") pod \"oauth-openshift-5555647bc4-jqrjc\" (UID: \"082d0154-14ee-450b-a771-c873c7552e7d\") " pod="openshift-authentication/oauth-openshift-5555647bc4-jqrjc"
Nov 29 07:10:52 crc kubenswrapper[4731]: I1129 07:10:52.370661 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/082d0154-14ee-450b-a771-c873c7552e7d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5555647bc4-jqrjc\" (UID: \"082d0154-14ee-450b-a771-c873c7552e7d\") " pod="openshift-authentication/oauth-openshift-5555647bc4-jqrjc"
Nov 29 07:10:52 crc kubenswrapper[4731]: I1129 07:10:52.370695 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/082d0154-14ee-450b-a771-c873c7552e7d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5555647bc4-jqrjc\" (UID: \"082d0154-14ee-450b-a771-c873c7552e7d\") " pod="openshift-authentication/oauth-openshift-5555647bc4-jqrjc"
Nov 29 07:10:52 crc kubenswrapper[4731]: I1129 07:10:52.370732 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/082d0154-14ee-450b-a771-c873c7552e7d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5555647bc4-jqrjc\" (UID: \"082d0154-14ee-450b-a771-c873c7552e7d\") " pod="openshift-authentication/oauth-openshift-5555647bc4-jqrjc"
Nov 29 07:10:52 crc kubenswrapper[4731]: I1129 07:10:52.370764 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/082d0154-14ee-450b-a771-c873c7552e7d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5555647bc4-jqrjc\" (UID: \"082d0154-14ee-450b-a771-c873c7552e7d\") " pod="openshift-authentication/oauth-openshift-5555647bc4-jqrjc"
Nov 29 07:10:52 crc kubenswrapper[4731]: I1129 07:10:52.370794 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/082d0154-14ee-450b-a771-c873c7552e7d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5555647bc4-jqrjc\" (UID: \"082d0154-14ee-450b-a771-c873c7552e7d\") " pod="openshift-authentication/oauth-openshift-5555647bc4-jqrjc"
Nov 29 07:10:52 crc kubenswrapper[4731]: I1129 07:10:52.372463 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/082d0154-14ee-450b-a771-c873c7552e7d-audit-dir\") pod \"oauth-openshift-5555647bc4-jqrjc\" (UID: \"082d0154-14ee-450b-a771-c873c7552e7d\") " pod="openshift-authentication/oauth-openshift-5555647bc4-jqrjc"
Nov 29 07:10:52 crc kubenswrapper[4731]: I1129 07:10:52.373285 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/082d0154-14ee-450b-a771-c873c7552e7d-v4-0-config-system-service-ca\") pod \"oauth-openshift-5555647bc4-jqrjc\" (UID: \"082d0154-14ee-450b-a771-c873c7552e7d\") " pod="openshift-authentication/oauth-openshift-5555647bc4-jqrjc"
Nov 29 07:10:52 crc kubenswrapper[4731]: I1129 07:10:52.373533 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/082d0154-14ee-450b-a771-c873c7552e7d-audit-policies\") pod \"oauth-openshift-5555647bc4-jqrjc\" (UID: \"082d0154-14ee-450b-a771-c873c7552e7d\") " pod="openshift-authentication/oauth-openshift-5555647bc4-jqrjc"
Nov 29 07:10:52 crc kubenswrapper[4731]: I1129 07:10:52.374093 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/082d0154-14ee-450b-a771-c873c7552e7d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5555647bc4-jqrjc\" (UID: \"082d0154-14ee-450b-a771-c873c7552e7d\") " pod="openshift-authentication/oauth-openshift-5555647bc4-jqrjc"
Nov 29 07:10:52 crc kubenswrapper[4731]: I1129 07:10:52.374147 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/082d0154-14ee-450b-a771-c873c7552e7d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5555647bc4-jqrjc\" (UID: \"082d0154-14ee-450b-a771-c873c7552e7d\") " pod="openshift-authentication/oauth-openshift-5555647bc4-jqrjc"
Nov 29 07:10:52 crc kubenswrapper[4731]: I1129 07:10:52.377264 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Nov 29 07:10:52 crc kubenswrapper[4731]: I1129 07:10:52.378282 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/082d0154-14ee-450b-a771-c873c7552e7d-v4-0-config-system-router-certs\") pod \"oauth-openshift-5555647bc4-jqrjc\" (UID: \"082d0154-14ee-450b-a771-c873c7552e7d\") " pod="openshift-authentication/oauth-openshift-5555647bc4-jqrjc"
Nov 29 07:10:52 crc kubenswrapper[4731]: I1129 07:10:52.379433 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/082d0154-14ee-450b-a771-c873c7552e7d-v4-0-config-system-session\") pod \"oauth-openshift-5555647bc4-jqrjc\" (UID: \"082d0154-14ee-450b-a771-c873c7552e7d\") " pod="openshift-authentication/oauth-openshift-5555647bc4-jqrjc"
Nov 29 07:10:52 crc kubenswrapper[4731]: I1129 07:10:52.379437 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/082d0154-14ee-450b-a771-c873c7552e7d-v4-0-config-user-template-login\") pod \"oauth-openshift-5555647bc4-jqrjc\" (UID: \"082d0154-14ee-450b-a771-c873c7552e7d\") " pod="openshift-authentication/oauth-openshift-5555647bc4-jqrjc"
Nov 29 07:10:52 crc kubenswrapper[4731]: I1129 07:10:52.379613 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/082d0154-14ee-450b-a771-c873c7552e7d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5555647bc4-jqrjc\" (UID: \"082d0154-14ee-450b-a771-c873c7552e7d\") " pod="openshift-authentication/oauth-openshift-5555647bc4-jqrjc"
Nov 29 07:10:52 crc kubenswrapper[4731]: I1129 07:10:52.380244 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/082d0154-14ee-450b-a771-c873c7552e7d-v4-0-config-user-template-error\") pod \"oauth-openshift-5555647bc4-jqrjc\" (UID: \"082d0154-14ee-450b-a771-c873c7552e7d\") " pod="openshift-authentication/oauth-openshift-5555647bc4-jqrjc"
Nov 29 07:10:52 crc kubenswrapper[4731]: I1129 07:10:52.380880 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/082d0154-14ee-450b-a771-c873c7552e7d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5555647bc4-jqrjc\" (UID: \"082d0154-14ee-450b-a771-c873c7552e7d\") " pod="openshift-authentication/oauth-openshift-5555647bc4-jqrjc"
Nov 29 07:10:52 crc kubenswrapper[4731]: I1129 07:10:52.383202 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/082d0154-14ee-450b-a771-c873c7552e7d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5555647bc4-jqrjc\" (UID: \"082d0154-14ee-450b-a771-c873c7552e7d\") " pod="openshift-authentication/oauth-openshift-5555647bc4-jqrjc"
Nov 29 07:10:52 crc kubenswrapper[4731]: I1129 07:10:52.383226 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Nov 29 07:10:52 crc kubenswrapper[4731]: I1129 07:10:52.383717 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/082d0154-14ee-450b-a771-c873c7552e7d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5555647bc4-jqrjc\" (UID: \"082d0154-14ee-450b-a771-c873c7552e7d\") " pod="openshift-authentication/oauth-openshift-5555647bc4-jqrjc"
Nov 29 07:10:52 crc kubenswrapper[4731]: I1129 07:10:52.389828 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cq7dq\" (UniqueName: \"kubernetes.io/projected/082d0154-14ee-450b-a771-c873c7552e7d-kube-api-access-cq7dq\") pod \"oauth-openshift-5555647bc4-jqrjc\" (UID: \"082d0154-14ee-450b-a771-c873c7552e7d\") " pod="openshift-authentication/oauth-openshift-5555647bc4-jqrjc"
Nov 29 07:10:52 crc kubenswrapper[4731]: I1129 07:10:52.446331 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Nov 29 07:10:52 crc kubenswrapper[4731]: I1129 07:10:52.510465 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-5555647bc4-jqrjc"
Nov 29 07:10:52 crc kubenswrapper[4731]: I1129 07:10:52.526949 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Nov 29 07:10:52 crc kubenswrapper[4731]: I1129 07:10:52.544316 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Nov 29 07:10:52 crc kubenswrapper[4731]: I1129 07:10:52.597969 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Nov 29 07:10:52 crc kubenswrapper[4731]: I1129 07:10:52.747965 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5555647bc4-jqrjc"]
Nov 29 07:10:52 crc kubenswrapper[4731]: I1129 07:10:52.780882 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Nov 29 07:10:52 crc kubenswrapper[4731]: I1129 07:10:52.791282 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Nov 29 07:10:52 crc kubenswrapper[4731]: I1129 07:10:52.796028 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Nov 29 07:10:52 crc kubenswrapper[4731]: I1129 07:10:52.798484 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Nov 29 07:10:52 crc kubenswrapper[4731]: I1129 07:10:52.877463 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Nov 29 07:10:52 crc kubenswrapper[4731]: I1129 07:10:52.977457 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Nov 29 07:10:53 crc kubenswrapper[4731]: I1129 07:10:53.039630 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Nov 29 07:10:53 crc kubenswrapper[4731]: I1129 07:10:53.065103 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Nov 29 07:10:53 crc kubenswrapper[4731]: I1129 07:10:53.224474 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Nov 29 07:10:53 crc kubenswrapper[4731]: I1129 07:10:53.256386 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Nov 29 07:10:53 crc kubenswrapper[4731]: I1129 07:10:53.375188 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Nov 29 07:10:53 crc kubenswrapper[4731]: I1129 07:10:53.409529 4731
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Nov 29 07:10:53 crc kubenswrapper[4731]: I1129 07:10:53.549287 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Nov 29 07:10:53 crc kubenswrapper[4731]: I1129 07:10:53.551142 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Nov 29 07:10:53 crc kubenswrapper[4731]: I1129 07:10:53.562065 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5555647bc4-jqrjc" event={"ID":"082d0154-14ee-450b-a771-c873c7552e7d","Type":"ContainerStarted","Data":"ddb5cd9b02ae80a881c9e8acad2493b541d4aa1e5b497dd5819d746cc2f7b101"} Nov 29 07:10:53 crc kubenswrapper[4731]: I1129 07:10:53.562269 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-5555647bc4-jqrjc" Nov 29 07:10:53 crc kubenswrapper[4731]: I1129 07:10:53.562304 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5555647bc4-jqrjc" event={"ID":"082d0154-14ee-450b-a771-c873c7552e7d","Type":"ContainerStarted","Data":"45169fdbfdfe7f5db1efc367699f729b654347ce561852780a860348b9978c10"} Nov 29 07:10:53 crc kubenswrapper[4731]: I1129 07:10:53.569927 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-5555647bc4-jqrjc" Nov 29 07:10:53 crc kubenswrapper[4731]: I1129 07:10:53.607277 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-5555647bc4-jqrjc" podStartSLOduration=46.60724881 podStartE2EDuration="46.60724881s" podCreationTimestamp="2025-11-29 07:10:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:10:53.587729342 +0000 UTC m=+292.478090465" watchObservedRunningTime="2025-11-29 07:10:53.60724881 +0000 UTC m=+292.497609913" Nov 29 07:10:53 crc kubenswrapper[4731]: I1129 07:10:53.640144 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Nov 29 07:10:53 crc kubenswrapper[4731]: I1129 07:10:53.742715 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Nov 29 07:10:53 crc kubenswrapper[4731]: I1129 07:10:53.790643 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 29 07:10:53 crc kubenswrapper[4731]: I1129 07:10:53.815337 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7cde9a9c-1d79-4400-8830-69f304229886" path="/var/lib/kubelet/pods/7cde9a9c-1d79-4400-8830-69f304229886/volumes" Nov 29 07:10:53 crc kubenswrapper[4731]: I1129 07:10:53.817059 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84b54257-ab5f-4f89-8ff2-5f725c4b8662" path="/var/lib/kubelet/pods/84b54257-ab5f-4f89-8ff2-5f725c4b8662/volumes" Nov 29 07:10:53 crc kubenswrapper[4731]: I1129 07:10:53.818124 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d639491c-0fbd-44a6-b273-37dcc1e5681d" path="/var/lib/kubelet/pods/d639491c-0fbd-44a6-b273-37dcc1e5681d/volumes" Nov 29 07:10:53 crc kubenswrapper[4731]: I1129 07:10:53.890051 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Nov 29 07:10:54 crc kubenswrapper[4731]: I1129 07:10:54.005243 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Nov 29 07:10:54 crc kubenswrapper[4731]: I1129 07:10:54.156074 4731 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-route-controller-manager"/"serving-cert" Nov 29 07:10:54 crc kubenswrapper[4731]: I1129 07:10:54.207519 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Nov 29 07:10:54 crc kubenswrapper[4731]: I1129 07:10:54.235281 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Nov 29 07:10:54 crc kubenswrapper[4731]: I1129 07:10:54.237234 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Nov 29 07:10:54 crc kubenswrapper[4731]: I1129 07:10:54.296196 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Nov 29 07:10:54 crc kubenswrapper[4731]: I1129 07:10:54.376044 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Nov 29 07:10:54 crc kubenswrapper[4731]: I1129 07:10:54.430179 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Nov 29 07:10:54 crc kubenswrapper[4731]: I1129 07:10:54.544100 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Nov 29 07:10:54 crc kubenswrapper[4731]: I1129 07:10:54.615732 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Nov 29 07:10:54 crc kubenswrapper[4731]: I1129 07:10:54.736132 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Nov 29 07:10:54 crc kubenswrapper[4731]: I1129 07:10:54.794411 4731 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Nov 29 07:10:54 crc kubenswrapper[4731]: I1129 
07:10:54.856371 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Nov 29 07:10:54 crc kubenswrapper[4731]: I1129 07:10:54.857802 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Nov 29 07:10:54 crc kubenswrapper[4731]: I1129 07:10:54.864605 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Nov 29 07:10:55 crc kubenswrapper[4731]: I1129 07:10:55.010410 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Nov 29 07:10:55 crc kubenswrapper[4731]: I1129 07:10:55.032340 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Nov 29 07:10:55 crc kubenswrapper[4731]: I1129 07:10:55.126442 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Nov 29 07:10:55 crc kubenswrapper[4731]: I1129 07:10:55.139256 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Nov 29 07:10:55 crc kubenswrapper[4731]: I1129 07:10:55.188634 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Nov 29 07:10:55 crc kubenswrapper[4731]: I1129 07:10:55.216508 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Nov 29 07:10:55 crc kubenswrapper[4731]: I1129 07:10:55.337460 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Nov 29 07:10:55 crc kubenswrapper[4731]: I1129 07:10:55.381131 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 29 07:10:55 crc 
kubenswrapper[4731]: I1129 07:10:55.570545 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Nov 29 07:10:55 crc kubenswrapper[4731]: I1129 07:10:55.678552 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Nov 29 07:10:55 crc kubenswrapper[4731]: I1129 07:10:55.744353 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Nov 29 07:10:55 crc kubenswrapper[4731]: I1129 07:10:55.945125 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Nov 29 07:10:55 crc kubenswrapper[4731]: I1129 07:10:55.989779 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Nov 29 07:10:56 crc kubenswrapper[4731]: I1129 07:10:56.202438 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Nov 29 07:10:56 crc kubenswrapper[4731]: I1129 07:10:56.339895 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Nov 29 07:10:56 crc kubenswrapper[4731]: I1129 07:10:56.349593 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Nov 29 07:10:56 crc kubenswrapper[4731]: I1129 07:10:56.522735 4731 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Nov 29 07:10:56 crc kubenswrapper[4731]: I1129 07:10:56.575438 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Nov 29 07:10:57 crc kubenswrapper[4731]: I1129 07:10:57.090454 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Nov 29 07:10:57 crc 
kubenswrapper[4731]: I1129 07:10:57.105871 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Nov 29 07:10:57 crc kubenswrapper[4731]: I1129 07:10:57.621050 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Nov 29 07:10:57 crc kubenswrapper[4731]: I1129 07:10:57.707434 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Nov 29 07:10:57 crc kubenswrapper[4731]: I1129 07:10:57.865318 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Nov 29 07:10:57 crc kubenswrapper[4731]: I1129 07:10:57.970319 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Nov 29 07:10:58 crc kubenswrapper[4731]: I1129 07:10:58.016341 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Nov 29 07:10:58 crc kubenswrapper[4731]: I1129 07:10:58.082023 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Nov 29 07:10:58 crc kubenswrapper[4731]: I1129 07:10:58.341469 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Nov 29 07:10:58 crc kubenswrapper[4731]: I1129 07:10:58.431090 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Nov 29 07:11:03 crc kubenswrapper[4731]: I1129 07:11:03.402168 4731 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Nov 29 07:11:03 crc kubenswrapper[4731]: I1129 07:11:03.403227 4731 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://0a05aae236484010ce3b2cc3d0b2416d6751f94f6c50351b19102354e69610c3" gracePeriod=5 Nov 29 07:11:08 crc kubenswrapper[4731]: I1129 07:11:08.656735 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Nov 29 07:11:08 crc kubenswrapper[4731]: I1129 07:11:08.657262 4731 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="0a05aae236484010ce3b2cc3d0b2416d6751f94f6c50351b19102354e69610c3" exitCode=137 Nov 29 07:11:08 crc kubenswrapper[4731]: I1129 07:11:08.976863 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Nov 29 07:11:08 crc kubenswrapper[4731]: I1129 07:11:08.976961 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 29 07:11:09 crc kubenswrapper[4731]: I1129 07:11:09.133183 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 29 07:11:09 crc kubenswrapper[4731]: I1129 07:11:09.133373 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 29 07:11:09 crc kubenswrapper[4731]: I1129 07:11:09.133445 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 29 07:11:09 crc kubenswrapper[4731]: I1129 07:11:09.133476 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 29 07:11:09 crc kubenswrapper[4731]: I1129 07:11:09.133542 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 29 07:11:09 crc kubenswrapper[4731]: I1129 07:11:09.133357 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: 
"manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:11:09 crc kubenswrapper[4731]: I1129 07:11:09.133519 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:11:09 crc kubenswrapper[4731]: I1129 07:11:09.133523 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:11:09 crc kubenswrapper[4731]: I1129 07:11:09.133935 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:11:09 crc kubenswrapper[4731]: I1129 07:11:09.144018 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:11:09 crc kubenswrapper[4731]: I1129 07:11:09.234939 4731 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Nov 29 07:11:09 crc kubenswrapper[4731]: I1129 07:11:09.234984 4731 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Nov 29 07:11:09 crc kubenswrapper[4731]: I1129 07:11:09.234996 4731 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Nov 29 07:11:09 crc kubenswrapper[4731]: I1129 07:11:09.235010 4731 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Nov 29 07:11:09 crc kubenswrapper[4731]: I1129 07:11:09.235019 4731 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Nov 29 07:11:09 crc kubenswrapper[4731]: I1129 07:11:09.666557 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Nov 29 07:11:09 crc kubenswrapper[4731]: I1129 07:11:09.667094 4731 scope.go:117] "RemoveContainer" containerID="0a05aae236484010ce3b2cc3d0b2416d6751f94f6c50351b19102354e69610c3" Nov 29 07:11:09 crc kubenswrapper[4731]: I1129 07:11:09.667247 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 29 07:11:09 crc kubenswrapper[4731]: I1129 07:11:09.816505 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Nov 29 07:11:09 crc kubenswrapper[4731]: I1129 07:11:09.816875 4731 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Nov 29 07:11:09 crc kubenswrapper[4731]: I1129 07:11:09.829268 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Nov 29 07:11:09 crc kubenswrapper[4731]: I1129 07:11:09.829316 4731 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="42865164-ae3d-4f6b-8da2-1d6d4f656a35" Nov 29 07:11:09 crc kubenswrapper[4731]: I1129 07:11:09.834904 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Nov 29 07:11:09 crc kubenswrapper[4731]: I1129 07:11:09.834975 4731 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="42865164-ae3d-4f6b-8da2-1d6d4f656a35" Nov 29 07:11:17 crc kubenswrapper[4731]: I1129 07:11:17.717991 4731 generic.go:334] "Generic (PLEG): container finished" podID="8f435c3d-3db2-44dc-8a50-ea8f9475daa0" containerID="554048ef46b8c551becfb76f96eecd2c7c6785e00a1739dac1dcc22fc89dd27d" exitCode=0 Nov 29 07:11:17 crc kubenswrapper[4731]: I1129 07:11:17.718079 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-2qgzh" event={"ID":"8f435c3d-3db2-44dc-8a50-ea8f9475daa0","Type":"ContainerDied","Data":"554048ef46b8c551becfb76f96eecd2c7c6785e00a1739dac1dcc22fc89dd27d"} Nov 29 
07:11:17 crc kubenswrapper[4731]: I1129 07:11:17.719194 4731 scope.go:117] "RemoveContainer" containerID="554048ef46b8c551becfb76f96eecd2c7c6785e00a1739dac1dcc22fc89dd27d" Nov 29 07:11:18 crc kubenswrapper[4731]: I1129 07:11:18.726813 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Nov 29 07:11:18 crc kubenswrapper[4731]: I1129 07:11:18.728833 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Nov 29 07:11:18 crc kubenswrapper[4731]: I1129 07:11:18.728881 4731 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="74dea9a83a7412060bffc5f970458e9ff9e3d4f409e15fdce2e159da38e5ceb8" exitCode=137 Nov 29 07:11:18 crc kubenswrapper[4731]: I1129 07:11:18.728947 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"74dea9a83a7412060bffc5f970458e9ff9e3d4f409e15fdce2e159da38e5ceb8"} Nov 29 07:11:18 crc kubenswrapper[4731]: I1129 07:11:18.728980 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"868a38a448d859151e8fe69200e2d5ac50299cf3aed7b436d5a29b864c868ec2"} Nov 29 07:11:18 crc kubenswrapper[4731]: I1129 07:11:18.728997 4731 scope.go:117] "RemoveContainer" containerID="6101f1cec3dc38ec587fb109d42b09c353cf5ed6d89c9a2b799eb456b821cf58" Nov 29 07:11:18 crc kubenswrapper[4731]: I1129 07:11:18.731215 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-2qgzh" 
event={"ID":"8f435c3d-3db2-44dc-8a50-ea8f9475daa0","Type":"ContainerStarted","Data":"ea866a9d60d9083013cbba65a5bf8f68f30a7dd24030c955c7c2b18497870e69"} Nov 29 07:11:18 crc kubenswrapper[4731]: I1129 07:11:18.731598 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-2qgzh" Nov 29 07:11:18 crc kubenswrapper[4731]: I1129 07:11:18.736918 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-2qgzh" Nov 29 07:11:19 crc kubenswrapper[4731]: I1129 07:11:19.740808 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Nov 29 07:11:21 crc kubenswrapper[4731]: I1129 07:11:21.705373 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 29 07:11:24 crc kubenswrapper[4731]: I1129 07:11:24.755960 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gv68n"] Nov 29 07:11:24 crc kubenswrapper[4731]: I1129 07:11:24.756660 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-gv68n" podUID="ee11152f-267c-4a04-bd4b-84eec0eff00e" containerName="registry-server" containerID="cri-o://e9476009f63efc74acb040cdbc3c0876ae694febdc7c2d749a6da397bc5cea6d" gracePeriod=2 Nov 29 07:11:25 crc kubenswrapper[4731]: I1129 07:11:25.112544 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gv68n" Nov 29 07:11:25 crc kubenswrapper[4731]: I1129 07:11:25.276538 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee11152f-267c-4a04-bd4b-84eec0eff00e-utilities\") pod \"ee11152f-267c-4a04-bd4b-84eec0eff00e\" (UID: \"ee11152f-267c-4a04-bd4b-84eec0eff00e\") " Nov 29 07:11:25 crc kubenswrapper[4731]: I1129 07:11:25.276700 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tzkzg\" (UniqueName: \"kubernetes.io/projected/ee11152f-267c-4a04-bd4b-84eec0eff00e-kube-api-access-tzkzg\") pod \"ee11152f-267c-4a04-bd4b-84eec0eff00e\" (UID: \"ee11152f-267c-4a04-bd4b-84eec0eff00e\") " Nov 29 07:11:25 crc kubenswrapper[4731]: I1129 07:11:25.276746 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee11152f-267c-4a04-bd4b-84eec0eff00e-catalog-content\") pod \"ee11152f-267c-4a04-bd4b-84eec0eff00e\" (UID: \"ee11152f-267c-4a04-bd4b-84eec0eff00e\") " Nov 29 07:11:25 crc kubenswrapper[4731]: I1129 07:11:25.278766 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee11152f-267c-4a04-bd4b-84eec0eff00e-utilities" (OuterVolumeSpecName: "utilities") pod "ee11152f-267c-4a04-bd4b-84eec0eff00e" (UID: "ee11152f-267c-4a04-bd4b-84eec0eff00e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:11:25 crc kubenswrapper[4731]: I1129 07:11:25.284163 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee11152f-267c-4a04-bd4b-84eec0eff00e-kube-api-access-tzkzg" (OuterVolumeSpecName: "kube-api-access-tzkzg") pod "ee11152f-267c-4a04-bd4b-84eec0eff00e" (UID: "ee11152f-267c-4a04-bd4b-84eec0eff00e"). InnerVolumeSpecName "kube-api-access-tzkzg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:11:25 crc kubenswrapper[4731]: I1129 07:11:25.378489 4731 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee11152f-267c-4a04-bd4b-84eec0eff00e-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 07:11:25 crc kubenswrapper[4731]: I1129 07:11:25.378537 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tzkzg\" (UniqueName: \"kubernetes.io/projected/ee11152f-267c-4a04-bd4b-84eec0eff00e-kube-api-access-tzkzg\") on node \"crc\" DevicePath \"\"" Nov 29 07:11:25 crc kubenswrapper[4731]: I1129 07:11:25.404897 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee11152f-267c-4a04-bd4b-84eec0eff00e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ee11152f-267c-4a04-bd4b-84eec0eff00e" (UID: "ee11152f-267c-4a04-bd4b-84eec0eff00e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:11:25 crc kubenswrapper[4731]: I1129 07:11:25.479957 4731 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee11152f-267c-4a04-bd4b-84eec0eff00e-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 07:11:25 crc kubenswrapper[4731]: I1129 07:11:25.784786 4731 generic.go:334] "Generic (PLEG): container finished" podID="ee11152f-267c-4a04-bd4b-84eec0eff00e" containerID="e9476009f63efc74acb040cdbc3c0876ae694febdc7c2d749a6da397bc5cea6d" exitCode=0 Nov 29 07:11:25 crc kubenswrapper[4731]: I1129 07:11:25.784880 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gv68n" Nov 29 07:11:25 crc kubenswrapper[4731]: I1129 07:11:25.784848 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gv68n" event={"ID":"ee11152f-267c-4a04-bd4b-84eec0eff00e","Type":"ContainerDied","Data":"e9476009f63efc74acb040cdbc3c0876ae694febdc7c2d749a6da397bc5cea6d"} Nov 29 07:11:25 crc kubenswrapper[4731]: I1129 07:11:25.785033 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gv68n" event={"ID":"ee11152f-267c-4a04-bd4b-84eec0eff00e","Type":"ContainerDied","Data":"2a30e558ac8dcade4a2950433ebbf28dc1df2aafe97baffc7afeb3faf9eb7426"} Nov 29 07:11:25 crc kubenswrapper[4731]: I1129 07:11:25.785068 4731 scope.go:117] "RemoveContainer" containerID="e9476009f63efc74acb040cdbc3c0876ae694febdc7c2d749a6da397bc5cea6d" Nov 29 07:11:25 crc kubenswrapper[4731]: I1129 07:11:25.804470 4731 scope.go:117] "RemoveContainer" containerID="4faede216427633422a99a69172064107f9432bf82d4f97e323e125f262b33ed" Nov 29 07:11:25 crc kubenswrapper[4731]: I1129 07:11:25.824770 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gv68n"] Nov 29 07:11:25 crc kubenswrapper[4731]: I1129 07:11:25.830730 4731 scope.go:117] "RemoveContainer" containerID="5b3e013eaa497bab6d28e822381a1819270b6fc1d266227b461e88f4b1d786ed" Nov 29 07:11:25 crc kubenswrapper[4731]: I1129 07:11:25.831005 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-gv68n"] Nov 29 07:11:25 crc kubenswrapper[4731]: I1129 07:11:25.855343 4731 scope.go:117] "RemoveContainer" containerID="e9476009f63efc74acb040cdbc3c0876ae694febdc7c2d749a6da397bc5cea6d" Nov 29 07:11:25 crc kubenswrapper[4731]: E1129 07:11:25.857170 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"e9476009f63efc74acb040cdbc3c0876ae694febdc7c2d749a6da397bc5cea6d\": container with ID starting with e9476009f63efc74acb040cdbc3c0876ae694febdc7c2d749a6da397bc5cea6d not found: ID does not exist" containerID="e9476009f63efc74acb040cdbc3c0876ae694febdc7c2d749a6da397bc5cea6d" Nov 29 07:11:25 crc kubenswrapper[4731]: I1129 07:11:25.857213 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9476009f63efc74acb040cdbc3c0876ae694febdc7c2d749a6da397bc5cea6d"} err="failed to get container status \"e9476009f63efc74acb040cdbc3c0876ae694febdc7c2d749a6da397bc5cea6d\": rpc error: code = NotFound desc = could not find container \"e9476009f63efc74acb040cdbc3c0876ae694febdc7c2d749a6da397bc5cea6d\": container with ID starting with e9476009f63efc74acb040cdbc3c0876ae694febdc7c2d749a6da397bc5cea6d not found: ID does not exist" Nov 29 07:11:25 crc kubenswrapper[4731]: I1129 07:11:25.857239 4731 scope.go:117] "RemoveContainer" containerID="4faede216427633422a99a69172064107f9432bf82d4f97e323e125f262b33ed" Nov 29 07:11:25 crc kubenswrapper[4731]: E1129 07:11:25.858043 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4faede216427633422a99a69172064107f9432bf82d4f97e323e125f262b33ed\": container with ID starting with 4faede216427633422a99a69172064107f9432bf82d4f97e323e125f262b33ed not found: ID does not exist" containerID="4faede216427633422a99a69172064107f9432bf82d4f97e323e125f262b33ed" Nov 29 07:11:25 crc kubenswrapper[4731]: I1129 07:11:25.858113 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4faede216427633422a99a69172064107f9432bf82d4f97e323e125f262b33ed"} err="failed to get container status \"4faede216427633422a99a69172064107f9432bf82d4f97e323e125f262b33ed\": rpc error: code = NotFound desc = could not find container \"4faede216427633422a99a69172064107f9432bf82d4f97e323e125f262b33ed\": container with ID 
starting with 4faede216427633422a99a69172064107f9432bf82d4f97e323e125f262b33ed not found: ID does not exist" Nov 29 07:11:25 crc kubenswrapper[4731]: I1129 07:11:25.858162 4731 scope.go:117] "RemoveContainer" containerID="5b3e013eaa497bab6d28e822381a1819270b6fc1d266227b461e88f4b1d786ed" Nov 29 07:11:25 crc kubenswrapper[4731]: E1129 07:11:25.858697 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b3e013eaa497bab6d28e822381a1819270b6fc1d266227b461e88f4b1d786ed\": container with ID starting with 5b3e013eaa497bab6d28e822381a1819270b6fc1d266227b461e88f4b1d786ed not found: ID does not exist" containerID="5b3e013eaa497bab6d28e822381a1819270b6fc1d266227b461e88f4b1d786ed" Nov 29 07:11:25 crc kubenswrapper[4731]: I1129 07:11:25.858737 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b3e013eaa497bab6d28e822381a1819270b6fc1d266227b461e88f4b1d786ed"} err="failed to get container status \"5b3e013eaa497bab6d28e822381a1819270b6fc1d266227b461e88f4b1d786ed\": rpc error: code = NotFound desc = could not find container \"5b3e013eaa497bab6d28e822381a1819270b6fc1d266227b461e88f4b1d786ed\": container with ID starting with 5b3e013eaa497bab6d28e822381a1819270b6fc1d266227b461e88f4b1d786ed not found: ID does not exist" Nov 29 07:11:27 crc kubenswrapper[4731]: I1129 07:11:27.843205 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee11152f-267c-4a04-bd4b-84eec0eff00e" path="/var/lib/kubelet/pods/ee11152f-267c-4a04-bd4b-84eec0eff00e/volumes" Nov 29 07:11:28 crc kubenswrapper[4731]: I1129 07:11:28.335654 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 29 07:11:28 crc kubenswrapper[4731]: I1129 07:11:28.342522 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 29 07:11:28 crc kubenswrapper[4731]: I1129 07:11:28.807374 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 29 07:11:33 crc kubenswrapper[4731]: I1129 07:11:33.002429 4731 patch_prober.go:28] interesting pod/machine-config-daemon-rscr8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:11:33 crc kubenswrapper[4731]: I1129 07:11:33.002722 4731 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:11:47 crc kubenswrapper[4731]: I1129 07:11:47.167333 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-kwjhb"] Nov 29 07:11:47 crc kubenswrapper[4731]: I1129 07:11:47.168474 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kwjhb" podUID="aa040abb-6524-4abd-834f-18b72a623d16" containerName="route-controller-manager" containerID="cri-o://5192bb263495f00e5732a6dd207ca5b69f514e7f8b1dfd6944c5329abe43852e" gracePeriod=30 Nov 29 07:11:47 crc kubenswrapper[4731]: I1129 07:11:47.177356 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-4scbk"] Nov 29 07:11:47 crc kubenswrapper[4731]: I1129 07:11:47.177662 4731 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-controller-manager/controller-manager-879f6c89f-4scbk" podUID="914f7ecc-b403-4f7e-9a14-3f56a5a256a9" containerName="controller-manager" containerID="cri-o://50b5f117a4792262b555f6a404bea2ac5bf8be1c611bd576da57941d9f65ddc2" gracePeriod=30 Nov 29 07:11:47 crc kubenswrapper[4731]: I1129 07:11:47.246505 4731 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-kwjhb container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Nov 29 07:11:47 crc kubenswrapper[4731]: I1129 07:11:47.246615 4731 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kwjhb" podUID="aa040abb-6524-4abd-834f-18b72a623d16" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" Nov 29 07:11:47 crc kubenswrapper[4731]: I1129 07:11:47.927920 4731 generic.go:334] "Generic (PLEG): container finished" podID="aa040abb-6524-4abd-834f-18b72a623d16" containerID="5192bb263495f00e5732a6dd207ca5b69f514e7f8b1dfd6944c5329abe43852e" exitCode=0 Nov 29 07:11:47 crc kubenswrapper[4731]: I1129 07:11:47.930382 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kwjhb" event={"ID":"aa040abb-6524-4abd-834f-18b72a623d16","Type":"ContainerDied","Data":"5192bb263495f00e5732a6dd207ca5b69f514e7f8b1dfd6944c5329abe43852e"} Nov 29 07:11:47 crc kubenswrapper[4731]: I1129 07:11:47.933376 4731 generic.go:334] "Generic (PLEG): container finished" podID="914f7ecc-b403-4f7e-9a14-3f56a5a256a9" containerID="50b5f117a4792262b555f6a404bea2ac5bf8be1c611bd576da57941d9f65ddc2" exitCode=0 Nov 29 07:11:47 crc kubenswrapper[4731]: I1129 07:11:47.933416 4731 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-4scbk" event={"ID":"914f7ecc-b403-4f7e-9a14-3f56a5a256a9","Type":"ContainerDied","Data":"50b5f117a4792262b555f6a404bea2ac5bf8be1c611bd576da57941d9f65ddc2"} Nov 29 07:11:48 crc kubenswrapper[4731]: I1129 07:11:48.079880 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-4scbk" Nov 29 07:11:48 crc kubenswrapper[4731]: I1129 07:11:48.083806 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kwjhb" Nov 29 07:11:48 crc kubenswrapper[4731]: I1129 07:11:48.133245 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aa040abb-6524-4abd-834f-18b72a623d16-serving-cert\") pod \"aa040abb-6524-4abd-834f-18b72a623d16\" (UID: \"aa040abb-6524-4abd-834f-18b72a623d16\") " Nov 29 07:11:48 crc kubenswrapper[4731]: I1129 07:11:48.133336 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/914f7ecc-b403-4f7e-9a14-3f56a5a256a9-client-ca\") pod \"914f7ecc-b403-4f7e-9a14-3f56a5a256a9\" (UID: \"914f7ecc-b403-4f7e-9a14-3f56a5a256a9\") " Nov 29 07:11:48 crc kubenswrapper[4731]: I1129 07:11:48.133378 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/914f7ecc-b403-4f7e-9a14-3f56a5a256a9-config\") pod \"914f7ecc-b403-4f7e-9a14-3f56a5a256a9\" (UID: \"914f7ecc-b403-4f7e-9a14-3f56a5a256a9\") " Nov 29 07:11:48 crc kubenswrapper[4731]: I1129 07:11:48.133452 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa040abb-6524-4abd-834f-18b72a623d16-config\") pod 
\"aa040abb-6524-4abd-834f-18b72a623d16\" (UID: \"aa040abb-6524-4abd-834f-18b72a623d16\") " Nov 29 07:11:48 crc kubenswrapper[4731]: I1129 07:11:48.133488 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aa040abb-6524-4abd-834f-18b72a623d16-client-ca\") pod \"aa040abb-6524-4abd-834f-18b72a623d16\" (UID: \"aa040abb-6524-4abd-834f-18b72a623d16\") " Nov 29 07:11:48 crc kubenswrapper[4731]: I1129 07:11:48.133588 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/914f7ecc-b403-4f7e-9a14-3f56a5a256a9-serving-cert\") pod \"914f7ecc-b403-4f7e-9a14-3f56a5a256a9\" (UID: \"914f7ecc-b403-4f7e-9a14-3f56a5a256a9\") " Nov 29 07:11:48 crc kubenswrapper[4731]: I1129 07:11:48.133628 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2dzm9\" (UniqueName: \"kubernetes.io/projected/aa040abb-6524-4abd-834f-18b72a623d16-kube-api-access-2dzm9\") pod \"aa040abb-6524-4abd-834f-18b72a623d16\" (UID: \"aa040abb-6524-4abd-834f-18b72a623d16\") " Nov 29 07:11:48 crc kubenswrapper[4731]: I1129 07:11:48.133683 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/914f7ecc-b403-4f7e-9a14-3f56a5a256a9-proxy-ca-bundles\") pod \"914f7ecc-b403-4f7e-9a14-3f56a5a256a9\" (UID: \"914f7ecc-b403-4f7e-9a14-3f56a5a256a9\") " Nov 29 07:11:48 crc kubenswrapper[4731]: I1129 07:11:48.133727 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vm98c\" (UniqueName: \"kubernetes.io/projected/914f7ecc-b403-4f7e-9a14-3f56a5a256a9-kube-api-access-vm98c\") pod \"914f7ecc-b403-4f7e-9a14-3f56a5a256a9\" (UID: \"914f7ecc-b403-4f7e-9a14-3f56a5a256a9\") " Nov 29 07:11:48 crc kubenswrapper[4731]: I1129 07:11:48.134585 4731 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/configmap/aa040abb-6524-4abd-834f-18b72a623d16-client-ca" (OuterVolumeSpecName: "client-ca") pod "aa040abb-6524-4abd-834f-18b72a623d16" (UID: "aa040abb-6524-4abd-834f-18b72a623d16"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:11:48 crc kubenswrapper[4731]: I1129 07:11:48.134595 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/914f7ecc-b403-4f7e-9a14-3f56a5a256a9-client-ca" (OuterVolumeSpecName: "client-ca") pod "914f7ecc-b403-4f7e-9a14-3f56a5a256a9" (UID: "914f7ecc-b403-4f7e-9a14-3f56a5a256a9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:11:48 crc kubenswrapper[4731]: I1129 07:11:48.134729 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa040abb-6524-4abd-834f-18b72a623d16-config" (OuterVolumeSpecName: "config") pod "aa040abb-6524-4abd-834f-18b72a623d16" (UID: "aa040abb-6524-4abd-834f-18b72a623d16"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:11:48 crc kubenswrapper[4731]: I1129 07:11:48.135203 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/914f7ecc-b403-4f7e-9a14-3f56a5a256a9-config" (OuterVolumeSpecName: "config") pod "914f7ecc-b403-4f7e-9a14-3f56a5a256a9" (UID: "914f7ecc-b403-4f7e-9a14-3f56a5a256a9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:11:48 crc kubenswrapper[4731]: I1129 07:11:48.135225 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/914f7ecc-b403-4f7e-9a14-3f56a5a256a9-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "914f7ecc-b403-4f7e-9a14-3f56a5a256a9" (UID: "914f7ecc-b403-4f7e-9a14-3f56a5a256a9"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:11:48 crc kubenswrapper[4731]: I1129 07:11:48.140588 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/914f7ecc-b403-4f7e-9a14-3f56a5a256a9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "914f7ecc-b403-4f7e-9a14-3f56a5a256a9" (UID: "914f7ecc-b403-4f7e-9a14-3f56a5a256a9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:11:48 crc kubenswrapper[4731]: I1129 07:11:48.140750 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa040abb-6524-4abd-834f-18b72a623d16-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "aa040abb-6524-4abd-834f-18b72a623d16" (UID: "aa040abb-6524-4abd-834f-18b72a623d16"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:11:48 crc kubenswrapper[4731]: I1129 07:11:48.141475 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/914f7ecc-b403-4f7e-9a14-3f56a5a256a9-kube-api-access-vm98c" (OuterVolumeSpecName: "kube-api-access-vm98c") pod "914f7ecc-b403-4f7e-9a14-3f56a5a256a9" (UID: "914f7ecc-b403-4f7e-9a14-3f56a5a256a9"). InnerVolumeSpecName "kube-api-access-vm98c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:11:48 crc kubenswrapper[4731]: I1129 07:11:48.141985 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa040abb-6524-4abd-834f-18b72a623d16-kube-api-access-2dzm9" (OuterVolumeSpecName: "kube-api-access-2dzm9") pod "aa040abb-6524-4abd-834f-18b72a623d16" (UID: "aa040abb-6524-4abd-834f-18b72a623d16"). InnerVolumeSpecName "kube-api-access-2dzm9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:11:48 crc kubenswrapper[4731]: I1129 07:11:48.234936 4731 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aa040abb-6524-4abd-834f-18b72a623d16-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:11:48 crc kubenswrapper[4731]: I1129 07:11:48.235500 4731 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/914f7ecc-b403-4f7e-9a14-3f56a5a256a9-client-ca\") on node \"crc\" DevicePath \"\"" Nov 29 07:11:48 crc kubenswrapper[4731]: I1129 07:11:48.235522 4731 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/914f7ecc-b403-4f7e-9a14-3f56a5a256a9-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:11:48 crc kubenswrapper[4731]: I1129 07:11:48.235587 4731 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa040abb-6524-4abd-834f-18b72a623d16-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:11:48 crc kubenswrapper[4731]: I1129 07:11:48.235600 4731 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aa040abb-6524-4abd-834f-18b72a623d16-client-ca\") on node \"crc\" DevicePath \"\"" Nov 29 07:11:48 crc kubenswrapper[4731]: I1129 07:11:48.235611 4731 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/914f7ecc-b403-4f7e-9a14-3f56a5a256a9-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:11:48 crc kubenswrapper[4731]: I1129 07:11:48.235636 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2dzm9\" (UniqueName: \"kubernetes.io/projected/aa040abb-6524-4abd-834f-18b72a623d16-kube-api-access-2dzm9\") on node \"crc\" DevicePath \"\"" Nov 29 07:11:48 crc kubenswrapper[4731]: I1129 07:11:48.235650 4731 reconciler_common.go:293] "Volume detached for 
volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/914f7ecc-b403-4f7e-9a14-3f56a5a256a9-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 29 07:11:48 crc kubenswrapper[4731]: I1129 07:11:48.235684 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vm98c\" (UniqueName: \"kubernetes.io/projected/914f7ecc-b403-4f7e-9a14-3f56a5a256a9-kube-api-access-vm98c\") on node \"crc\" DevicePath \"\"" Nov 29 07:11:48 crc kubenswrapper[4731]: I1129 07:11:48.805529 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6ddd799978-rrs4h"] Nov 29 07:11:48 crc kubenswrapper[4731]: E1129 07:11:48.805961 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee11152f-267c-4a04-bd4b-84eec0eff00e" containerName="extract-content" Nov 29 07:11:48 crc kubenswrapper[4731]: I1129 07:11:48.805983 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee11152f-267c-4a04-bd4b-84eec0eff00e" containerName="extract-content" Nov 29 07:11:48 crc kubenswrapper[4731]: E1129 07:11:48.805997 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Nov 29 07:11:48 crc kubenswrapper[4731]: I1129 07:11:48.806005 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Nov 29 07:11:48 crc kubenswrapper[4731]: E1129 07:11:48.806024 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa040abb-6524-4abd-834f-18b72a623d16" containerName="route-controller-manager" Nov 29 07:11:48 crc kubenswrapper[4731]: I1129 07:11:48.806033 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa040abb-6524-4abd-834f-18b72a623d16" containerName="route-controller-manager" Nov 29 07:11:48 crc kubenswrapper[4731]: E1129 07:11:48.806044 4731 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="ee11152f-267c-4a04-bd4b-84eec0eff00e" containerName="extract-utilities" Nov 29 07:11:48 crc kubenswrapper[4731]: I1129 07:11:48.806051 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee11152f-267c-4a04-bd4b-84eec0eff00e" containerName="extract-utilities" Nov 29 07:11:48 crc kubenswrapper[4731]: E1129 07:11:48.806060 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee11152f-267c-4a04-bd4b-84eec0eff00e" containerName="registry-server" Nov 29 07:11:48 crc kubenswrapper[4731]: I1129 07:11:48.806066 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee11152f-267c-4a04-bd4b-84eec0eff00e" containerName="registry-server" Nov 29 07:11:48 crc kubenswrapper[4731]: E1129 07:11:48.806080 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="914f7ecc-b403-4f7e-9a14-3f56a5a256a9" containerName="controller-manager" Nov 29 07:11:48 crc kubenswrapper[4731]: I1129 07:11:48.806086 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="914f7ecc-b403-4f7e-9a14-3f56a5a256a9" containerName="controller-manager" Nov 29 07:11:48 crc kubenswrapper[4731]: I1129 07:11:48.806220 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee11152f-267c-4a04-bd4b-84eec0eff00e" containerName="registry-server" Nov 29 07:11:48 crc kubenswrapper[4731]: I1129 07:11:48.806234 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Nov 29 07:11:48 crc kubenswrapper[4731]: I1129 07:11:48.806242 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="914f7ecc-b403-4f7e-9a14-3f56a5a256a9" containerName="controller-manager" Nov 29 07:11:48 crc kubenswrapper[4731]: I1129 07:11:48.806252 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa040abb-6524-4abd-834f-18b72a623d16" containerName="route-controller-manager" Nov 29 07:11:48 crc kubenswrapper[4731]: I1129 07:11:48.806802 4731 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6ddd799978-rrs4h" Nov 29 07:11:48 crc kubenswrapper[4731]: I1129 07:11:48.809392 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-85c8bf77b8-2pn4s"] Nov 29 07:11:48 crc kubenswrapper[4731]: I1129 07:11:48.810408 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-85c8bf77b8-2pn4s" Nov 29 07:11:48 crc kubenswrapper[4731]: I1129 07:11:48.824410 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6ddd799978-rrs4h"] Nov 29 07:11:48 crc kubenswrapper[4731]: I1129 07:11:48.834060 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-85c8bf77b8-2pn4s"] Nov 29 07:11:48 crc kubenswrapper[4731]: I1129 07:11:48.941022 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kwjhb" event={"ID":"aa040abb-6524-4abd-834f-18b72a623d16","Type":"ContainerDied","Data":"5c51c84780b2c9bc32f72cc9ebed940fe4e18fc0eec03c9c033a278bca948789"} Nov 29 07:11:48 crc kubenswrapper[4731]: I1129 07:11:48.941103 4731 scope.go:117] "RemoveContainer" containerID="5192bb263495f00e5732a6dd207ca5b69f514e7f8b1dfd6944c5329abe43852e" Nov 29 07:11:48 crc kubenswrapper[4731]: I1129 07:11:48.941297 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kwjhb" Nov 29 07:11:48 crc kubenswrapper[4731]: I1129 07:11:48.946527 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a22903cc-22e2-4593-b852-1528c07dad76-client-ca\") pod \"route-controller-manager-6ddd799978-rrs4h\" (UID: \"a22903cc-22e2-4593-b852-1528c07dad76\") " pod="openshift-route-controller-manager/route-controller-manager-6ddd799978-rrs4h" Nov 29 07:11:48 crc kubenswrapper[4731]: I1129 07:11:48.946605 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4dd412da-7572-4bc3-b743-0f04a0099868-client-ca\") pod \"controller-manager-85c8bf77b8-2pn4s\" (UID: \"4dd412da-7572-4bc3-b743-0f04a0099868\") " pod="openshift-controller-manager/controller-manager-85c8bf77b8-2pn4s" Nov 29 07:11:48 crc kubenswrapper[4731]: I1129 07:11:48.946688 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a22903cc-22e2-4593-b852-1528c07dad76-config\") pod \"route-controller-manager-6ddd799978-rrs4h\" (UID: \"a22903cc-22e2-4593-b852-1528c07dad76\") " pod="openshift-route-controller-manager/route-controller-manager-6ddd799978-rrs4h" Nov 29 07:11:48 crc kubenswrapper[4731]: I1129 07:11:48.946705 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4dd412da-7572-4bc3-b743-0f04a0099868-config\") pod \"controller-manager-85c8bf77b8-2pn4s\" (UID: \"4dd412da-7572-4bc3-b743-0f04a0099868\") " pod="openshift-controller-manager/controller-manager-85c8bf77b8-2pn4s" Nov 29 07:11:48 crc kubenswrapper[4731]: I1129 07:11:48.946729 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-bnpcl\" (UniqueName: \"kubernetes.io/projected/a22903cc-22e2-4593-b852-1528c07dad76-kube-api-access-bnpcl\") pod \"route-controller-manager-6ddd799978-rrs4h\" (UID: \"a22903cc-22e2-4593-b852-1528c07dad76\") " pod="openshift-route-controller-manager/route-controller-manager-6ddd799978-rrs4h" Nov 29 07:11:48 crc kubenswrapper[4731]: I1129 07:11:48.946748 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2th2q\" (UniqueName: \"kubernetes.io/projected/4dd412da-7572-4bc3-b743-0f04a0099868-kube-api-access-2th2q\") pod \"controller-manager-85c8bf77b8-2pn4s\" (UID: \"4dd412da-7572-4bc3-b743-0f04a0099868\") " pod="openshift-controller-manager/controller-manager-85c8bf77b8-2pn4s" Nov 29 07:11:48 crc kubenswrapper[4731]: I1129 07:11:48.946908 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a22903cc-22e2-4593-b852-1528c07dad76-serving-cert\") pod \"route-controller-manager-6ddd799978-rrs4h\" (UID: \"a22903cc-22e2-4593-b852-1528c07dad76\") " pod="openshift-route-controller-manager/route-controller-manager-6ddd799978-rrs4h" Nov 29 07:11:48 crc kubenswrapper[4731]: I1129 07:11:48.946954 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4dd412da-7572-4bc3-b743-0f04a0099868-serving-cert\") pod \"controller-manager-85c8bf77b8-2pn4s\" (UID: \"4dd412da-7572-4bc3-b743-0f04a0099868\") " pod="openshift-controller-manager/controller-manager-85c8bf77b8-2pn4s" Nov 29 07:11:48 crc kubenswrapper[4731]: I1129 07:11:48.946990 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4dd412da-7572-4bc3-b743-0f04a0099868-proxy-ca-bundles\") pod \"controller-manager-85c8bf77b8-2pn4s\" (UID: 
\"4dd412da-7572-4bc3-b743-0f04a0099868\") " pod="openshift-controller-manager/controller-manager-85c8bf77b8-2pn4s" Nov 29 07:11:48 crc kubenswrapper[4731]: I1129 07:11:48.948048 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-4scbk" event={"ID":"914f7ecc-b403-4f7e-9a14-3f56a5a256a9","Type":"ContainerDied","Data":"e757111360666d4c2f70a18dd10910fa38622008b72dc13feacb763888a3f809"} Nov 29 07:11:48 crc kubenswrapper[4731]: I1129 07:11:48.948173 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-4scbk" Nov 29 07:11:48 crc kubenswrapper[4731]: I1129 07:11:48.977648 4731 scope.go:117] "RemoveContainer" containerID="50b5f117a4792262b555f6a404bea2ac5bf8be1c611bd576da57941d9f65ddc2" Nov 29 07:11:48 crc kubenswrapper[4731]: I1129 07:11:48.985845 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-kwjhb"] Nov 29 07:11:48 crc kubenswrapper[4731]: I1129 07:11:48.990512 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-kwjhb"] Nov 29 07:11:48 crc kubenswrapper[4731]: I1129 07:11:48.999080 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-4scbk"] Nov 29 07:11:49 crc kubenswrapper[4731]: I1129 07:11:49.002644 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-4scbk"] Nov 29 07:11:49 crc kubenswrapper[4731]: I1129 07:11:49.048439 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a22903cc-22e2-4593-b852-1528c07dad76-serving-cert\") pod \"route-controller-manager-6ddd799978-rrs4h\" (UID: \"a22903cc-22e2-4593-b852-1528c07dad76\") " 
pod="openshift-route-controller-manager/route-controller-manager-6ddd799978-rrs4h" Nov 29 07:11:49 crc kubenswrapper[4731]: I1129 07:11:49.048507 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4dd412da-7572-4bc3-b743-0f04a0099868-serving-cert\") pod \"controller-manager-85c8bf77b8-2pn4s\" (UID: \"4dd412da-7572-4bc3-b743-0f04a0099868\") " pod="openshift-controller-manager/controller-manager-85c8bf77b8-2pn4s" Nov 29 07:11:49 crc kubenswrapper[4731]: I1129 07:11:49.048541 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4dd412da-7572-4bc3-b743-0f04a0099868-proxy-ca-bundles\") pod \"controller-manager-85c8bf77b8-2pn4s\" (UID: \"4dd412da-7572-4bc3-b743-0f04a0099868\") " pod="openshift-controller-manager/controller-manager-85c8bf77b8-2pn4s" Nov 29 07:11:49 crc kubenswrapper[4731]: I1129 07:11:49.048591 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4dd412da-7572-4bc3-b743-0f04a0099868-client-ca\") pod \"controller-manager-85c8bf77b8-2pn4s\" (UID: \"4dd412da-7572-4bc3-b743-0f04a0099868\") " pod="openshift-controller-manager/controller-manager-85c8bf77b8-2pn4s" Nov 29 07:11:49 crc kubenswrapper[4731]: I1129 07:11:49.048619 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a22903cc-22e2-4593-b852-1528c07dad76-client-ca\") pod \"route-controller-manager-6ddd799978-rrs4h\" (UID: \"a22903cc-22e2-4593-b852-1528c07dad76\") " pod="openshift-route-controller-manager/route-controller-manager-6ddd799978-rrs4h" Nov 29 07:11:49 crc kubenswrapper[4731]: I1129 07:11:49.048684 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/a22903cc-22e2-4593-b852-1528c07dad76-config\") pod \"route-controller-manager-6ddd799978-rrs4h\" (UID: \"a22903cc-22e2-4593-b852-1528c07dad76\") " pod="openshift-route-controller-manager/route-controller-manager-6ddd799978-rrs4h" Nov 29 07:11:49 crc kubenswrapper[4731]: I1129 07:11:49.048711 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4dd412da-7572-4bc3-b743-0f04a0099868-config\") pod \"controller-manager-85c8bf77b8-2pn4s\" (UID: \"4dd412da-7572-4bc3-b743-0f04a0099868\") " pod="openshift-controller-manager/controller-manager-85c8bf77b8-2pn4s" Nov 29 07:11:49 crc kubenswrapper[4731]: I1129 07:11:49.048739 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnpcl\" (UniqueName: \"kubernetes.io/projected/a22903cc-22e2-4593-b852-1528c07dad76-kube-api-access-bnpcl\") pod \"route-controller-manager-6ddd799978-rrs4h\" (UID: \"a22903cc-22e2-4593-b852-1528c07dad76\") " pod="openshift-route-controller-manager/route-controller-manager-6ddd799978-rrs4h" Nov 29 07:11:49 crc kubenswrapper[4731]: I1129 07:11:49.048768 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2th2q\" (UniqueName: \"kubernetes.io/projected/4dd412da-7572-4bc3-b743-0f04a0099868-kube-api-access-2th2q\") pod \"controller-manager-85c8bf77b8-2pn4s\" (UID: \"4dd412da-7572-4bc3-b743-0f04a0099868\") " pod="openshift-controller-manager/controller-manager-85c8bf77b8-2pn4s" Nov 29 07:11:49 crc kubenswrapper[4731]: I1129 07:11:49.050099 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a22903cc-22e2-4593-b852-1528c07dad76-config\") pod \"route-controller-manager-6ddd799978-rrs4h\" (UID: \"a22903cc-22e2-4593-b852-1528c07dad76\") " pod="openshift-route-controller-manager/route-controller-manager-6ddd799978-rrs4h" Nov 29 07:11:49 crc 
kubenswrapper[4731]: I1129 07:11:49.050266 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4dd412da-7572-4bc3-b743-0f04a0099868-client-ca\") pod \"controller-manager-85c8bf77b8-2pn4s\" (UID: \"4dd412da-7572-4bc3-b743-0f04a0099868\") " pod="openshift-controller-manager/controller-manager-85c8bf77b8-2pn4s" Nov 29 07:11:49 crc kubenswrapper[4731]: I1129 07:11:49.050657 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4dd412da-7572-4bc3-b743-0f04a0099868-config\") pod \"controller-manager-85c8bf77b8-2pn4s\" (UID: \"4dd412da-7572-4bc3-b743-0f04a0099868\") " pod="openshift-controller-manager/controller-manager-85c8bf77b8-2pn4s" Nov 29 07:11:49 crc kubenswrapper[4731]: I1129 07:11:49.050832 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a22903cc-22e2-4593-b852-1528c07dad76-client-ca\") pod \"route-controller-manager-6ddd799978-rrs4h\" (UID: \"a22903cc-22e2-4593-b852-1528c07dad76\") " pod="openshift-route-controller-manager/route-controller-manager-6ddd799978-rrs4h" Nov 29 07:11:49 crc kubenswrapper[4731]: I1129 07:11:49.052246 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4dd412da-7572-4bc3-b743-0f04a0099868-proxy-ca-bundles\") pod \"controller-manager-85c8bf77b8-2pn4s\" (UID: \"4dd412da-7572-4bc3-b743-0f04a0099868\") " pod="openshift-controller-manager/controller-manager-85c8bf77b8-2pn4s" Nov 29 07:11:49 crc kubenswrapper[4731]: I1129 07:11:49.054765 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a22903cc-22e2-4593-b852-1528c07dad76-serving-cert\") pod \"route-controller-manager-6ddd799978-rrs4h\" (UID: \"a22903cc-22e2-4593-b852-1528c07dad76\") " 
pod="openshift-route-controller-manager/route-controller-manager-6ddd799978-rrs4h" Nov 29 07:11:49 crc kubenswrapper[4731]: I1129 07:11:49.054943 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4dd412da-7572-4bc3-b743-0f04a0099868-serving-cert\") pod \"controller-manager-85c8bf77b8-2pn4s\" (UID: \"4dd412da-7572-4bc3-b743-0f04a0099868\") " pod="openshift-controller-manager/controller-manager-85c8bf77b8-2pn4s" Nov 29 07:11:49 crc kubenswrapper[4731]: I1129 07:11:49.072514 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2th2q\" (UniqueName: \"kubernetes.io/projected/4dd412da-7572-4bc3-b743-0f04a0099868-kube-api-access-2th2q\") pod \"controller-manager-85c8bf77b8-2pn4s\" (UID: \"4dd412da-7572-4bc3-b743-0f04a0099868\") " pod="openshift-controller-manager/controller-manager-85c8bf77b8-2pn4s" Nov 29 07:11:49 crc kubenswrapper[4731]: I1129 07:11:49.075719 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnpcl\" (UniqueName: \"kubernetes.io/projected/a22903cc-22e2-4593-b852-1528c07dad76-kube-api-access-bnpcl\") pod \"route-controller-manager-6ddd799978-rrs4h\" (UID: \"a22903cc-22e2-4593-b852-1528c07dad76\") " pod="openshift-route-controller-manager/route-controller-manager-6ddd799978-rrs4h" Nov 29 07:11:49 crc kubenswrapper[4731]: I1129 07:11:49.180040 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6ddd799978-rrs4h" Nov 29 07:11:49 crc kubenswrapper[4731]: I1129 07:11:49.186902 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-85c8bf77b8-2pn4s" Nov 29 07:11:49 crc kubenswrapper[4731]: I1129 07:11:49.425451 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6ddd799978-rrs4h"] Nov 29 07:11:49 crc kubenswrapper[4731]: I1129 07:11:49.505067 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-85c8bf77b8-2pn4s"] Nov 29 07:11:49 crc kubenswrapper[4731]: I1129 07:11:49.816655 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="914f7ecc-b403-4f7e-9a14-3f56a5a256a9" path="/var/lib/kubelet/pods/914f7ecc-b403-4f7e-9a14-3f56a5a256a9/volumes" Nov 29 07:11:49 crc kubenswrapper[4731]: I1129 07:11:49.818258 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa040abb-6524-4abd-834f-18b72a623d16" path="/var/lib/kubelet/pods/aa040abb-6524-4abd-834f-18b72a623d16/volumes" Nov 29 07:11:49 crc kubenswrapper[4731]: I1129 07:11:49.955921 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6ddd799978-rrs4h" event={"ID":"a22903cc-22e2-4593-b852-1528c07dad76","Type":"ContainerStarted","Data":"5419138e2fc09e231df83a914c79a390a792fd2cf7449bacab45ab978b00b6a0"} Nov 29 07:11:49 crc kubenswrapper[4731]: I1129 07:11:49.955987 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6ddd799978-rrs4h" event={"ID":"a22903cc-22e2-4593-b852-1528c07dad76","Type":"ContainerStarted","Data":"b4edc31da2d00a86d204846e5262ab90eca8f34e915f1e8b3cbe2c9f1121dec2"} Nov 29 07:11:49 crc kubenswrapper[4731]: I1129 07:11:49.956405 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6ddd799978-rrs4h" Nov 29 07:11:49 crc kubenswrapper[4731]: I1129 07:11:49.959374 4731 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-85c8bf77b8-2pn4s" event={"ID":"4dd412da-7572-4bc3-b743-0f04a0099868","Type":"ContainerStarted","Data":"7fdbf2cb47ce313db4b73a4123562fe21f00e620b6f6c4f6627bba7918512ace"} Nov 29 07:11:49 crc kubenswrapper[4731]: I1129 07:11:49.959969 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-85c8bf77b8-2pn4s" Nov 29 07:11:49 crc kubenswrapper[4731]: I1129 07:11:49.959988 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-85c8bf77b8-2pn4s" event={"ID":"4dd412da-7572-4bc3-b743-0f04a0099868","Type":"ContainerStarted","Data":"0b34deea88913b815e01300970fffcd94976fb4b9905ef30dc325a287fc347e3"} Nov 29 07:11:49 crc kubenswrapper[4731]: I1129 07:11:49.964600 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-85c8bf77b8-2pn4s" Nov 29 07:11:49 crc kubenswrapper[4731]: I1129 07:11:49.979770 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6ddd799978-rrs4h" podStartSLOduration=2.979735662 podStartE2EDuration="2.979735662s" podCreationTimestamp="2025-11-29 07:11:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:11:49.977019425 +0000 UTC m=+348.867380538" watchObservedRunningTime="2025-11-29 07:11:49.979735662 +0000 UTC m=+348.870096765" Nov 29 07:11:49 crc kubenswrapper[4731]: I1129 07:11:49.997383 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-85c8bf77b8-2pn4s" podStartSLOduration=2.99736386 podStartE2EDuration="2.99736386s" podCreationTimestamp="2025-11-29 07:11:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:11:49.996478492 +0000 UTC m=+348.886839615" watchObservedRunningTime="2025-11-29 07:11:49.99736386 +0000 UTC m=+348.887724953" Nov 29 07:11:50 crc kubenswrapper[4731]: I1129 07:11:50.140277 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6ddd799978-rrs4h" Nov 29 07:12:03 crc kubenswrapper[4731]: I1129 07:12:03.003352 4731 patch_prober.go:28] interesting pod/machine-config-daemon-rscr8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:12:03 crc kubenswrapper[4731]: I1129 07:12:03.004088 4731 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:12:18 crc kubenswrapper[4731]: I1129 07:12:18.427817 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-c68ls"] Nov 29 07:12:18 crc kubenswrapper[4731]: I1129 07:12:18.429650 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-c68ls" Nov 29 07:12:18 crc kubenswrapper[4731]: I1129 07:12:18.452733 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-c68ls"] Nov 29 07:12:18 crc kubenswrapper[4731]: I1129 07:12:18.518335 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/14b97d2f-d93e-41b5-9bfd-c1e4698cc2c1-registry-certificates\") pod \"image-registry-66df7c8f76-c68ls\" (UID: \"14b97d2f-d93e-41b5-9bfd-c1e4698cc2c1\") " pod="openshift-image-registry/image-registry-66df7c8f76-c68ls" Nov 29 07:12:18 crc kubenswrapper[4731]: I1129 07:12:18.518401 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/14b97d2f-d93e-41b5-9bfd-c1e4698cc2c1-bound-sa-token\") pod \"image-registry-66df7c8f76-c68ls\" (UID: \"14b97d2f-d93e-41b5-9bfd-c1e4698cc2c1\") " pod="openshift-image-registry/image-registry-66df7c8f76-c68ls" Nov 29 07:12:18 crc kubenswrapper[4731]: I1129 07:12:18.518436 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/14b97d2f-d93e-41b5-9bfd-c1e4698cc2c1-ca-trust-extracted\") pod \"image-registry-66df7c8f76-c68ls\" (UID: \"14b97d2f-d93e-41b5-9bfd-c1e4698cc2c1\") " pod="openshift-image-registry/image-registry-66df7c8f76-c68ls" Nov 29 07:12:18 crc kubenswrapper[4731]: I1129 07:12:18.518490 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/14b97d2f-d93e-41b5-9bfd-c1e4698cc2c1-registry-tls\") pod \"image-registry-66df7c8f76-c68ls\" (UID: \"14b97d2f-d93e-41b5-9bfd-c1e4698cc2c1\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-c68ls" Nov 29 07:12:18 crc kubenswrapper[4731]: I1129 07:12:18.518525 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-c68ls\" (UID: \"14b97d2f-d93e-41b5-9bfd-c1e4698cc2c1\") " pod="openshift-image-registry/image-registry-66df7c8f76-c68ls" Nov 29 07:12:18 crc kubenswrapper[4731]: I1129 07:12:18.518587 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdbfr\" (UniqueName: \"kubernetes.io/projected/14b97d2f-d93e-41b5-9bfd-c1e4698cc2c1-kube-api-access-mdbfr\") pod \"image-registry-66df7c8f76-c68ls\" (UID: \"14b97d2f-d93e-41b5-9bfd-c1e4698cc2c1\") " pod="openshift-image-registry/image-registry-66df7c8f76-c68ls" Nov 29 07:12:18 crc kubenswrapper[4731]: I1129 07:12:18.518611 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/14b97d2f-d93e-41b5-9bfd-c1e4698cc2c1-trusted-ca\") pod \"image-registry-66df7c8f76-c68ls\" (UID: \"14b97d2f-d93e-41b5-9bfd-c1e4698cc2c1\") " pod="openshift-image-registry/image-registry-66df7c8f76-c68ls" Nov 29 07:12:18 crc kubenswrapper[4731]: I1129 07:12:18.518649 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/14b97d2f-d93e-41b5-9bfd-c1e4698cc2c1-installation-pull-secrets\") pod \"image-registry-66df7c8f76-c68ls\" (UID: \"14b97d2f-d93e-41b5-9bfd-c1e4698cc2c1\") " pod="openshift-image-registry/image-registry-66df7c8f76-c68ls" Nov 29 07:12:18 crc kubenswrapper[4731]: I1129 07:12:18.541344 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-c68ls\" (UID: \"14b97d2f-d93e-41b5-9bfd-c1e4698cc2c1\") " pod="openshift-image-registry/image-registry-66df7c8f76-c68ls" Nov 29 07:12:18 crc kubenswrapper[4731]: I1129 07:12:18.620362 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdbfr\" (UniqueName: \"kubernetes.io/projected/14b97d2f-d93e-41b5-9bfd-c1e4698cc2c1-kube-api-access-mdbfr\") pod \"image-registry-66df7c8f76-c68ls\" (UID: \"14b97d2f-d93e-41b5-9bfd-c1e4698cc2c1\") " pod="openshift-image-registry/image-registry-66df7c8f76-c68ls" Nov 29 07:12:18 crc kubenswrapper[4731]: I1129 07:12:18.620414 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/14b97d2f-d93e-41b5-9bfd-c1e4698cc2c1-trusted-ca\") pod \"image-registry-66df7c8f76-c68ls\" (UID: \"14b97d2f-d93e-41b5-9bfd-c1e4698cc2c1\") " pod="openshift-image-registry/image-registry-66df7c8f76-c68ls" Nov 29 07:12:18 crc kubenswrapper[4731]: I1129 07:12:18.620446 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/14b97d2f-d93e-41b5-9bfd-c1e4698cc2c1-installation-pull-secrets\") pod \"image-registry-66df7c8f76-c68ls\" (UID: \"14b97d2f-d93e-41b5-9bfd-c1e4698cc2c1\") " pod="openshift-image-registry/image-registry-66df7c8f76-c68ls" Nov 29 07:12:18 crc kubenswrapper[4731]: I1129 07:12:18.620474 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/14b97d2f-d93e-41b5-9bfd-c1e4698cc2c1-registry-certificates\") pod \"image-registry-66df7c8f76-c68ls\" (UID: \"14b97d2f-d93e-41b5-9bfd-c1e4698cc2c1\") " pod="openshift-image-registry/image-registry-66df7c8f76-c68ls" Nov 29 
07:12:18 crc kubenswrapper[4731]: I1129 07:12:18.620498 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/14b97d2f-d93e-41b5-9bfd-c1e4698cc2c1-bound-sa-token\") pod \"image-registry-66df7c8f76-c68ls\" (UID: \"14b97d2f-d93e-41b5-9bfd-c1e4698cc2c1\") " pod="openshift-image-registry/image-registry-66df7c8f76-c68ls" Nov 29 07:12:18 crc kubenswrapper[4731]: I1129 07:12:18.620528 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/14b97d2f-d93e-41b5-9bfd-c1e4698cc2c1-ca-trust-extracted\") pod \"image-registry-66df7c8f76-c68ls\" (UID: \"14b97d2f-d93e-41b5-9bfd-c1e4698cc2c1\") " pod="openshift-image-registry/image-registry-66df7c8f76-c68ls" Nov 29 07:12:18 crc kubenswrapper[4731]: I1129 07:12:18.620584 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/14b97d2f-d93e-41b5-9bfd-c1e4698cc2c1-registry-tls\") pod \"image-registry-66df7c8f76-c68ls\" (UID: \"14b97d2f-d93e-41b5-9bfd-c1e4698cc2c1\") " pod="openshift-image-registry/image-registry-66df7c8f76-c68ls" Nov 29 07:12:18 crc kubenswrapper[4731]: I1129 07:12:18.624427 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/14b97d2f-d93e-41b5-9bfd-c1e4698cc2c1-registry-certificates\") pod \"image-registry-66df7c8f76-c68ls\" (UID: \"14b97d2f-d93e-41b5-9bfd-c1e4698cc2c1\") " pod="openshift-image-registry/image-registry-66df7c8f76-c68ls" Nov 29 07:12:18 crc kubenswrapper[4731]: I1129 07:12:18.624775 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/14b97d2f-d93e-41b5-9bfd-c1e4698cc2c1-trusted-ca\") pod \"image-registry-66df7c8f76-c68ls\" (UID: \"14b97d2f-d93e-41b5-9bfd-c1e4698cc2c1\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-c68ls" Nov 29 07:12:18 crc kubenswrapper[4731]: I1129 07:12:18.626517 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/14b97d2f-d93e-41b5-9bfd-c1e4698cc2c1-ca-trust-extracted\") pod \"image-registry-66df7c8f76-c68ls\" (UID: \"14b97d2f-d93e-41b5-9bfd-c1e4698cc2c1\") " pod="openshift-image-registry/image-registry-66df7c8f76-c68ls" Nov 29 07:12:18 crc kubenswrapper[4731]: I1129 07:12:18.630645 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/14b97d2f-d93e-41b5-9bfd-c1e4698cc2c1-registry-tls\") pod \"image-registry-66df7c8f76-c68ls\" (UID: \"14b97d2f-d93e-41b5-9bfd-c1e4698cc2c1\") " pod="openshift-image-registry/image-registry-66df7c8f76-c68ls" Nov 29 07:12:18 crc kubenswrapper[4731]: I1129 07:12:18.631736 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/14b97d2f-d93e-41b5-9bfd-c1e4698cc2c1-installation-pull-secrets\") pod \"image-registry-66df7c8f76-c68ls\" (UID: \"14b97d2f-d93e-41b5-9bfd-c1e4698cc2c1\") " pod="openshift-image-registry/image-registry-66df7c8f76-c68ls" Nov 29 07:12:18 crc kubenswrapper[4731]: I1129 07:12:18.652522 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/14b97d2f-d93e-41b5-9bfd-c1e4698cc2c1-bound-sa-token\") pod \"image-registry-66df7c8f76-c68ls\" (UID: \"14b97d2f-d93e-41b5-9bfd-c1e4698cc2c1\") " pod="openshift-image-registry/image-registry-66df7c8f76-c68ls" Nov 29 07:12:18 crc kubenswrapper[4731]: I1129 07:12:18.652881 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdbfr\" (UniqueName: \"kubernetes.io/projected/14b97d2f-d93e-41b5-9bfd-c1e4698cc2c1-kube-api-access-mdbfr\") pod \"image-registry-66df7c8f76-c68ls\" 
(UID: \"14b97d2f-d93e-41b5-9bfd-c1e4698cc2c1\") " pod="openshift-image-registry/image-registry-66df7c8f76-c68ls" Nov 29 07:12:18 crc kubenswrapper[4731]: I1129 07:12:18.748450 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-c68ls" Nov 29 07:12:19 crc kubenswrapper[4731]: I1129 07:12:19.207576 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-c68ls"] Nov 29 07:12:20 crc kubenswrapper[4731]: I1129 07:12:20.151837 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-c68ls" event={"ID":"14b97d2f-d93e-41b5-9bfd-c1e4698cc2c1","Type":"ContainerStarted","Data":"22a96fa2f7a9c4e12ed0940d2fc8142daaefc2a7aaf9d26726728bc1c2562bce"} Nov 29 07:12:20 crc kubenswrapper[4731]: I1129 07:12:20.152803 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-c68ls" Nov 29 07:12:20 crc kubenswrapper[4731]: I1129 07:12:20.152821 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-c68ls" event={"ID":"14b97d2f-d93e-41b5-9bfd-c1e4698cc2c1","Type":"ContainerStarted","Data":"79e4cdcddaf4c3b890036f39caa91e140b3d1c137c3c299e044897cdab099700"} Nov 29 07:12:20 crc kubenswrapper[4731]: I1129 07:12:20.179052 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-c68ls" podStartSLOduration=2.17903387 podStartE2EDuration="2.17903387s" podCreationTimestamp="2025-11-29 07:12:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:12:20.175076883 +0000 UTC m=+379.065437996" watchObservedRunningTime="2025-11-29 07:12:20.17903387 +0000 UTC m=+379.069394973" Nov 29 07:12:20 crc kubenswrapper[4731]: I1129 
07:12:20.944844 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kp4gj"] Nov 29 07:12:20 crc kubenswrapper[4731]: I1129 07:12:20.954084 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-n9x6g"] Nov 29 07:12:20 crc kubenswrapper[4731]: I1129 07:12:20.954386 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-n9x6g" podUID="8519b0da-9e0e-4c34-98b0-cbcb4030af39" containerName="registry-server" containerID="cri-o://18254a0c7f20919524c3f4d26fbc4870e9904338f84aa29836fe16fad3d80c18" gracePeriod=30 Nov 29 07:12:20 crc kubenswrapper[4731]: I1129 07:12:20.965823 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2qgzh"] Nov 29 07:12:20 crc kubenswrapper[4731]: I1129 07:12:20.966162 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-2qgzh" podUID="8f435c3d-3db2-44dc-8a50-ea8f9475daa0" containerName="marketplace-operator" containerID="cri-o://ea866a9d60d9083013cbba65a5bf8f68f30a7dd24030c955c7c2b18497870e69" gracePeriod=30 Nov 29 07:12:20 crc kubenswrapper[4731]: I1129 07:12:20.979314 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kjmcw"] Nov 29 07:12:20 crc kubenswrapper[4731]: I1129 07:12:20.979653 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-kjmcw" podUID="041c9fb8-1657-4070-8649-0297bbba2df1" containerName="registry-server" containerID="cri-o://dd7178cd50e720a23c3f11679435194f806c3194edb1b6795eda486a79cdaf16" gracePeriod=30 Nov 29 07:12:20 crc kubenswrapper[4731]: I1129 07:12:20.991786 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hv85m"] Nov 29 07:12:20 crc kubenswrapper[4731]: 
I1129 07:12:20.992166 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hv85m" podUID="d246bdda-5a16-4924-a12a-b29095474226" containerName="registry-server" containerID="cri-o://c516a9e4500175b36ea9024d71e7ef1390844dfa3caaf1afac55231644de616f" gracePeriod=30 Nov 29 07:12:21 crc kubenswrapper[4731]: I1129 07:12:21.002698 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-82hnv"] Nov 29 07:12:21 crc kubenswrapper[4731]: I1129 07:12:21.003882 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-82hnv" Nov 29 07:12:21 crc kubenswrapper[4731]: I1129 07:12:21.008671 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-82hnv"] Nov 29 07:12:21 crc kubenswrapper[4731]: I1129 07:12:21.061315 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4325e7fb-0543-4969-8ebb-c2dcf11cc24b-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-82hnv\" (UID: \"4325e7fb-0543-4969-8ebb-c2dcf11cc24b\") " pod="openshift-marketplace/marketplace-operator-79b997595-82hnv" Nov 29 07:12:21 crc kubenswrapper[4731]: I1129 07:12:21.061673 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/4325e7fb-0543-4969-8ebb-c2dcf11cc24b-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-82hnv\" (UID: \"4325e7fb-0543-4969-8ebb-c2dcf11cc24b\") " pod="openshift-marketplace/marketplace-operator-79b997595-82hnv" Nov 29 07:12:21 crc kubenswrapper[4731]: I1129 07:12:21.061834 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wsk9\" 
(UniqueName: \"kubernetes.io/projected/4325e7fb-0543-4969-8ebb-c2dcf11cc24b-kube-api-access-8wsk9\") pod \"marketplace-operator-79b997595-82hnv\" (UID: \"4325e7fb-0543-4969-8ebb-c2dcf11cc24b\") " pod="openshift-marketplace/marketplace-operator-79b997595-82hnv" Nov 29 07:12:21 crc kubenswrapper[4731]: I1129 07:12:21.162849 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wsk9\" (UniqueName: \"kubernetes.io/projected/4325e7fb-0543-4969-8ebb-c2dcf11cc24b-kube-api-access-8wsk9\") pod \"marketplace-operator-79b997595-82hnv\" (UID: \"4325e7fb-0543-4969-8ebb-c2dcf11cc24b\") " pod="openshift-marketplace/marketplace-operator-79b997595-82hnv" Nov 29 07:12:21 crc kubenswrapper[4731]: I1129 07:12:21.162932 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4325e7fb-0543-4969-8ebb-c2dcf11cc24b-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-82hnv\" (UID: \"4325e7fb-0543-4969-8ebb-c2dcf11cc24b\") " pod="openshift-marketplace/marketplace-operator-79b997595-82hnv" Nov 29 07:12:21 crc kubenswrapper[4731]: I1129 07:12:21.162981 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/4325e7fb-0543-4969-8ebb-c2dcf11cc24b-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-82hnv\" (UID: \"4325e7fb-0543-4969-8ebb-c2dcf11cc24b\") " pod="openshift-marketplace/marketplace-operator-79b997595-82hnv" Nov 29 07:12:21 crc kubenswrapper[4731]: I1129 07:12:21.164947 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4325e7fb-0543-4969-8ebb-c2dcf11cc24b-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-82hnv\" (UID: \"4325e7fb-0543-4969-8ebb-c2dcf11cc24b\") " pod="openshift-marketplace/marketplace-operator-79b997595-82hnv" Nov 29 
07:12:21 crc kubenswrapper[4731]: I1129 07:12:21.169474 4731 generic.go:334] "Generic (PLEG): container finished" podID="8f435c3d-3db2-44dc-8a50-ea8f9475daa0" containerID="ea866a9d60d9083013cbba65a5bf8f68f30a7dd24030c955c7c2b18497870e69" exitCode=0 Nov 29 07:12:21 crc kubenswrapper[4731]: I1129 07:12:21.169660 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-2qgzh" event={"ID":"8f435c3d-3db2-44dc-8a50-ea8f9475daa0","Type":"ContainerDied","Data":"ea866a9d60d9083013cbba65a5bf8f68f30a7dd24030c955c7c2b18497870e69"} Nov 29 07:12:21 crc kubenswrapper[4731]: I1129 07:12:21.169721 4731 scope.go:117] "RemoveContainer" containerID="554048ef46b8c551becfb76f96eecd2c7c6785e00a1739dac1dcc22fc89dd27d" Nov 29 07:12:21 crc kubenswrapper[4731]: I1129 07:12:21.177210 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/4325e7fb-0543-4969-8ebb-c2dcf11cc24b-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-82hnv\" (UID: \"4325e7fb-0543-4969-8ebb-c2dcf11cc24b\") " pod="openshift-marketplace/marketplace-operator-79b997595-82hnv" Nov 29 07:12:21 crc kubenswrapper[4731]: I1129 07:12:21.188396 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wsk9\" (UniqueName: \"kubernetes.io/projected/4325e7fb-0543-4969-8ebb-c2dcf11cc24b-kube-api-access-8wsk9\") pod \"marketplace-operator-79b997595-82hnv\" (UID: \"4325e7fb-0543-4969-8ebb-c2dcf11cc24b\") " pod="openshift-marketplace/marketplace-operator-79b997595-82hnv" Nov 29 07:12:21 crc kubenswrapper[4731]: I1129 07:12:21.205394 4731 generic.go:334] "Generic (PLEG): container finished" podID="8519b0da-9e0e-4c34-98b0-cbcb4030af39" containerID="18254a0c7f20919524c3f4d26fbc4870e9904338f84aa29836fe16fad3d80c18" exitCode=0 Nov 29 07:12:21 crc kubenswrapper[4731]: I1129 07:12:21.205491 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-n9x6g" event={"ID":"8519b0da-9e0e-4c34-98b0-cbcb4030af39","Type":"ContainerDied","Data":"18254a0c7f20919524c3f4d26fbc4870e9904338f84aa29836fe16fad3d80c18"} Nov 29 07:12:21 crc kubenswrapper[4731]: I1129 07:12:21.208495 4731 generic.go:334] "Generic (PLEG): container finished" podID="041c9fb8-1657-4070-8649-0297bbba2df1" containerID="dd7178cd50e720a23c3f11679435194f806c3194edb1b6795eda486a79cdaf16" exitCode=0 Nov 29 07:12:21 crc kubenswrapper[4731]: I1129 07:12:21.208582 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kjmcw" event={"ID":"041c9fb8-1657-4070-8649-0297bbba2df1","Type":"ContainerDied","Data":"dd7178cd50e720a23c3f11679435194f806c3194edb1b6795eda486a79cdaf16"} Nov 29 07:12:21 crc kubenswrapper[4731]: I1129 07:12:21.217174 4731 generic.go:334] "Generic (PLEG): container finished" podID="d246bdda-5a16-4924-a12a-b29095474226" containerID="c516a9e4500175b36ea9024d71e7ef1390844dfa3caaf1afac55231644de616f" exitCode=0 Nov 29 07:12:21 crc kubenswrapper[4731]: I1129 07:12:21.217871 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hv85m" event={"ID":"d246bdda-5a16-4924-a12a-b29095474226","Type":"ContainerDied","Data":"c516a9e4500175b36ea9024d71e7ef1390844dfa3caaf1afac55231644de616f"} Nov 29 07:12:21 crc kubenswrapper[4731]: I1129 07:12:21.218290 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-kp4gj" podUID="90d637c3-be0e-49b6-ac5a-5cb721948345" containerName="registry-server" containerID="cri-o://d3626ac2f78826d560a0e7bfd56b4e473ff34826ae871cb1c251ca4790a3949a" gracePeriod=30 Nov 29 07:12:21 crc kubenswrapper[4731]: I1129 07:12:21.490644 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-82hnv" Nov 29 07:12:21 crc kubenswrapper[4731]: I1129 07:12:21.602105 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-2qgzh" Nov 29 07:12:21 crc kubenswrapper[4731]: I1129 07:12:21.675345 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8f435c3d-3db2-44dc-8a50-ea8f9475daa0-marketplace-operator-metrics\") pod \"8f435c3d-3db2-44dc-8a50-ea8f9475daa0\" (UID: \"8f435c3d-3db2-44dc-8a50-ea8f9475daa0\") " Nov 29 07:12:21 crc kubenswrapper[4731]: I1129 07:12:21.675466 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f435c3d-3db2-44dc-8a50-ea8f9475daa0-marketplace-trusted-ca\") pod \"8f435c3d-3db2-44dc-8a50-ea8f9475daa0\" (UID: \"8f435c3d-3db2-44dc-8a50-ea8f9475daa0\") " Nov 29 07:12:21 crc kubenswrapper[4731]: I1129 07:12:21.675510 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gk2jc\" (UniqueName: \"kubernetes.io/projected/8f435c3d-3db2-44dc-8a50-ea8f9475daa0-kube-api-access-gk2jc\") pod \"8f435c3d-3db2-44dc-8a50-ea8f9475daa0\" (UID: \"8f435c3d-3db2-44dc-8a50-ea8f9475daa0\") " Nov 29 07:12:21 crc kubenswrapper[4731]: I1129 07:12:21.676610 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f435c3d-3db2-44dc-8a50-ea8f9475daa0-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "8f435c3d-3db2-44dc-8a50-ea8f9475daa0" (UID: "8f435c3d-3db2-44dc-8a50-ea8f9475daa0"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:12:21 crc kubenswrapper[4731]: I1129 07:12:21.680691 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f435c3d-3db2-44dc-8a50-ea8f9475daa0-kube-api-access-gk2jc" (OuterVolumeSpecName: "kube-api-access-gk2jc") pod "8f435c3d-3db2-44dc-8a50-ea8f9475daa0" (UID: "8f435c3d-3db2-44dc-8a50-ea8f9475daa0"). InnerVolumeSpecName "kube-api-access-gk2jc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:12:21 crc kubenswrapper[4731]: I1129 07:12:21.682080 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f435c3d-3db2-44dc-8a50-ea8f9475daa0-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "8f435c3d-3db2-44dc-8a50-ea8f9475daa0" (UID: "8f435c3d-3db2-44dc-8a50-ea8f9475daa0"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:12:21 crc kubenswrapper[4731]: I1129 07:12:21.701440 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n9x6g" Nov 29 07:12:21 crc kubenswrapper[4731]: I1129 07:12:21.753412 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kjmcw" Nov 29 07:12:21 crc kubenswrapper[4731]: I1129 07:12:21.772077 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hv85m" Nov 29 07:12:21 crc kubenswrapper[4731]: I1129 07:12:21.777053 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8519b0da-9e0e-4c34-98b0-cbcb4030af39-utilities\") pod \"8519b0da-9e0e-4c34-98b0-cbcb4030af39\" (UID: \"8519b0da-9e0e-4c34-98b0-cbcb4030af39\") " Nov 29 07:12:21 crc kubenswrapper[4731]: I1129 07:12:21.777104 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s6lhf\" (UniqueName: \"kubernetes.io/projected/8519b0da-9e0e-4c34-98b0-cbcb4030af39-kube-api-access-s6lhf\") pod \"8519b0da-9e0e-4c34-98b0-cbcb4030af39\" (UID: \"8519b0da-9e0e-4c34-98b0-cbcb4030af39\") " Nov 29 07:12:21 crc kubenswrapper[4731]: I1129 07:12:21.777126 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8519b0da-9e0e-4c34-98b0-cbcb4030af39-catalog-content\") pod \"8519b0da-9e0e-4c34-98b0-cbcb4030af39\" (UID: \"8519b0da-9e0e-4c34-98b0-cbcb4030af39\") " Nov 29 07:12:21 crc kubenswrapper[4731]: I1129 07:12:21.777410 4731 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8f435c3d-3db2-44dc-8a50-ea8f9475daa0-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Nov 29 07:12:21 crc kubenswrapper[4731]: I1129 07:12:21.777428 4731 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f435c3d-3db2-44dc-8a50-ea8f9475daa0-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 29 07:12:21 crc kubenswrapper[4731]: I1129 07:12:21.777440 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gk2jc\" (UniqueName: \"kubernetes.io/projected/8f435c3d-3db2-44dc-8a50-ea8f9475daa0-kube-api-access-gk2jc\") on node \"crc\" DevicePath 
\"\"" Nov 29 07:12:21 crc kubenswrapper[4731]: I1129 07:12:21.779068 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8519b0da-9e0e-4c34-98b0-cbcb4030af39-utilities" (OuterVolumeSpecName: "utilities") pod "8519b0da-9e0e-4c34-98b0-cbcb4030af39" (UID: "8519b0da-9e0e-4c34-98b0-cbcb4030af39"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:12:21 crc kubenswrapper[4731]: I1129 07:12:21.786529 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8519b0da-9e0e-4c34-98b0-cbcb4030af39-kube-api-access-s6lhf" (OuterVolumeSpecName: "kube-api-access-s6lhf") pod "8519b0da-9e0e-4c34-98b0-cbcb4030af39" (UID: "8519b0da-9e0e-4c34-98b0-cbcb4030af39"). InnerVolumeSpecName "kube-api-access-s6lhf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:12:21 crc kubenswrapper[4731]: I1129 07:12:21.863224 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-85c8bf77b8-2pn4s"] Nov 29 07:12:21 crc kubenswrapper[4731]: I1129 07:12:21.863511 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-85c8bf77b8-2pn4s" podUID="4dd412da-7572-4bc3-b743-0f04a0099868" containerName="controller-manager" containerID="cri-o://7fdbf2cb47ce313db4b73a4123562fe21f00e620b6f6c4f6627bba7918512ace" gracePeriod=30 Nov 29 07:12:21 crc kubenswrapper[4731]: I1129 07:12:21.899825 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d246bdda-5a16-4924-a12a-b29095474226-utilities\") pod \"d246bdda-5a16-4924-a12a-b29095474226\" (UID: \"d246bdda-5a16-4924-a12a-b29095474226\") " Nov 29 07:12:21 crc kubenswrapper[4731]: I1129 07:12:21.899945 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h9ptn\" 
(UniqueName: \"kubernetes.io/projected/041c9fb8-1657-4070-8649-0297bbba2df1-kube-api-access-h9ptn\") pod \"041c9fb8-1657-4070-8649-0297bbba2df1\" (UID: \"041c9fb8-1657-4070-8649-0297bbba2df1\") " Nov 29 07:12:21 crc kubenswrapper[4731]: I1129 07:12:21.900082 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d246bdda-5a16-4924-a12a-b29095474226-catalog-content\") pod \"d246bdda-5a16-4924-a12a-b29095474226\" (UID: \"d246bdda-5a16-4924-a12a-b29095474226\") " Nov 29 07:12:21 crc kubenswrapper[4731]: I1129 07:12:21.900142 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/041c9fb8-1657-4070-8649-0297bbba2df1-catalog-content\") pod \"041c9fb8-1657-4070-8649-0297bbba2df1\" (UID: \"041c9fb8-1657-4070-8649-0297bbba2df1\") " Nov 29 07:12:21 crc kubenswrapper[4731]: I1129 07:12:21.900201 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55d2p\" (UniqueName: \"kubernetes.io/projected/d246bdda-5a16-4924-a12a-b29095474226-kube-api-access-55d2p\") pod \"d246bdda-5a16-4924-a12a-b29095474226\" (UID: \"d246bdda-5a16-4924-a12a-b29095474226\") " Nov 29 07:12:21 crc kubenswrapper[4731]: I1129 07:12:21.900244 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/041c9fb8-1657-4070-8649-0297bbba2df1-utilities\") pod \"041c9fb8-1657-4070-8649-0297bbba2df1\" (UID: \"041c9fb8-1657-4070-8649-0297bbba2df1\") " Nov 29 07:12:21 crc kubenswrapper[4731]: I1129 07:12:21.901058 4731 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8519b0da-9e0e-4c34-98b0-cbcb4030af39-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 07:12:21 crc kubenswrapper[4731]: I1129 07:12:21.901084 4731 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-s6lhf\" (UniqueName: \"kubernetes.io/projected/8519b0da-9e0e-4c34-98b0-cbcb4030af39-kube-api-access-s6lhf\") on node \"crc\" DevicePath \"\"" Nov 29 07:12:21 crc kubenswrapper[4731]: I1129 07:12:21.909906 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d246bdda-5a16-4924-a12a-b29095474226-kube-api-access-55d2p" (OuterVolumeSpecName: "kube-api-access-55d2p") pod "d246bdda-5a16-4924-a12a-b29095474226" (UID: "d246bdda-5a16-4924-a12a-b29095474226"). InnerVolumeSpecName "kube-api-access-55d2p". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:12:21 crc kubenswrapper[4731]: I1129 07:12:21.911077 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kp4gj" Nov 29 07:12:21 crc kubenswrapper[4731]: I1129 07:12:21.914364 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d246bdda-5a16-4924-a12a-b29095474226-utilities" (OuterVolumeSpecName: "utilities") pod "d246bdda-5a16-4924-a12a-b29095474226" (UID: "d246bdda-5a16-4924-a12a-b29095474226"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:12:21 crc kubenswrapper[4731]: I1129 07:12:21.914430 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/041c9fb8-1657-4070-8649-0297bbba2df1-utilities" (OuterVolumeSpecName: "utilities") pod "041c9fb8-1657-4070-8649-0297bbba2df1" (UID: "041c9fb8-1657-4070-8649-0297bbba2df1"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:12:21 crc kubenswrapper[4731]: I1129 07:12:21.926605 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/041c9fb8-1657-4070-8649-0297bbba2df1-kube-api-access-h9ptn" (OuterVolumeSpecName: "kube-api-access-h9ptn") pod "041c9fb8-1657-4070-8649-0297bbba2df1" (UID: "041c9fb8-1657-4070-8649-0297bbba2df1"). InnerVolumeSpecName "kube-api-access-h9ptn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:12:21 crc kubenswrapper[4731]: I1129 07:12:21.999044 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6ddd799978-rrs4h"] Nov 29 07:12:21 crc kubenswrapper[4731]: I1129 07:12:21.999322 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6ddd799978-rrs4h" podUID="a22903cc-22e2-4593-b852-1528c07dad76" containerName="route-controller-manager" containerID="cri-o://5419138e2fc09e231df83a914c79a390a792fd2cf7449bacab45ab978b00b6a0" gracePeriod=30 Nov 29 07:12:22 crc kubenswrapper[4731]: I1129 07:12:22.002068 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rffjk\" (UniqueName: \"kubernetes.io/projected/90d637c3-be0e-49b6-ac5a-5cb721948345-kube-api-access-rffjk\") pod \"90d637c3-be0e-49b6-ac5a-5cb721948345\" (UID: \"90d637c3-be0e-49b6-ac5a-5cb721948345\") " Nov 29 07:12:22 crc kubenswrapper[4731]: I1129 07:12:22.002268 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90d637c3-be0e-49b6-ac5a-5cb721948345-utilities\") pod \"90d637c3-be0e-49b6-ac5a-5cb721948345\" (UID: \"90d637c3-be0e-49b6-ac5a-5cb721948345\") " Nov 29 07:12:22 crc kubenswrapper[4731]: I1129 07:12:22.002396 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90d637c3-be0e-49b6-ac5a-5cb721948345-catalog-content\") pod \"90d637c3-be0e-49b6-ac5a-5cb721948345\" (UID: \"90d637c3-be0e-49b6-ac5a-5cb721948345\") " Nov 29 07:12:22 crc kubenswrapper[4731]: I1129 07:12:22.002724 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-55d2p\" (UniqueName: \"kubernetes.io/projected/d246bdda-5a16-4924-a12a-b29095474226-kube-api-access-55d2p\") on node \"crc\" DevicePath \"\"" Nov 29 07:12:22 crc kubenswrapper[4731]: I1129 07:12:22.002746 4731 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/041c9fb8-1657-4070-8649-0297bbba2df1-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 07:12:22 crc kubenswrapper[4731]: I1129 07:12:22.002759 4731 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d246bdda-5a16-4924-a12a-b29095474226-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 07:12:22 crc kubenswrapper[4731]: I1129 07:12:22.002770 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h9ptn\" (UniqueName: \"kubernetes.io/projected/041c9fb8-1657-4070-8649-0297bbba2df1-kube-api-access-h9ptn\") on node \"crc\" DevicePath \"\"" Nov 29 07:12:22 crc kubenswrapper[4731]: I1129 07:12:22.030661 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/041c9fb8-1657-4070-8649-0297bbba2df1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "041c9fb8-1657-4070-8649-0297bbba2df1" (UID: "041c9fb8-1657-4070-8649-0297bbba2df1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:12:22 crc kubenswrapper[4731]: I1129 07:12:22.031804 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90d637c3-be0e-49b6-ac5a-5cb721948345-utilities" (OuterVolumeSpecName: "utilities") pod "90d637c3-be0e-49b6-ac5a-5cb721948345" (UID: "90d637c3-be0e-49b6-ac5a-5cb721948345"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:12:22 crc kubenswrapper[4731]: I1129 07:12:22.040392 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90d637c3-be0e-49b6-ac5a-5cb721948345-kube-api-access-rffjk" (OuterVolumeSpecName: "kube-api-access-rffjk") pod "90d637c3-be0e-49b6-ac5a-5cb721948345" (UID: "90d637c3-be0e-49b6-ac5a-5cb721948345"). InnerVolumeSpecName "kube-api-access-rffjk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:12:22 crc kubenswrapper[4731]: I1129 07:12:22.101678 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-82hnv"] Nov 29 07:12:22 crc kubenswrapper[4731]: I1129 07:12:22.104785 4731 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90d637c3-be0e-49b6-ac5a-5cb721948345-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 07:12:22 crc kubenswrapper[4731]: I1129 07:12:22.104823 4731 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/041c9fb8-1657-4070-8649-0297bbba2df1-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 07:12:22 crc kubenswrapper[4731]: I1129 07:12:22.104838 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rffjk\" (UniqueName: \"kubernetes.io/projected/90d637c3-be0e-49b6-ac5a-5cb721948345-kube-api-access-rffjk\") on node \"crc\" DevicePath \"\"" Nov 29 07:12:22 crc kubenswrapper[4731]: I1129 
07:12:22.118818 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8519b0da-9e0e-4c34-98b0-cbcb4030af39-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8519b0da-9e0e-4c34-98b0-cbcb4030af39" (UID: "8519b0da-9e0e-4c34-98b0-cbcb4030af39"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:12:22 crc kubenswrapper[4731]: I1129 07:12:22.167598 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d246bdda-5a16-4924-a12a-b29095474226-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d246bdda-5a16-4924-a12a-b29095474226" (UID: "d246bdda-5a16-4924-a12a-b29095474226"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:12:22 crc kubenswrapper[4731]: I1129 07:12:22.192201 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90d637c3-be0e-49b6-ac5a-5cb721948345-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "90d637c3-be0e-49b6-ac5a-5cb721948345" (UID: "90d637c3-be0e-49b6-ac5a-5cb721948345"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:12:22 crc kubenswrapper[4731]: I1129 07:12:22.206169 4731 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d246bdda-5a16-4924-a12a-b29095474226-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 07:12:22 crc kubenswrapper[4731]: I1129 07:12:22.206211 4731 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8519b0da-9e0e-4c34-98b0-cbcb4030af39-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 07:12:22 crc kubenswrapper[4731]: I1129 07:12:22.206222 4731 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90d637c3-be0e-49b6-ac5a-5cb721948345-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 07:12:22 crc kubenswrapper[4731]: I1129 07:12:22.235541 4731 generic.go:334] "Generic (PLEG): container finished" podID="a22903cc-22e2-4593-b852-1528c07dad76" containerID="5419138e2fc09e231df83a914c79a390a792fd2cf7449bacab45ab978b00b6a0" exitCode=0 Nov 29 07:12:22 crc kubenswrapper[4731]: I1129 07:12:22.235654 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6ddd799978-rrs4h" event={"ID":"a22903cc-22e2-4593-b852-1528c07dad76","Type":"ContainerDied","Data":"5419138e2fc09e231df83a914c79a390a792fd2cf7449bacab45ab978b00b6a0"} Nov 29 07:12:22 crc kubenswrapper[4731]: I1129 07:12:22.240850 4731 generic.go:334] "Generic (PLEG): container finished" podID="90d637c3-be0e-49b6-ac5a-5cb721948345" containerID="d3626ac2f78826d560a0e7bfd56b4e473ff34826ae871cb1c251ca4790a3949a" exitCode=0 Nov 29 07:12:22 crc kubenswrapper[4731]: I1129 07:12:22.240950 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kp4gj" 
event={"ID":"90d637c3-be0e-49b6-ac5a-5cb721948345","Type":"ContainerDied","Data":"d3626ac2f78826d560a0e7bfd56b4e473ff34826ae871cb1c251ca4790a3949a"} Nov 29 07:12:22 crc kubenswrapper[4731]: I1129 07:12:22.240987 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kp4gj" event={"ID":"90d637c3-be0e-49b6-ac5a-5cb721948345","Type":"ContainerDied","Data":"ce40cec9604b5317a9ea9d1e11fafaf53b459383ef383d1d7f25180572c298bc"} Nov 29 07:12:22 crc kubenswrapper[4731]: I1129 07:12:22.241005 4731 scope.go:117] "RemoveContainer" containerID="d3626ac2f78826d560a0e7bfd56b4e473ff34826ae871cb1c251ca4790a3949a" Nov 29 07:12:22 crc kubenswrapper[4731]: I1129 07:12:22.241001 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kp4gj" Nov 29 07:12:22 crc kubenswrapper[4731]: I1129 07:12:22.242945 4731 generic.go:334] "Generic (PLEG): container finished" podID="4dd412da-7572-4bc3-b743-0f04a0099868" containerID="7fdbf2cb47ce313db4b73a4123562fe21f00e620b6f6c4f6627bba7918512ace" exitCode=0 Nov 29 07:12:22 crc kubenswrapper[4731]: I1129 07:12:22.243009 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-85c8bf77b8-2pn4s" event={"ID":"4dd412da-7572-4bc3-b743-0f04a0099868","Type":"ContainerDied","Data":"7fdbf2cb47ce313db4b73a4123562fe21f00e620b6f6c4f6627bba7918512ace"} Nov 29 07:12:22 crc kubenswrapper[4731]: I1129 07:12:22.251102 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kjmcw" Nov 29 07:12:22 crc kubenswrapper[4731]: I1129 07:12:22.251119 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kjmcw" event={"ID":"041c9fb8-1657-4070-8649-0297bbba2df1","Type":"ContainerDied","Data":"d0ea6fe263abd203e8b0282069a059653877ba418ced82adb4d226504815da2a"} Nov 29 07:12:22 crc kubenswrapper[4731]: I1129 07:12:22.253836 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hv85m" event={"ID":"d246bdda-5a16-4924-a12a-b29095474226","Type":"ContainerDied","Data":"37aa836b64340e7d882b5ae2904650a27c2f8ce45ebd2517781e6a4d84df2ee8"} Nov 29 07:12:22 crc kubenswrapper[4731]: I1129 07:12:22.254002 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hv85m" Nov 29 07:12:22 crc kubenswrapper[4731]: I1129 07:12:22.258839 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-2qgzh" Nov 29 07:12:22 crc kubenswrapper[4731]: I1129 07:12:22.259720 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-2qgzh" event={"ID":"8f435c3d-3db2-44dc-8a50-ea8f9475daa0","Type":"ContainerDied","Data":"cf6a41e232ff2e8802393b7aa4d13aa723658fe49f1b01ed0c49aad89c7a342c"} Nov 29 07:12:22 crc kubenswrapper[4731]: I1129 07:12:22.262875 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-n9x6g" Nov 29 07:12:22 crc kubenswrapper[4731]: I1129 07:12:22.262857 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n9x6g" event={"ID":"8519b0da-9e0e-4c34-98b0-cbcb4030af39","Type":"ContainerDied","Data":"17cd2cd217c1b418cbbe98382ee84088059d93f7f64fa57cb9f47ec0a759eb67"} Nov 29 07:12:22 crc kubenswrapper[4731]: I1129 07:12:22.267451 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-82hnv" event={"ID":"4325e7fb-0543-4969-8ebb-c2dcf11cc24b","Type":"ContainerStarted","Data":"52c60567dba7e4f249aa4f507fddc8a9f8db466a093129d5310ec3bfb41b216e"} Nov 29 07:12:22 crc kubenswrapper[4731]: I1129 07:12:22.287223 4731 scope.go:117] "RemoveContainer" containerID="d832c2affe4ebafa5db4d505995dd8c698d797bbe1324941f34210dfc387fa9a" Nov 29 07:12:22 crc kubenswrapper[4731]: I1129 07:12:22.297791 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2qgzh"] Nov 29 07:12:22 crc kubenswrapper[4731]: I1129 07:12:22.302675 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2qgzh"] Nov 29 07:12:22 crc kubenswrapper[4731]: I1129 07:12:22.324896 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kjmcw"] Nov 29 07:12:22 crc kubenswrapper[4731]: I1129 07:12:22.330505 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-kjmcw"] Nov 29 07:12:22 crc kubenswrapper[4731]: I1129 07:12:22.341976 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hv85m"] Nov 29 07:12:22 crc kubenswrapper[4731]: I1129 07:12:22.345261 4731 scope.go:117] "RemoveContainer" containerID="394cf42f7fa46b2bc8ba4b4ece68dceae5a16008636f561989345d3f7883bafa" Nov 29 07:12:22 crc 
kubenswrapper[4731]: I1129 07:12:22.346657 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hv85m"] Nov 29 07:12:22 crc kubenswrapper[4731]: I1129 07:12:22.380125 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kp4gj"] Nov 29 07:12:22 crc kubenswrapper[4731]: I1129 07:12:22.385609 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-kp4gj"] Nov 29 07:12:22 crc kubenswrapper[4731]: I1129 07:12:22.396410 4731 scope.go:117] "RemoveContainer" containerID="d3626ac2f78826d560a0e7bfd56b4e473ff34826ae871cb1c251ca4790a3949a" Nov 29 07:12:22 crc kubenswrapper[4731]: I1129 07:12:22.398260 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-n9x6g"] Nov 29 07:12:22 crc kubenswrapper[4731]: E1129 07:12:22.400498 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3626ac2f78826d560a0e7bfd56b4e473ff34826ae871cb1c251ca4790a3949a\": container with ID starting with d3626ac2f78826d560a0e7bfd56b4e473ff34826ae871cb1c251ca4790a3949a not found: ID does not exist" containerID="d3626ac2f78826d560a0e7bfd56b4e473ff34826ae871cb1c251ca4790a3949a" Nov 29 07:12:22 crc kubenswrapper[4731]: I1129 07:12:22.400548 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3626ac2f78826d560a0e7bfd56b4e473ff34826ae871cb1c251ca4790a3949a"} err="failed to get container status \"d3626ac2f78826d560a0e7bfd56b4e473ff34826ae871cb1c251ca4790a3949a\": rpc error: code = NotFound desc = could not find container \"d3626ac2f78826d560a0e7bfd56b4e473ff34826ae871cb1c251ca4790a3949a\": container with ID starting with d3626ac2f78826d560a0e7bfd56b4e473ff34826ae871cb1c251ca4790a3949a not found: ID does not exist" Nov 29 07:12:22 crc kubenswrapper[4731]: I1129 07:12:22.400593 4731 scope.go:117] 
"RemoveContainer" containerID="d832c2affe4ebafa5db4d505995dd8c698d797bbe1324941f34210dfc387fa9a" Nov 29 07:12:22 crc kubenswrapper[4731]: I1129 07:12:22.402798 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-n9x6g"] Nov 29 07:12:22 crc kubenswrapper[4731]: E1129 07:12:22.402929 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d832c2affe4ebafa5db4d505995dd8c698d797bbe1324941f34210dfc387fa9a\": container with ID starting with d832c2affe4ebafa5db4d505995dd8c698d797bbe1324941f34210dfc387fa9a not found: ID does not exist" containerID="d832c2affe4ebafa5db4d505995dd8c698d797bbe1324941f34210dfc387fa9a" Nov 29 07:12:22 crc kubenswrapper[4731]: I1129 07:12:22.402961 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d832c2affe4ebafa5db4d505995dd8c698d797bbe1324941f34210dfc387fa9a"} err="failed to get container status \"d832c2affe4ebafa5db4d505995dd8c698d797bbe1324941f34210dfc387fa9a\": rpc error: code = NotFound desc = could not find container \"d832c2affe4ebafa5db4d505995dd8c698d797bbe1324941f34210dfc387fa9a\": container with ID starting with d832c2affe4ebafa5db4d505995dd8c698d797bbe1324941f34210dfc387fa9a not found: ID does not exist" Nov 29 07:12:22 crc kubenswrapper[4731]: I1129 07:12:22.402980 4731 scope.go:117] "RemoveContainer" containerID="394cf42f7fa46b2bc8ba4b4ece68dceae5a16008636f561989345d3f7883bafa" Nov 29 07:12:22 crc kubenswrapper[4731]: E1129 07:12:22.403472 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"394cf42f7fa46b2bc8ba4b4ece68dceae5a16008636f561989345d3f7883bafa\": container with ID starting with 394cf42f7fa46b2bc8ba4b4ece68dceae5a16008636f561989345d3f7883bafa not found: ID does not exist" containerID="394cf42f7fa46b2bc8ba4b4ece68dceae5a16008636f561989345d3f7883bafa" Nov 29 07:12:22 crc 
kubenswrapper[4731]: I1129 07:12:22.403495 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"394cf42f7fa46b2bc8ba4b4ece68dceae5a16008636f561989345d3f7883bafa"} err="failed to get container status \"394cf42f7fa46b2bc8ba4b4ece68dceae5a16008636f561989345d3f7883bafa\": rpc error: code = NotFound desc = could not find container \"394cf42f7fa46b2bc8ba4b4ece68dceae5a16008636f561989345d3f7883bafa\": container with ID starting with 394cf42f7fa46b2bc8ba4b4ece68dceae5a16008636f561989345d3f7883bafa not found: ID does not exist" Nov 29 07:12:22 crc kubenswrapper[4731]: I1129 07:12:22.403513 4731 scope.go:117] "RemoveContainer" containerID="dd7178cd50e720a23c3f11679435194f806c3194edb1b6795eda486a79cdaf16" Nov 29 07:12:22 crc kubenswrapper[4731]: I1129 07:12:22.424514 4731 scope.go:117] "RemoveContainer" containerID="8bd59953abb16ec6ff20209c5b172eecb56a7073cf4b6345095a52e4da174e05" Nov 29 07:12:22 crc kubenswrapper[4731]: I1129 07:12:22.455124 4731 scope.go:117] "RemoveContainer" containerID="b6befeb92d8bfe154c60218e52ce36ff04238cee434b33e9c3ad3d5875d9d87c" Nov 29 07:12:22 crc kubenswrapper[4731]: I1129 07:12:22.493602 4731 scope.go:117] "RemoveContainer" containerID="c516a9e4500175b36ea9024d71e7ef1390844dfa3caaf1afac55231644de616f" Nov 29 07:12:22 crc kubenswrapper[4731]: I1129 07:12:22.533378 4731 scope.go:117] "RemoveContainer" containerID="346ce47a61c016901108d17b13d5d00f9c8cfe8dcfb8a19a4edc7f79ec44a7e6" Nov 29 07:12:22 crc kubenswrapper[4731]: I1129 07:12:22.551009 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6ddd799978-rrs4h" Nov 29 07:12:22 crc kubenswrapper[4731]: I1129 07:12:22.569257 4731 scope.go:117] "RemoveContainer" containerID="cf0ecb7c2a9237b7793c8279bf5736aba5902ecc0fdcedea0f632ef211c09820" Nov 29 07:12:22 crc kubenswrapper[4731]: I1129 07:12:22.582283 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-85c8bf77b8-2pn4s" Nov 29 07:12:22 crc kubenswrapper[4731]: I1129 07:12:22.601362 4731 scope.go:117] "RemoveContainer" containerID="ea866a9d60d9083013cbba65a5bf8f68f30a7dd24030c955c7c2b18497870e69" Nov 29 07:12:22 crc kubenswrapper[4731]: I1129 07:12:22.615198 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a22903cc-22e2-4593-b852-1528c07dad76-client-ca\") pod \"a22903cc-22e2-4593-b852-1528c07dad76\" (UID: \"a22903cc-22e2-4593-b852-1528c07dad76\") " Nov 29 07:12:22 crc kubenswrapper[4731]: I1129 07:12:22.615250 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a22903cc-22e2-4593-b852-1528c07dad76-serving-cert\") pod \"a22903cc-22e2-4593-b852-1528c07dad76\" (UID: \"a22903cc-22e2-4593-b852-1528c07dad76\") " Nov 29 07:12:22 crc kubenswrapper[4731]: I1129 07:12:22.615293 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a22903cc-22e2-4593-b852-1528c07dad76-config\") pod \"a22903cc-22e2-4593-b852-1528c07dad76\" (UID: \"a22903cc-22e2-4593-b852-1528c07dad76\") " Nov 29 07:12:22 crc kubenswrapper[4731]: I1129 07:12:22.615369 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4dd412da-7572-4bc3-b743-0f04a0099868-serving-cert\") pod \"4dd412da-7572-4bc3-b743-0f04a0099868\" (UID: \"4dd412da-7572-4bc3-b743-0f04a0099868\") " Nov 29 07:12:22 crc kubenswrapper[4731]: I1129 07:12:22.615425 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4dd412da-7572-4bc3-b743-0f04a0099868-proxy-ca-bundles\") pod \"4dd412da-7572-4bc3-b743-0f04a0099868\" (UID: 
\"4dd412da-7572-4bc3-b743-0f04a0099868\") " Nov 29 07:12:22 crc kubenswrapper[4731]: I1129 07:12:22.615458 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4dd412da-7572-4bc3-b743-0f04a0099868-client-ca\") pod \"4dd412da-7572-4bc3-b743-0f04a0099868\" (UID: \"4dd412da-7572-4bc3-b743-0f04a0099868\") " Nov 29 07:12:22 crc kubenswrapper[4731]: I1129 07:12:22.615624 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4dd412da-7572-4bc3-b743-0f04a0099868-config\") pod \"4dd412da-7572-4bc3-b743-0f04a0099868\" (UID: \"4dd412da-7572-4bc3-b743-0f04a0099868\") " Nov 29 07:12:22 crc kubenswrapper[4731]: I1129 07:12:22.615728 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bnpcl\" (UniqueName: \"kubernetes.io/projected/a22903cc-22e2-4593-b852-1528c07dad76-kube-api-access-bnpcl\") pod \"a22903cc-22e2-4593-b852-1528c07dad76\" (UID: \"a22903cc-22e2-4593-b852-1528c07dad76\") " Nov 29 07:12:22 crc kubenswrapper[4731]: I1129 07:12:22.615777 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2th2q\" (UniqueName: \"kubernetes.io/projected/4dd412da-7572-4bc3-b743-0f04a0099868-kube-api-access-2th2q\") pod \"4dd412da-7572-4bc3-b743-0f04a0099868\" (UID: \"4dd412da-7572-4bc3-b743-0f04a0099868\") " Nov 29 07:12:22 crc kubenswrapper[4731]: I1129 07:12:22.616430 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a22903cc-22e2-4593-b852-1528c07dad76-client-ca" (OuterVolumeSpecName: "client-ca") pod "a22903cc-22e2-4593-b852-1528c07dad76" (UID: "a22903cc-22e2-4593-b852-1528c07dad76"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:12:22 crc kubenswrapper[4731]: I1129 07:12:22.616667 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a22903cc-22e2-4593-b852-1528c07dad76-config" (OuterVolumeSpecName: "config") pod "a22903cc-22e2-4593-b852-1528c07dad76" (UID: "a22903cc-22e2-4593-b852-1528c07dad76"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:12:22 crc kubenswrapper[4731]: I1129 07:12:22.617011 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4dd412da-7572-4bc3-b743-0f04a0099868-client-ca" (OuterVolumeSpecName: "client-ca") pod "4dd412da-7572-4bc3-b743-0f04a0099868" (UID: "4dd412da-7572-4bc3-b743-0f04a0099868"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:12:22 crc kubenswrapper[4731]: I1129 07:12:22.617146 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4dd412da-7572-4bc3-b743-0f04a0099868-config" (OuterVolumeSpecName: "config") pod "4dd412da-7572-4bc3-b743-0f04a0099868" (UID: "4dd412da-7572-4bc3-b743-0f04a0099868"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:12:22 crc kubenswrapper[4731]: I1129 07:12:22.617211 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4dd412da-7572-4bc3-b743-0f04a0099868-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "4dd412da-7572-4bc3-b743-0f04a0099868" (UID: "4dd412da-7572-4bc3-b743-0f04a0099868"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:12:22 crc kubenswrapper[4731]: I1129 07:12:22.617481 4731 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4dd412da-7572-4bc3-b743-0f04a0099868-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:12:22 crc kubenswrapper[4731]: I1129 07:12:22.617509 4731 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a22903cc-22e2-4593-b852-1528c07dad76-client-ca\") on node \"crc\" DevicePath \"\"" Nov 29 07:12:22 crc kubenswrapper[4731]: I1129 07:12:22.617523 4731 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a22903cc-22e2-4593-b852-1528c07dad76-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:12:22 crc kubenswrapper[4731]: I1129 07:12:22.617536 4731 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4dd412da-7572-4bc3-b743-0f04a0099868-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 29 07:12:22 crc kubenswrapper[4731]: I1129 07:12:22.617550 4731 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4dd412da-7572-4bc3-b743-0f04a0099868-client-ca\") on node \"crc\" DevicePath \"\"" Nov 29 07:12:22 crc kubenswrapper[4731]: I1129 07:12:22.624322 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4dd412da-7572-4bc3-b743-0f04a0099868-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "4dd412da-7572-4bc3-b743-0f04a0099868" (UID: "4dd412da-7572-4bc3-b743-0f04a0099868"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:12:22 crc kubenswrapper[4731]: I1129 07:12:22.625533 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4dd412da-7572-4bc3-b743-0f04a0099868-kube-api-access-2th2q" (OuterVolumeSpecName: "kube-api-access-2th2q") pod "4dd412da-7572-4bc3-b743-0f04a0099868" (UID: "4dd412da-7572-4bc3-b743-0f04a0099868"). InnerVolumeSpecName "kube-api-access-2th2q". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:12:22 crc kubenswrapper[4731]: I1129 07:12:22.626707 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a22903cc-22e2-4593-b852-1528c07dad76-kube-api-access-bnpcl" (OuterVolumeSpecName: "kube-api-access-bnpcl") pod "a22903cc-22e2-4593-b852-1528c07dad76" (UID: "a22903cc-22e2-4593-b852-1528c07dad76"). InnerVolumeSpecName "kube-api-access-bnpcl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:12:22 crc kubenswrapper[4731]: I1129 07:12:22.627358 4731 scope.go:117] "RemoveContainer" containerID="18254a0c7f20919524c3f4d26fbc4870e9904338f84aa29836fe16fad3d80c18" Nov 29 07:12:22 crc kubenswrapper[4731]: I1129 07:12:22.632193 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a22903cc-22e2-4593-b852-1528c07dad76-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a22903cc-22e2-4593-b852-1528c07dad76" (UID: "a22903cc-22e2-4593-b852-1528c07dad76"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:12:22 crc kubenswrapper[4731]: I1129 07:12:22.651994 4731 scope.go:117] "RemoveContainer" containerID="6f5f02fa7a2b78a76693c9824adcb59ccf97616abd6688bf19b8df33cf7ced53" Nov 29 07:12:22 crc kubenswrapper[4731]: I1129 07:12:22.675221 4731 scope.go:117] "RemoveContainer" containerID="bde0bb088bbcab741e708c7e351d9a810a98d6100252c0a9621511d8bd02e211" Nov 29 07:12:22 crc kubenswrapper[4731]: I1129 07:12:22.719414 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2th2q\" (UniqueName: \"kubernetes.io/projected/4dd412da-7572-4bc3-b743-0f04a0099868-kube-api-access-2th2q\") on node \"crc\" DevicePath \"\"" Nov 29 07:12:22 crc kubenswrapper[4731]: I1129 07:12:22.719458 4731 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a22903cc-22e2-4593-b852-1528c07dad76-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:12:22 crc kubenswrapper[4731]: I1129 07:12:22.719468 4731 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4dd412da-7572-4bc3-b743-0f04a0099868-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:12:22 crc kubenswrapper[4731]: I1129 07:12:22.719477 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bnpcl\" (UniqueName: \"kubernetes.io/projected/a22903cc-22e2-4593-b852-1528c07dad76-kube-api-access-bnpcl\") on node \"crc\" DevicePath \"\"" Nov 29 07:12:23 crc kubenswrapper[4731]: I1129 07:12:23.279321 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-82hnv" event={"ID":"4325e7fb-0543-4969-8ebb-c2dcf11cc24b","Type":"ContainerStarted","Data":"d49a56260d923ec967a81a1bfa9906e59343fd9ac3ea2914bf3ee50afeb04bcf"} Nov 29 07:12:23 crc kubenswrapper[4731]: I1129 07:12:23.280846 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/marketplace-operator-79b997595-82hnv" Nov 29 07:12:23 crc kubenswrapper[4731]: I1129 07:12:23.282010 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6ddd799978-rrs4h" event={"ID":"a22903cc-22e2-4593-b852-1528c07dad76","Type":"ContainerDied","Data":"b4edc31da2d00a86d204846e5262ab90eca8f34e915f1e8b3cbe2c9f1121dec2"} Nov 29 07:12:23 crc kubenswrapper[4731]: I1129 07:12:23.282060 4731 scope.go:117] "RemoveContainer" containerID="5419138e2fc09e231df83a914c79a390a792fd2cf7449bacab45ab978b00b6a0" Nov 29 07:12:23 crc kubenswrapper[4731]: I1129 07:12:23.282196 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6ddd799978-rrs4h" Nov 29 07:12:23 crc kubenswrapper[4731]: I1129 07:12:23.290886 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-82hnv" Nov 29 07:12:23 crc kubenswrapper[4731]: I1129 07:12:23.305126 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-85c8bf77b8-2pn4s" event={"ID":"4dd412da-7572-4bc3-b743-0f04a0099868","Type":"ContainerDied","Data":"0b34deea88913b815e01300970fffcd94976fb4b9905ef30dc325a287fc347e3"} Nov 29 07:12:23 crc kubenswrapper[4731]: I1129 07:12:23.305147 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-85c8bf77b8-2pn4s" Nov 29 07:12:23 crc kubenswrapper[4731]: I1129 07:12:23.314301 4731 scope.go:117] "RemoveContainer" containerID="7fdbf2cb47ce313db4b73a4123562fe21f00e620b6f6c4f6627bba7918512ace" Nov 29 07:12:23 crc kubenswrapper[4731]: I1129 07:12:23.317916 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-82hnv" podStartSLOduration=3.317870416 podStartE2EDuration="3.317870416s" podCreationTimestamp="2025-11-29 07:12:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:12:23.312505514 +0000 UTC m=+382.202866627" watchObservedRunningTime="2025-11-29 07:12:23.317870416 +0000 UTC m=+382.208231529" Nov 29 07:12:23 crc kubenswrapper[4731]: I1129 07:12:23.334575 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6ddd799978-rrs4h"] Nov 29 07:12:23 crc kubenswrapper[4731]: I1129 07:12:23.339519 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6ddd799978-rrs4h"] Nov 29 07:12:23 crc kubenswrapper[4731]: I1129 07:12:23.397164 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fdtsp"] Nov 29 07:12:23 crc kubenswrapper[4731]: E1129 07:12:23.397644 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8519b0da-9e0e-4c34-98b0-cbcb4030af39" containerName="extract-utilities" Nov 29 07:12:23 crc kubenswrapper[4731]: I1129 07:12:23.397662 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="8519b0da-9e0e-4c34-98b0-cbcb4030af39" containerName="extract-utilities" Nov 29 07:12:23 crc kubenswrapper[4731]: E1129 07:12:23.397673 4731 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="041c9fb8-1657-4070-8649-0297bbba2df1" containerName="extract-content" Nov 29 07:12:23 crc kubenswrapper[4731]: I1129 07:12:23.397680 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="041c9fb8-1657-4070-8649-0297bbba2df1" containerName="extract-content" Nov 29 07:12:23 crc kubenswrapper[4731]: E1129 07:12:23.397690 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d246bdda-5a16-4924-a12a-b29095474226" containerName="registry-server" Nov 29 07:12:23 crc kubenswrapper[4731]: I1129 07:12:23.397697 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="d246bdda-5a16-4924-a12a-b29095474226" containerName="registry-server" Nov 29 07:12:23 crc kubenswrapper[4731]: E1129 07:12:23.397710 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90d637c3-be0e-49b6-ac5a-5cb721948345" containerName="extract-utilities" Nov 29 07:12:23 crc kubenswrapper[4731]: I1129 07:12:23.397718 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="90d637c3-be0e-49b6-ac5a-5cb721948345" containerName="extract-utilities" Nov 29 07:12:23 crc kubenswrapper[4731]: E1129 07:12:23.397728 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="041c9fb8-1657-4070-8649-0297bbba2df1" containerName="registry-server" Nov 29 07:12:23 crc kubenswrapper[4731]: I1129 07:12:23.397735 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="041c9fb8-1657-4070-8649-0297bbba2df1" containerName="registry-server" Nov 29 07:12:23 crc kubenswrapper[4731]: E1129 07:12:23.397746 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="041c9fb8-1657-4070-8649-0297bbba2df1" containerName="extract-utilities" Nov 29 07:12:23 crc kubenswrapper[4731]: I1129 07:12:23.397753 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="041c9fb8-1657-4070-8649-0297bbba2df1" containerName="extract-utilities" Nov 29 07:12:23 crc kubenswrapper[4731]: E1129 07:12:23.397762 4731 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="a22903cc-22e2-4593-b852-1528c07dad76" containerName="route-controller-manager" Nov 29 07:12:23 crc kubenswrapper[4731]: I1129 07:12:23.397770 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="a22903cc-22e2-4593-b852-1528c07dad76" containerName="route-controller-manager" Nov 29 07:12:23 crc kubenswrapper[4731]: E1129 07:12:23.397780 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f435c3d-3db2-44dc-8a50-ea8f9475daa0" containerName="marketplace-operator" Nov 29 07:12:23 crc kubenswrapper[4731]: I1129 07:12:23.397787 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f435c3d-3db2-44dc-8a50-ea8f9475daa0" containerName="marketplace-operator" Nov 29 07:12:23 crc kubenswrapper[4731]: E1129 07:12:23.397794 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90d637c3-be0e-49b6-ac5a-5cb721948345" containerName="extract-content" Nov 29 07:12:23 crc kubenswrapper[4731]: I1129 07:12:23.397802 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="90d637c3-be0e-49b6-ac5a-5cb721948345" containerName="extract-content" Nov 29 07:12:23 crc kubenswrapper[4731]: E1129 07:12:23.397816 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8519b0da-9e0e-4c34-98b0-cbcb4030af39" containerName="registry-server" Nov 29 07:12:23 crc kubenswrapper[4731]: I1129 07:12:23.397823 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="8519b0da-9e0e-4c34-98b0-cbcb4030af39" containerName="registry-server" Nov 29 07:12:23 crc kubenswrapper[4731]: E1129 07:12:23.397834 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f435c3d-3db2-44dc-8a50-ea8f9475daa0" containerName="marketplace-operator" Nov 29 07:12:23 crc kubenswrapper[4731]: I1129 07:12:23.397841 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f435c3d-3db2-44dc-8a50-ea8f9475daa0" containerName="marketplace-operator" Nov 29 07:12:23 crc kubenswrapper[4731]: E1129 07:12:23.397852 4731 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="d246bdda-5a16-4924-a12a-b29095474226" containerName="extract-utilities" Nov 29 07:12:23 crc kubenswrapper[4731]: I1129 07:12:23.397859 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="d246bdda-5a16-4924-a12a-b29095474226" containerName="extract-utilities" Nov 29 07:12:23 crc kubenswrapper[4731]: E1129 07:12:23.397867 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4dd412da-7572-4bc3-b743-0f04a0099868" containerName="controller-manager" Nov 29 07:12:23 crc kubenswrapper[4731]: I1129 07:12:23.397875 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="4dd412da-7572-4bc3-b743-0f04a0099868" containerName="controller-manager" Nov 29 07:12:23 crc kubenswrapper[4731]: E1129 07:12:23.397885 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8519b0da-9e0e-4c34-98b0-cbcb4030af39" containerName="extract-content" Nov 29 07:12:23 crc kubenswrapper[4731]: I1129 07:12:23.397896 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="8519b0da-9e0e-4c34-98b0-cbcb4030af39" containerName="extract-content" Nov 29 07:12:23 crc kubenswrapper[4731]: E1129 07:12:23.397905 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90d637c3-be0e-49b6-ac5a-5cb721948345" containerName="registry-server" Nov 29 07:12:23 crc kubenswrapper[4731]: I1129 07:12:23.397912 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="90d637c3-be0e-49b6-ac5a-5cb721948345" containerName="registry-server" Nov 29 07:12:23 crc kubenswrapper[4731]: E1129 07:12:23.397923 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d246bdda-5a16-4924-a12a-b29095474226" containerName="extract-content" Nov 29 07:12:23 crc kubenswrapper[4731]: I1129 07:12:23.397930 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="d246bdda-5a16-4924-a12a-b29095474226" containerName="extract-content" Nov 29 07:12:23 crc kubenswrapper[4731]: I1129 07:12:23.398095 4731 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="8f435c3d-3db2-44dc-8a50-ea8f9475daa0" containerName="marketplace-operator" Nov 29 07:12:23 crc kubenswrapper[4731]: I1129 07:12:23.398134 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f435c3d-3db2-44dc-8a50-ea8f9475daa0" containerName="marketplace-operator" Nov 29 07:12:23 crc kubenswrapper[4731]: I1129 07:12:23.398149 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="041c9fb8-1657-4070-8649-0297bbba2df1" containerName="registry-server" Nov 29 07:12:23 crc kubenswrapper[4731]: I1129 07:12:23.398165 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="90d637c3-be0e-49b6-ac5a-5cb721948345" containerName="registry-server" Nov 29 07:12:23 crc kubenswrapper[4731]: I1129 07:12:23.398173 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="8519b0da-9e0e-4c34-98b0-cbcb4030af39" containerName="registry-server" Nov 29 07:12:23 crc kubenswrapper[4731]: I1129 07:12:23.398350 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="d246bdda-5a16-4924-a12a-b29095474226" containerName="registry-server" Nov 29 07:12:23 crc kubenswrapper[4731]: I1129 07:12:23.398379 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="a22903cc-22e2-4593-b852-1528c07dad76" containerName="route-controller-manager" Nov 29 07:12:23 crc kubenswrapper[4731]: I1129 07:12:23.398393 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="4dd412da-7572-4bc3-b743-0f04a0099868" containerName="controller-manager" Nov 29 07:12:23 crc kubenswrapper[4731]: I1129 07:12:23.400963 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fdtsp" Nov 29 07:12:23 crc kubenswrapper[4731]: I1129 07:12:23.402862 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-85c8bf77b8-2pn4s"] Nov 29 07:12:23 crc kubenswrapper[4731]: I1129 07:12:23.405520 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 29 07:12:23 crc kubenswrapper[4731]: I1129 07:12:23.407559 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fdtsp"] Nov 29 07:12:23 crc kubenswrapper[4731]: I1129 07:12:23.412602 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-85c8bf77b8-2pn4s"] Nov 29 07:12:23 crc kubenswrapper[4731]: I1129 07:12:23.431030 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e70753c3-39f0-4d12-aff8-c25213451bb5-utilities\") pod \"certified-operators-fdtsp\" (UID: \"e70753c3-39f0-4d12-aff8-c25213451bb5\") " pod="openshift-marketplace/certified-operators-fdtsp" Nov 29 07:12:23 crc kubenswrapper[4731]: I1129 07:12:23.431331 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpncj\" (UniqueName: \"kubernetes.io/projected/e70753c3-39f0-4d12-aff8-c25213451bb5-kube-api-access-wpncj\") pod \"certified-operators-fdtsp\" (UID: \"e70753c3-39f0-4d12-aff8-c25213451bb5\") " pod="openshift-marketplace/certified-operators-fdtsp" Nov 29 07:12:23 crc kubenswrapper[4731]: I1129 07:12:23.431623 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e70753c3-39f0-4d12-aff8-c25213451bb5-catalog-content\") pod \"certified-operators-fdtsp\" (UID: 
\"e70753c3-39f0-4d12-aff8-c25213451bb5\") " pod="openshift-marketplace/certified-operators-fdtsp" Nov 29 07:12:23 crc kubenswrapper[4731]: I1129 07:12:23.533291 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e70753c3-39f0-4d12-aff8-c25213451bb5-utilities\") pod \"certified-operators-fdtsp\" (UID: \"e70753c3-39f0-4d12-aff8-c25213451bb5\") " pod="openshift-marketplace/certified-operators-fdtsp" Nov 29 07:12:23 crc kubenswrapper[4731]: I1129 07:12:23.533714 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wpncj\" (UniqueName: \"kubernetes.io/projected/e70753c3-39f0-4d12-aff8-c25213451bb5-kube-api-access-wpncj\") pod \"certified-operators-fdtsp\" (UID: \"e70753c3-39f0-4d12-aff8-c25213451bb5\") " pod="openshift-marketplace/certified-operators-fdtsp" Nov 29 07:12:23 crc kubenswrapper[4731]: I1129 07:12:23.534272 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e70753c3-39f0-4d12-aff8-c25213451bb5-catalog-content\") pod \"certified-operators-fdtsp\" (UID: \"e70753c3-39f0-4d12-aff8-c25213451bb5\") " pod="openshift-marketplace/certified-operators-fdtsp" Nov 29 07:12:23 crc kubenswrapper[4731]: I1129 07:12:23.534776 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e70753c3-39f0-4d12-aff8-c25213451bb5-catalog-content\") pod \"certified-operators-fdtsp\" (UID: \"e70753c3-39f0-4d12-aff8-c25213451bb5\") " pod="openshift-marketplace/certified-operators-fdtsp" Nov 29 07:12:23 crc kubenswrapper[4731]: I1129 07:12:23.534024 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e70753c3-39f0-4d12-aff8-c25213451bb5-utilities\") pod \"certified-operators-fdtsp\" (UID: \"e70753c3-39f0-4d12-aff8-c25213451bb5\") 
" pod="openshift-marketplace/certified-operators-fdtsp" Nov 29 07:12:23 crc kubenswrapper[4731]: I1129 07:12:23.558025 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpncj\" (UniqueName: \"kubernetes.io/projected/e70753c3-39f0-4d12-aff8-c25213451bb5-kube-api-access-wpncj\") pod \"certified-operators-fdtsp\" (UID: \"e70753c3-39f0-4d12-aff8-c25213451bb5\") " pod="openshift-marketplace/certified-operators-fdtsp" Nov 29 07:12:23 crc kubenswrapper[4731]: I1129 07:12:23.564303 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-t7tn5"] Nov 29 07:12:23 crc kubenswrapper[4731]: I1129 07:12:23.565985 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t7tn5" Nov 29 07:12:23 crc kubenswrapper[4731]: I1129 07:12:23.570926 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 29 07:12:23 crc kubenswrapper[4731]: I1129 07:12:23.580407 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t7tn5"] Nov 29 07:12:23 crc kubenswrapper[4731]: I1129 07:12:23.635824 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30728f36-1a1f-4d10-9a28-c50eca791478-catalog-content\") pod \"redhat-operators-t7tn5\" (UID: \"30728f36-1a1f-4d10-9a28-c50eca791478\") " pod="openshift-marketplace/redhat-operators-t7tn5" Nov 29 07:12:23 crc kubenswrapper[4731]: I1129 07:12:23.636451 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hv9z6\" (UniqueName: \"kubernetes.io/projected/30728f36-1a1f-4d10-9a28-c50eca791478-kube-api-access-hv9z6\") pod \"redhat-operators-t7tn5\" (UID: \"30728f36-1a1f-4d10-9a28-c50eca791478\") " 
pod="openshift-marketplace/redhat-operators-t7tn5" Nov 29 07:12:23 crc kubenswrapper[4731]: I1129 07:12:23.636894 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30728f36-1a1f-4d10-9a28-c50eca791478-utilities\") pod \"redhat-operators-t7tn5\" (UID: \"30728f36-1a1f-4d10-9a28-c50eca791478\") " pod="openshift-marketplace/redhat-operators-t7tn5" Nov 29 07:12:23 crc kubenswrapper[4731]: I1129 07:12:23.730812 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fdtsp" Nov 29 07:12:23 crc kubenswrapper[4731]: I1129 07:12:23.738545 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30728f36-1a1f-4d10-9a28-c50eca791478-catalog-content\") pod \"redhat-operators-t7tn5\" (UID: \"30728f36-1a1f-4d10-9a28-c50eca791478\") " pod="openshift-marketplace/redhat-operators-t7tn5" Nov 29 07:12:23 crc kubenswrapper[4731]: I1129 07:12:23.738623 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hv9z6\" (UniqueName: \"kubernetes.io/projected/30728f36-1a1f-4d10-9a28-c50eca791478-kube-api-access-hv9z6\") pod \"redhat-operators-t7tn5\" (UID: \"30728f36-1a1f-4d10-9a28-c50eca791478\") " pod="openshift-marketplace/redhat-operators-t7tn5" Nov 29 07:12:23 crc kubenswrapper[4731]: I1129 07:12:23.738703 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30728f36-1a1f-4d10-9a28-c50eca791478-utilities\") pod \"redhat-operators-t7tn5\" (UID: \"30728f36-1a1f-4d10-9a28-c50eca791478\") " pod="openshift-marketplace/redhat-operators-t7tn5" Nov 29 07:12:23 crc kubenswrapper[4731]: I1129 07:12:23.739274 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/30728f36-1a1f-4d10-9a28-c50eca791478-utilities\") pod \"redhat-operators-t7tn5\" (UID: \"30728f36-1a1f-4d10-9a28-c50eca791478\") " pod="openshift-marketplace/redhat-operators-t7tn5" Nov 29 07:12:23 crc kubenswrapper[4731]: I1129 07:12:23.739376 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30728f36-1a1f-4d10-9a28-c50eca791478-catalog-content\") pod \"redhat-operators-t7tn5\" (UID: \"30728f36-1a1f-4d10-9a28-c50eca791478\") " pod="openshift-marketplace/redhat-operators-t7tn5" Nov 29 07:12:23 crc kubenswrapper[4731]: I1129 07:12:23.763337 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hv9z6\" (UniqueName: \"kubernetes.io/projected/30728f36-1a1f-4d10-9a28-c50eca791478-kube-api-access-hv9z6\") pod \"redhat-operators-t7tn5\" (UID: \"30728f36-1a1f-4d10-9a28-c50eca791478\") " pod="openshift-marketplace/redhat-operators-t7tn5" Nov 29 07:12:23 crc kubenswrapper[4731]: I1129 07:12:23.841421 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="041c9fb8-1657-4070-8649-0297bbba2df1" path="/var/lib/kubelet/pods/041c9fb8-1657-4070-8649-0297bbba2df1/volumes" Nov 29 07:12:23 crc kubenswrapper[4731]: I1129 07:12:23.843466 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4dd412da-7572-4bc3-b743-0f04a0099868" path="/var/lib/kubelet/pods/4dd412da-7572-4bc3-b743-0f04a0099868/volumes" Nov 29 07:12:23 crc kubenswrapper[4731]: I1129 07:12:23.846714 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8519b0da-9e0e-4c34-98b0-cbcb4030af39" path="/var/lib/kubelet/pods/8519b0da-9e0e-4c34-98b0-cbcb4030af39/volumes" Nov 29 07:12:23 crc kubenswrapper[4731]: I1129 07:12:23.847636 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f435c3d-3db2-44dc-8a50-ea8f9475daa0" path="/var/lib/kubelet/pods/8f435c3d-3db2-44dc-8a50-ea8f9475daa0/volumes" Nov 29 
07:12:23 crc kubenswrapper[4731]: I1129 07:12:23.848246 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90d637c3-be0e-49b6-ac5a-5cb721948345" path="/var/lib/kubelet/pods/90d637c3-be0e-49b6-ac5a-5cb721948345/volumes" Nov 29 07:12:23 crc kubenswrapper[4731]: I1129 07:12:23.855967 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a22903cc-22e2-4593-b852-1528c07dad76" path="/var/lib/kubelet/pods/a22903cc-22e2-4593-b852-1528c07dad76/volumes" Nov 29 07:12:23 crc kubenswrapper[4731]: I1129 07:12:23.858217 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d246bdda-5a16-4924-a12a-b29095474226" path="/var/lib/kubelet/pods/d246bdda-5a16-4924-a12a-b29095474226/volumes" Nov 29 07:12:23 crc kubenswrapper[4731]: I1129 07:12:23.861038 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6688747b6-9kvzr"] Nov 29 07:12:23 crc kubenswrapper[4731]: I1129 07:12:23.861731 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d5d4cc866-9rqrn"] Nov 29 07:12:23 crc kubenswrapper[4731]: I1129 07:12:23.862088 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6688747b6-9kvzr" Nov 29 07:12:23 crc kubenswrapper[4731]: I1129 07:12:23.862172 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d5d4cc866-9rqrn"] Nov 29 07:12:23 crc kubenswrapper[4731]: I1129 07:12:23.862251 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d5d4cc866-9rqrn" Nov 29 07:12:23 crc kubenswrapper[4731]: I1129 07:12:23.866481 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6688747b6-9kvzr"] Nov 29 07:12:23 crc kubenswrapper[4731]: I1129 07:12:23.876490 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 29 07:12:23 crc kubenswrapper[4731]: I1129 07:12:23.876718 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 29 07:12:23 crc kubenswrapper[4731]: I1129 07:12:23.876857 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 29 07:12:23 crc kubenswrapper[4731]: I1129 07:12:23.876965 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 29 07:12:23 crc kubenswrapper[4731]: I1129 07:12:23.877102 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 29 07:12:23 crc kubenswrapper[4731]: I1129 07:12:23.877160 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 29 07:12:23 crc kubenswrapper[4731]: I1129 07:12:23.877261 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 29 07:12:23 crc kubenswrapper[4731]: I1129 07:12:23.877286 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 29 07:12:23 crc kubenswrapper[4731]: I1129 07:12:23.877893 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" 
Nov 29 07:12:23 crc kubenswrapper[4731]: I1129 07:12:23.881217 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 29 07:12:23 crc kubenswrapper[4731]: I1129 07:12:23.881480 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 29 07:12:23 crc kubenswrapper[4731]: I1129 07:12:23.881647 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 29 07:12:23 crc kubenswrapper[4731]: I1129 07:12:23.885116 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 29 07:12:23 crc kubenswrapper[4731]: I1129 07:12:23.897063 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t7tn5" Nov 29 07:12:23 crc kubenswrapper[4731]: I1129 07:12:23.944601 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5dfcdd76-50a4-4e2a-8830-ffef4248ef00-serving-cert\") pod \"route-controller-manager-7d5d4cc866-9rqrn\" (UID: \"5dfcdd76-50a4-4e2a-8830-ffef4248ef00\") " pod="openshift-route-controller-manager/route-controller-manager-7d5d4cc866-9rqrn" Nov 29 07:12:23 crc kubenswrapper[4731]: I1129 07:12:23.944659 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/09988d06-e38d-4cd2-b800-d6dade5f594c-proxy-ca-bundles\") pod \"controller-manager-6688747b6-9kvzr\" (UID: \"09988d06-e38d-4cd2-b800-d6dade5f594c\") " pod="openshift-controller-manager/controller-manager-6688747b6-9kvzr" Nov 29 07:12:23 crc kubenswrapper[4731]: I1129 07:12:23.944705 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-5nz45\" (UniqueName: \"kubernetes.io/projected/5dfcdd76-50a4-4e2a-8830-ffef4248ef00-kube-api-access-5nz45\") pod \"route-controller-manager-7d5d4cc866-9rqrn\" (UID: \"5dfcdd76-50a4-4e2a-8830-ffef4248ef00\") " pod="openshift-route-controller-manager/route-controller-manager-7d5d4cc866-9rqrn" Nov 29 07:12:23 crc kubenswrapper[4731]: I1129 07:12:23.945062 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5dfcdd76-50a4-4e2a-8830-ffef4248ef00-config\") pod \"route-controller-manager-7d5d4cc866-9rqrn\" (UID: \"5dfcdd76-50a4-4e2a-8830-ffef4248ef00\") " pod="openshift-route-controller-manager/route-controller-manager-7d5d4cc866-9rqrn" Nov 29 07:12:23 crc kubenswrapper[4731]: I1129 07:12:23.945165 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9npq\" (UniqueName: \"kubernetes.io/projected/09988d06-e38d-4cd2-b800-d6dade5f594c-kube-api-access-q9npq\") pod \"controller-manager-6688747b6-9kvzr\" (UID: \"09988d06-e38d-4cd2-b800-d6dade5f594c\") " pod="openshift-controller-manager/controller-manager-6688747b6-9kvzr" Nov 29 07:12:23 crc kubenswrapper[4731]: I1129 07:12:23.945401 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09988d06-e38d-4cd2-b800-d6dade5f594c-serving-cert\") pod \"controller-manager-6688747b6-9kvzr\" (UID: \"09988d06-e38d-4cd2-b800-d6dade5f594c\") " pod="openshift-controller-manager/controller-manager-6688747b6-9kvzr" Nov 29 07:12:23 crc kubenswrapper[4731]: I1129 07:12:23.945437 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5dfcdd76-50a4-4e2a-8830-ffef4248ef00-client-ca\") pod \"route-controller-manager-7d5d4cc866-9rqrn\" (UID: 
\"5dfcdd76-50a4-4e2a-8830-ffef4248ef00\") " pod="openshift-route-controller-manager/route-controller-manager-7d5d4cc866-9rqrn" Nov 29 07:12:23 crc kubenswrapper[4731]: I1129 07:12:23.945628 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09988d06-e38d-4cd2-b800-d6dade5f594c-config\") pod \"controller-manager-6688747b6-9kvzr\" (UID: \"09988d06-e38d-4cd2-b800-d6dade5f594c\") " pod="openshift-controller-manager/controller-manager-6688747b6-9kvzr" Nov 29 07:12:23 crc kubenswrapper[4731]: I1129 07:12:23.946049 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/09988d06-e38d-4cd2-b800-d6dade5f594c-client-ca\") pod \"controller-manager-6688747b6-9kvzr\" (UID: \"09988d06-e38d-4cd2-b800-d6dade5f594c\") " pod="openshift-controller-manager/controller-manager-6688747b6-9kvzr" Nov 29 07:12:23 crc kubenswrapper[4731]: I1129 07:12:23.973481 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fdtsp"] Nov 29 07:12:23 crc kubenswrapper[4731]: W1129 07:12:23.982316 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode70753c3_39f0_4d12_aff8_c25213451bb5.slice/crio-e9619c62b1f957183b1c1decf1f7d46eef4ae15bb81ef9eb294d046a08b2a345 WatchSource:0}: Error finding container e9619c62b1f957183b1c1decf1f7d46eef4ae15bb81ef9eb294d046a08b2a345: Status 404 returned error can't find the container with id e9619c62b1f957183b1c1decf1f7d46eef4ae15bb81ef9eb294d046a08b2a345 Nov 29 07:12:24 crc kubenswrapper[4731]: I1129 07:12:24.048157 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5nz45\" (UniqueName: \"kubernetes.io/projected/5dfcdd76-50a4-4e2a-8830-ffef4248ef00-kube-api-access-5nz45\") pod 
\"route-controller-manager-7d5d4cc866-9rqrn\" (UID: \"5dfcdd76-50a4-4e2a-8830-ffef4248ef00\") " pod="openshift-route-controller-manager/route-controller-manager-7d5d4cc866-9rqrn" Nov 29 07:12:24 crc kubenswrapper[4731]: I1129 07:12:24.048215 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5dfcdd76-50a4-4e2a-8830-ffef4248ef00-config\") pod \"route-controller-manager-7d5d4cc866-9rqrn\" (UID: \"5dfcdd76-50a4-4e2a-8830-ffef4248ef00\") " pod="openshift-route-controller-manager/route-controller-manager-7d5d4cc866-9rqrn" Nov 29 07:12:24 crc kubenswrapper[4731]: I1129 07:12:24.048240 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9npq\" (UniqueName: \"kubernetes.io/projected/09988d06-e38d-4cd2-b800-d6dade5f594c-kube-api-access-q9npq\") pod \"controller-manager-6688747b6-9kvzr\" (UID: \"09988d06-e38d-4cd2-b800-d6dade5f594c\") " pod="openshift-controller-manager/controller-manager-6688747b6-9kvzr" Nov 29 07:12:24 crc kubenswrapper[4731]: I1129 07:12:24.048292 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09988d06-e38d-4cd2-b800-d6dade5f594c-serving-cert\") pod \"controller-manager-6688747b6-9kvzr\" (UID: \"09988d06-e38d-4cd2-b800-d6dade5f594c\") " pod="openshift-controller-manager/controller-manager-6688747b6-9kvzr" Nov 29 07:12:24 crc kubenswrapper[4731]: I1129 07:12:24.048317 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5dfcdd76-50a4-4e2a-8830-ffef4248ef00-client-ca\") pod \"route-controller-manager-7d5d4cc866-9rqrn\" (UID: \"5dfcdd76-50a4-4e2a-8830-ffef4248ef00\") " pod="openshift-route-controller-manager/route-controller-manager-7d5d4cc866-9rqrn" Nov 29 07:12:24 crc kubenswrapper[4731]: I1129 07:12:24.048348 4731 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09988d06-e38d-4cd2-b800-d6dade5f594c-config\") pod \"controller-manager-6688747b6-9kvzr\" (UID: \"09988d06-e38d-4cd2-b800-d6dade5f594c\") " pod="openshift-controller-manager/controller-manager-6688747b6-9kvzr" Nov 29 07:12:24 crc kubenswrapper[4731]: I1129 07:12:24.048390 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/09988d06-e38d-4cd2-b800-d6dade5f594c-client-ca\") pod \"controller-manager-6688747b6-9kvzr\" (UID: \"09988d06-e38d-4cd2-b800-d6dade5f594c\") " pod="openshift-controller-manager/controller-manager-6688747b6-9kvzr" Nov 29 07:12:24 crc kubenswrapper[4731]: I1129 07:12:24.048411 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5dfcdd76-50a4-4e2a-8830-ffef4248ef00-serving-cert\") pod \"route-controller-manager-7d5d4cc866-9rqrn\" (UID: \"5dfcdd76-50a4-4e2a-8830-ffef4248ef00\") " pod="openshift-route-controller-manager/route-controller-manager-7d5d4cc866-9rqrn" Nov 29 07:12:24 crc kubenswrapper[4731]: I1129 07:12:24.048434 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/09988d06-e38d-4cd2-b800-d6dade5f594c-proxy-ca-bundles\") pod \"controller-manager-6688747b6-9kvzr\" (UID: \"09988d06-e38d-4cd2-b800-d6dade5f594c\") " pod="openshift-controller-manager/controller-manager-6688747b6-9kvzr" Nov 29 07:12:24 crc kubenswrapper[4731]: I1129 07:12:24.050871 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/09988d06-e38d-4cd2-b800-d6dade5f594c-proxy-ca-bundles\") pod \"controller-manager-6688747b6-9kvzr\" (UID: \"09988d06-e38d-4cd2-b800-d6dade5f594c\") " pod="openshift-controller-manager/controller-manager-6688747b6-9kvzr" Nov 29 
07:12:24 crc kubenswrapper[4731]: I1129 07:12:24.051060 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5dfcdd76-50a4-4e2a-8830-ffef4248ef00-client-ca\") pod \"route-controller-manager-7d5d4cc866-9rqrn\" (UID: \"5dfcdd76-50a4-4e2a-8830-ffef4248ef00\") " pod="openshift-route-controller-manager/route-controller-manager-7d5d4cc866-9rqrn" Nov 29 07:12:24 crc kubenswrapper[4731]: I1129 07:12:24.051324 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5dfcdd76-50a4-4e2a-8830-ffef4248ef00-config\") pod \"route-controller-manager-7d5d4cc866-9rqrn\" (UID: \"5dfcdd76-50a4-4e2a-8830-ffef4248ef00\") " pod="openshift-route-controller-manager/route-controller-manager-7d5d4cc866-9rqrn" Nov 29 07:12:24 crc kubenswrapper[4731]: I1129 07:12:24.055340 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/09988d06-e38d-4cd2-b800-d6dade5f594c-client-ca\") pod \"controller-manager-6688747b6-9kvzr\" (UID: \"09988d06-e38d-4cd2-b800-d6dade5f594c\") " pod="openshift-controller-manager/controller-manager-6688747b6-9kvzr" Nov 29 07:12:24 crc kubenswrapper[4731]: I1129 07:12:24.056072 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09988d06-e38d-4cd2-b800-d6dade5f594c-config\") pod \"controller-manager-6688747b6-9kvzr\" (UID: \"09988d06-e38d-4cd2-b800-d6dade5f594c\") " pod="openshift-controller-manager/controller-manager-6688747b6-9kvzr" Nov 29 07:12:24 crc kubenswrapper[4731]: I1129 07:12:24.057593 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5dfcdd76-50a4-4e2a-8830-ffef4248ef00-serving-cert\") pod \"route-controller-manager-7d5d4cc866-9rqrn\" (UID: \"5dfcdd76-50a4-4e2a-8830-ffef4248ef00\") " 
pod="openshift-route-controller-manager/route-controller-manager-7d5d4cc866-9rqrn" Nov 29 07:12:24 crc kubenswrapper[4731]: I1129 07:12:24.058328 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09988d06-e38d-4cd2-b800-d6dade5f594c-serving-cert\") pod \"controller-manager-6688747b6-9kvzr\" (UID: \"09988d06-e38d-4cd2-b800-d6dade5f594c\") " pod="openshift-controller-manager/controller-manager-6688747b6-9kvzr" Nov 29 07:12:24 crc kubenswrapper[4731]: I1129 07:12:24.068016 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nz45\" (UniqueName: \"kubernetes.io/projected/5dfcdd76-50a4-4e2a-8830-ffef4248ef00-kube-api-access-5nz45\") pod \"route-controller-manager-7d5d4cc866-9rqrn\" (UID: \"5dfcdd76-50a4-4e2a-8830-ffef4248ef00\") " pod="openshift-route-controller-manager/route-controller-manager-7d5d4cc866-9rqrn" Nov 29 07:12:24 crc kubenswrapper[4731]: I1129 07:12:24.070078 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9npq\" (UniqueName: \"kubernetes.io/projected/09988d06-e38d-4cd2-b800-d6dade5f594c-kube-api-access-q9npq\") pod \"controller-manager-6688747b6-9kvzr\" (UID: \"09988d06-e38d-4cd2-b800-d6dade5f594c\") " pod="openshift-controller-manager/controller-manager-6688747b6-9kvzr" Nov 29 07:12:24 crc kubenswrapper[4731]: I1129 07:12:24.188462 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6688747b6-9kvzr" Nov 29 07:12:24 crc kubenswrapper[4731]: I1129 07:12:24.202143 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d5d4cc866-9rqrn" Nov 29 07:12:24 crc kubenswrapper[4731]: I1129 07:12:24.345556 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t7tn5"] Nov 29 07:12:24 crc kubenswrapper[4731]: I1129 07:12:24.382717 4731 generic.go:334] "Generic (PLEG): container finished" podID="e70753c3-39f0-4d12-aff8-c25213451bb5" containerID="96cd5ec08f06fd694db1439aaa45b8460486678e6d5dfc1dec24afad3fe8cd35" exitCode=0 Nov 29 07:12:24 crc kubenswrapper[4731]: I1129 07:12:24.383264 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fdtsp" event={"ID":"e70753c3-39f0-4d12-aff8-c25213451bb5","Type":"ContainerDied","Data":"96cd5ec08f06fd694db1439aaa45b8460486678e6d5dfc1dec24afad3fe8cd35"} Nov 29 07:12:24 crc kubenswrapper[4731]: I1129 07:12:24.383493 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fdtsp" event={"ID":"e70753c3-39f0-4d12-aff8-c25213451bb5","Type":"ContainerStarted","Data":"e9619c62b1f957183b1c1decf1f7d46eef4ae15bb81ef9eb294d046a08b2a345"} Nov 29 07:12:24 crc kubenswrapper[4731]: I1129 07:12:24.472232 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6688747b6-9kvzr"] Nov 29 07:12:24 crc kubenswrapper[4731]: W1129 07:12:24.475938 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod09988d06_e38d_4cd2_b800_d6dade5f594c.slice/crio-65fce7ec5e22a2c7cbd505cbe9a8ce56b86933555f1f5e50cc248303067538e1 WatchSource:0}: Error finding container 65fce7ec5e22a2c7cbd505cbe9a8ce56b86933555f1f5e50cc248303067538e1: Status 404 returned error can't find the container with id 65fce7ec5e22a2c7cbd505cbe9a8ce56b86933555f1f5e50cc248303067538e1 Nov 29 07:12:24 crc kubenswrapper[4731]: I1129 07:12:24.504864 4731 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d5d4cc866-9rqrn"] Nov 29 07:12:24 crc kubenswrapper[4731]: W1129 07:12:24.508996 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5dfcdd76_50a4_4e2a_8830_ffef4248ef00.slice/crio-038c99636e1f1071779ead9b10221445e75681fe8312fa7fb324cd9a8a308a9b WatchSource:0}: Error finding container 038c99636e1f1071779ead9b10221445e75681fe8312fa7fb324cd9a8a308a9b: Status 404 returned error can't find the container with id 038c99636e1f1071779ead9b10221445e75681fe8312fa7fb324cd9a8a308a9b Nov 29 07:12:25 crc kubenswrapper[4731]: I1129 07:12:25.391701 4731 generic.go:334] "Generic (PLEG): container finished" podID="30728f36-1a1f-4d10-9a28-c50eca791478" containerID="34edddab7ffb554c5bef020f75b4d55e5cfd14dcf30aa5156dc15572a2f0114a" exitCode=0 Nov 29 07:12:25 crc kubenswrapper[4731]: I1129 07:12:25.391751 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t7tn5" event={"ID":"30728f36-1a1f-4d10-9a28-c50eca791478","Type":"ContainerDied","Data":"34edddab7ffb554c5bef020f75b4d55e5cfd14dcf30aa5156dc15572a2f0114a"} Nov 29 07:12:25 crc kubenswrapper[4731]: I1129 07:12:25.392206 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t7tn5" event={"ID":"30728f36-1a1f-4d10-9a28-c50eca791478","Type":"ContainerStarted","Data":"d1a69f2a6efccc3239512ec18c349b0683f301a4e6b27cfb60d550ace911e7ac"} Nov 29 07:12:25 crc kubenswrapper[4731]: I1129 07:12:25.393906 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6688747b6-9kvzr" event={"ID":"09988d06-e38d-4cd2-b800-d6dade5f594c","Type":"ContainerStarted","Data":"bc4ba5fdc4e71d1b24f87a8ad16cee69b01c013ee5e18414fa66ebc96e1bce4e"} Nov 29 07:12:25 crc kubenswrapper[4731]: I1129 07:12:25.393961 4731 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-controller-manager/controller-manager-6688747b6-9kvzr" event={"ID":"09988d06-e38d-4cd2-b800-d6dade5f594c","Type":"ContainerStarted","Data":"65fce7ec5e22a2c7cbd505cbe9a8ce56b86933555f1f5e50cc248303067538e1"} Nov 29 07:12:25 crc kubenswrapper[4731]: I1129 07:12:25.394087 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6688747b6-9kvzr" Nov 29 07:12:25 crc kubenswrapper[4731]: I1129 07:12:25.396304 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fdtsp" event={"ID":"e70753c3-39f0-4d12-aff8-c25213451bb5","Type":"ContainerStarted","Data":"94e55d1fe50b748defc9019cea2e94dfac1b216a5c573b2f567f1b9af1be927f"} Nov 29 07:12:25 crc kubenswrapper[4731]: I1129 07:12:25.399281 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d5d4cc866-9rqrn" event={"ID":"5dfcdd76-50a4-4e2a-8830-ffef4248ef00","Type":"ContainerStarted","Data":"3f7fca3eec9cfe3f69b1992941cc7fbefab5121f74a15b7551ed7b1dd3c2ee46"} Nov 29 07:12:25 crc kubenswrapper[4731]: I1129 07:12:25.399331 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d5d4cc866-9rqrn" event={"ID":"5dfcdd76-50a4-4e2a-8830-ffef4248ef00","Type":"ContainerStarted","Data":"038c99636e1f1071779ead9b10221445e75681fe8312fa7fb324cd9a8a308a9b"} Nov 29 07:12:25 crc kubenswrapper[4731]: I1129 07:12:25.399532 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7d5d4cc866-9rqrn" Nov 29 07:12:25 crc kubenswrapper[4731]: I1129 07:12:25.404802 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7d5d4cc866-9rqrn" Nov 29 07:12:25 crc kubenswrapper[4731]: I1129 07:12:25.407364 4731 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6688747b6-9kvzr" Nov 29 07:12:25 crc kubenswrapper[4731]: I1129 07:12:25.437225 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7d5d4cc866-9rqrn" podStartSLOduration=3.437198929 podStartE2EDuration="3.437198929s" podCreationTimestamp="2025-11-29 07:12:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:12:25.43412979 +0000 UTC m=+384.324490893" watchObservedRunningTime="2025-11-29 07:12:25.437198929 +0000 UTC m=+384.327560032" Nov 29 07:12:25 crc kubenswrapper[4731]: I1129 07:12:25.518286 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6688747b6-9kvzr" podStartSLOduration=4.51826897 podStartE2EDuration="4.51826897s" podCreationTimestamp="2025-11-29 07:12:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:12:25.486777946 +0000 UTC m=+384.377139059" watchObservedRunningTime="2025-11-29 07:12:25.51826897 +0000 UTC m=+384.408630073" Nov 29 07:12:25 crc kubenswrapper[4731]: I1129 07:12:25.770080 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-shxb6"] Nov 29 07:12:25 crc kubenswrapper[4731]: I1129 07:12:25.772672 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-shxb6" Nov 29 07:12:25 crc kubenswrapper[4731]: I1129 07:12:25.777089 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 29 07:12:25 crc kubenswrapper[4731]: I1129 07:12:25.781097 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-shxb6"] Nov 29 07:12:25 crc kubenswrapper[4731]: I1129 07:12:25.876685 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/406d98ce-84d9-4d8a-8567-9b82123cf323-utilities\") pod \"community-operators-shxb6\" (UID: \"406d98ce-84d9-4d8a-8567-9b82123cf323\") " pod="openshift-marketplace/community-operators-shxb6" Nov 29 07:12:25 crc kubenswrapper[4731]: I1129 07:12:25.876740 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/406d98ce-84d9-4d8a-8567-9b82123cf323-catalog-content\") pod \"community-operators-shxb6\" (UID: \"406d98ce-84d9-4d8a-8567-9b82123cf323\") " pod="openshift-marketplace/community-operators-shxb6" Nov 29 07:12:25 crc kubenswrapper[4731]: I1129 07:12:25.877097 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwc8w\" (UniqueName: \"kubernetes.io/projected/406d98ce-84d9-4d8a-8567-9b82123cf323-kube-api-access-pwc8w\") pod \"community-operators-shxb6\" (UID: \"406d98ce-84d9-4d8a-8567-9b82123cf323\") " pod="openshift-marketplace/community-operators-shxb6" Nov 29 07:12:25 crc kubenswrapper[4731]: I1129 07:12:25.972735 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qj69f"] Nov 29 07:12:25 crc kubenswrapper[4731]: I1129 07:12:25.974552 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qj69f" Nov 29 07:12:25 crc kubenswrapper[4731]: I1129 07:12:25.976682 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qj69f"] Nov 29 07:12:25 crc kubenswrapper[4731]: I1129 07:12:25.977641 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 29 07:12:25 crc kubenswrapper[4731]: I1129 07:12:25.978990 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/406d98ce-84d9-4d8a-8567-9b82123cf323-utilities\") pod \"community-operators-shxb6\" (UID: \"406d98ce-84d9-4d8a-8567-9b82123cf323\") " pod="openshift-marketplace/community-operators-shxb6" Nov 29 07:12:25 crc kubenswrapper[4731]: I1129 07:12:25.979063 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/406d98ce-84d9-4d8a-8567-9b82123cf323-catalog-content\") pod \"community-operators-shxb6\" (UID: \"406d98ce-84d9-4d8a-8567-9b82123cf323\") " pod="openshift-marketplace/community-operators-shxb6" Nov 29 07:12:25 crc kubenswrapper[4731]: I1129 07:12:25.979144 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwc8w\" (UniqueName: \"kubernetes.io/projected/406d98ce-84d9-4d8a-8567-9b82123cf323-kube-api-access-pwc8w\") pod \"community-operators-shxb6\" (UID: \"406d98ce-84d9-4d8a-8567-9b82123cf323\") " pod="openshift-marketplace/community-operators-shxb6" Nov 29 07:12:25 crc kubenswrapper[4731]: I1129 07:12:25.979637 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/406d98ce-84d9-4d8a-8567-9b82123cf323-utilities\") pod \"community-operators-shxb6\" (UID: \"406d98ce-84d9-4d8a-8567-9b82123cf323\") " 
pod="openshift-marketplace/community-operators-shxb6" Nov 29 07:12:25 crc kubenswrapper[4731]: I1129 07:12:25.979745 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/406d98ce-84d9-4d8a-8567-9b82123cf323-catalog-content\") pod \"community-operators-shxb6\" (UID: \"406d98ce-84d9-4d8a-8567-9b82123cf323\") " pod="openshift-marketplace/community-operators-shxb6" Nov 29 07:12:26 crc kubenswrapper[4731]: I1129 07:12:26.008933 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwc8w\" (UniqueName: \"kubernetes.io/projected/406d98ce-84d9-4d8a-8567-9b82123cf323-kube-api-access-pwc8w\") pod \"community-operators-shxb6\" (UID: \"406d98ce-84d9-4d8a-8567-9b82123cf323\") " pod="openshift-marketplace/community-operators-shxb6" Nov 29 07:12:26 crc kubenswrapper[4731]: I1129 07:12:26.080789 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8eb37715-795e-4a3a-89c3-caa27a8ad0fc-utilities\") pod \"redhat-marketplace-qj69f\" (UID: \"8eb37715-795e-4a3a-89c3-caa27a8ad0fc\") " pod="openshift-marketplace/redhat-marketplace-qj69f" Nov 29 07:12:26 crc kubenswrapper[4731]: I1129 07:12:26.080875 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8eb37715-795e-4a3a-89c3-caa27a8ad0fc-catalog-content\") pod \"redhat-marketplace-qj69f\" (UID: \"8eb37715-795e-4a3a-89c3-caa27a8ad0fc\") " pod="openshift-marketplace/redhat-marketplace-qj69f" Nov 29 07:12:26 crc kubenswrapper[4731]: I1129 07:12:26.081054 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plctk\" (UniqueName: \"kubernetes.io/projected/8eb37715-795e-4a3a-89c3-caa27a8ad0fc-kube-api-access-plctk\") pod \"redhat-marketplace-qj69f\" (UID: 
\"8eb37715-795e-4a3a-89c3-caa27a8ad0fc\") " pod="openshift-marketplace/redhat-marketplace-qj69f" Nov 29 07:12:26 crc kubenswrapper[4731]: I1129 07:12:26.098319 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-shxb6" Nov 29 07:12:26 crc kubenswrapper[4731]: I1129 07:12:26.182202 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plctk\" (UniqueName: \"kubernetes.io/projected/8eb37715-795e-4a3a-89c3-caa27a8ad0fc-kube-api-access-plctk\") pod \"redhat-marketplace-qj69f\" (UID: \"8eb37715-795e-4a3a-89c3-caa27a8ad0fc\") " pod="openshift-marketplace/redhat-marketplace-qj69f" Nov 29 07:12:26 crc kubenswrapper[4731]: I1129 07:12:26.182835 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8eb37715-795e-4a3a-89c3-caa27a8ad0fc-utilities\") pod \"redhat-marketplace-qj69f\" (UID: \"8eb37715-795e-4a3a-89c3-caa27a8ad0fc\") " pod="openshift-marketplace/redhat-marketplace-qj69f" Nov 29 07:12:26 crc kubenswrapper[4731]: I1129 07:12:26.182898 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8eb37715-795e-4a3a-89c3-caa27a8ad0fc-catalog-content\") pod \"redhat-marketplace-qj69f\" (UID: \"8eb37715-795e-4a3a-89c3-caa27a8ad0fc\") " pod="openshift-marketplace/redhat-marketplace-qj69f" Nov 29 07:12:26 crc kubenswrapper[4731]: I1129 07:12:26.183298 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8eb37715-795e-4a3a-89c3-caa27a8ad0fc-utilities\") pod \"redhat-marketplace-qj69f\" (UID: \"8eb37715-795e-4a3a-89c3-caa27a8ad0fc\") " pod="openshift-marketplace/redhat-marketplace-qj69f" Nov 29 07:12:26 crc kubenswrapper[4731]: I1129 07:12:26.183336 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8eb37715-795e-4a3a-89c3-caa27a8ad0fc-catalog-content\") pod \"redhat-marketplace-qj69f\" (UID: \"8eb37715-795e-4a3a-89c3-caa27a8ad0fc\") " pod="openshift-marketplace/redhat-marketplace-qj69f" Nov 29 07:12:26 crc kubenswrapper[4731]: I1129 07:12:26.211633 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plctk\" (UniqueName: \"kubernetes.io/projected/8eb37715-795e-4a3a-89c3-caa27a8ad0fc-kube-api-access-plctk\") pod \"redhat-marketplace-qj69f\" (UID: \"8eb37715-795e-4a3a-89c3-caa27a8ad0fc\") " pod="openshift-marketplace/redhat-marketplace-qj69f" Nov 29 07:12:26 crc kubenswrapper[4731]: I1129 07:12:26.364115 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-shxb6"] Nov 29 07:12:26 crc kubenswrapper[4731]: I1129 07:12:26.391728 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qj69f" Nov 29 07:12:26 crc kubenswrapper[4731]: I1129 07:12:26.412244 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t7tn5" event={"ID":"30728f36-1a1f-4d10-9a28-c50eca791478","Type":"ContainerStarted","Data":"6f79f029266a03df3fc3b95fede1b5c1b56b065a00188709ad3fd038d2856ff0"} Nov 29 07:12:26 crc kubenswrapper[4731]: I1129 07:12:26.414510 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-shxb6" event={"ID":"406d98ce-84d9-4d8a-8567-9b82123cf323","Type":"ContainerStarted","Data":"cb8f1a06d48dcb711578eccc3aba145793ebc8ca8e78622f960b14cbfa781173"} Nov 29 07:12:26 crc kubenswrapper[4731]: I1129 07:12:26.418079 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fdtsp" event={"ID":"e70753c3-39f0-4d12-aff8-c25213451bb5","Type":"ContainerDied","Data":"94e55d1fe50b748defc9019cea2e94dfac1b216a5c573b2f567f1b9af1be927f"} Nov 29 07:12:26 
crc kubenswrapper[4731]: I1129 07:12:26.418181 4731 generic.go:334] "Generic (PLEG): container finished" podID="e70753c3-39f0-4d12-aff8-c25213451bb5" containerID="94e55d1fe50b748defc9019cea2e94dfac1b216a5c573b2f567f1b9af1be927f" exitCode=0 Nov 29 07:12:26 crc kubenswrapper[4731]: I1129 07:12:26.885465 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qj69f"] Nov 29 07:12:27 crc kubenswrapper[4731]: E1129 07:12:27.177284 4731 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8eb37715_795e_4a3a_89c3_caa27a8ad0fc.slice/crio-conmon-6a2081b1626f51391269908fb176c745e0250dd2562d7d945fa49efe5d4c79cf.scope\": RecentStats: unable to find data in memory cache]" Nov 29 07:12:27 crc kubenswrapper[4731]: I1129 07:12:27.425431 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fdtsp" event={"ID":"e70753c3-39f0-4d12-aff8-c25213451bb5","Type":"ContainerStarted","Data":"59d4dd0a2b2ce564fab609dc40e6cd38bf7c881da0d215137091a236da04754d"} Nov 29 07:12:27 crc kubenswrapper[4731]: I1129 07:12:27.429029 4731 generic.go:334] "Generic (PLEG): container finished" podID="30728f36-1a1f-4d10-9a28-c50eca791478" containerID="6f79f029266a03df3fc3b95fede1b5c1b56b065a00188709ad3fd038d2856ff0" exitCode=0 Nov 29 07:12:27 crc kubenswrapper[4731]: I1129 07:12:27.429085 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t7tn5" event={"ID":"30728f36-1a1f-4d10-9a28-c50eca791478","Type":"ContainerDied","Data":"6f79f029266a03df3fc3b95fede1b5c1b56b065a00188709ad3fd038d2856ff0"} Nov 29 07:12:27 crc kubenswrapper[4731]: I1129 07:12:27.431585 4731 generic.go:334] "Generic (PLEG): container finished" podID="406d98ce-84d9-4d8a-8567-9b82123cf323" containerID="2d095e24465a47a557f6b0649ead14b72707bddf93bcc853f5d32f78892c782c" exitCode=0 Nov 29 
07:12:27 crc kubenswrapper[4731]: I1129 07:12:27.431631 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-shxb6" event={"ID":"406d98ce-84d9-4d8a-8567-9b82123cf323","Type":"ContainerDied","Data":"2d095e24465a47a557f6b0649ead14b72707bddf93bcc853f5d32f78892c782c"} Nov 29 07:12:27 crc kubenswrapper[4731]: I1129 07:12:27.439877 4731 generic.go:334] "Generic (PLEG): container finished" podID="8eb37715-795e-4a3a-89c3-caa27a8ad0fc" containerID="6a2081b1626f51391269908fb176c745e0250dd2562d7d945fa49efe5d4c79cf" exitCode=0 Nov 29 07:12:27 crc kubenswrapper[4731]: I1129 07:12:27.440022 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qj69f" event={"ID":"8eb37715-795e-4a3a-89c3-caa27a8ad0fc","Type":"ContainerDied","Data":"6a2081b1626f51391269908fb176c745e0250dd2562d7d945fa49efe5d4c79cf"} Nov 29 07:12:27 crc kubenswrapper[4731]: I1129 07:12:27.440053 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qj69f" event={"ID":"8eb37715-795e-4a3a-89c3-caa27a8ad0fc","Type":"ContainerStarted","Data":"f6b23e3973acadd0b7bc80bffb879b1c978e79d66b4c67e646ff82d360c5322e"} Nov 29 07:12:27 crc kubenswrapper[4731]: I1129 07:12:27.452242 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fdtsp" podStartSLOduration=1.7805617919999999 podStartE2EDuration="4.452223403s" podCreationTimestamp="2025-11-29 07:12:23 +0000 UTC" firstStartedPulling="2025-11-29 07:12:24.38675695 +0000 UTC m=+383.277118063" lastFinishedPulling="2025-11-29 07:12:27.058418581 +0000 UTC m=+385.948779674" observedRunningTime="2025-11-29 07:12:27.449715652 +0000 UTC m=+386.340076755" watchObservedRunningTime="2025-11-29 07:12:27.452223403 +0000 UTC m=+386.342584516" Nov 29 07:12:28 crc kubenswrapper[4731]: I1129 07:12:28.461266 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-t7tn5" event={"ID":"30728f36-1a1f-4d10-9a28-c50eca791478","Type":"ContainerStarted","Data":"6d24db3c1d138d2187a562dd5442c0af57bf26d90253205f92855c7915f81b6c"} Nov 29 07:12:28 crc kubenswrapper[4731]: I1129 07:12:28.465967 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qj69f" event={"ID":"8eb37715-795e-4a3a-89c3-caa27a8ad0fc","Type":"ContainerStarted","Data":"cb06a1a4fa45a9fac09c4a9cf4ab4d32a913a23a10acddf4238d5c92238413ee"} Nov 29 07:12:28 crc kubenswrapper[4731]: I1129 07:12:28.487435 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-t7tn5" podStartSLOduration=2.995610052 podStartE2EDuration="5.487414651s" podCreationTimestamp="2025-11-29 07:12:23 +0000 UTC" firstStartedPulling="2025-11-29 07:12:25.39343828 +0000 UTC m=+384.283799383" lastFinishedPulling="2025-11-29 07:12:27.885242879 +0000 UTC m=+386.775603982" observedRunningTime="2025-11-29 07:12:28.485809329 +0000 UTC m=+387.376170432" watchObservedRunningTime="2025-11-29 07:12:28.487414651 +0000 UTC m=+387.377775754" Nov 29 07:12:29 crc kubenswrapper[4731]: I1129 07:12:29.478335 4731 generic.go:334] "Generic (PLEG): container finished" podID="8eb37715-795e-4a3a-89c3-caa27a8ad0fc" containerID="cb06a1a4fa45a9fac09c4a9cf4ab4d32a913a23a10acddf4238d5c92238413ee" exitCode=0 Nov 29 07:12:29 crc kubenswrapper[4731]: I1129 07:12:29.478433 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qj69f" event={"ID":"8eb37715-795e-4a3a-89c3-caa27a8ad0fc","Type":"ContainerDied","Data":"cb06a1a4fa45a9fac09c4a9cf4ab4d32a913a23a10acddf4238d5c92238413ee"} Nov 29 07:12:30 crc kubenswrapper[4731]: I1129 07:12:30.494465 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qj69f" 
event={"ID":"8eb37715-795e-4a3a-89c3-caa27a8ad0fc","Type":"ContainerStarted","Data":"198f8de367599b33684c4b53eb5d2a4a1721f823ff52da045da08562c4580806"} Nov 29 07:12:30 crc kubenswrapper[4731]: I1129 07:12:30.519626 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qj69f" podStartSLOduration=3.030006501 podStartE2EDuration="5.519610468s" podCreationTimestamp="2025-11-29 07:12:25 +0000 UTC" firstStartedPulling="2025-11-29 07:12:27.441459227 +0000 UTC m=+386.331820330" lastFinishedPulling="2025-11-29 07:12:29.931063194 +0000 UTC m=+388.821424297" observedRunningTime="2025-11-29 07:12:30.517870302 +0000 UTC m=+389.408231405" watchObservedRunningTime="2025-11-29 07:12:30.519610468 +0000 UTC m=+389.409971571" Nov 29 07:12:33 crc kubenswrapper[4731]: I1129 07:12:33.002938 4731 patch_prober.go:28] interesting pod/machine-config-daemon-rscr8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:12:33 crc kubenswrapper[4731]: I1129 07:12:33.003620 4731 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:12:33 crc kubenswrapper[4731]: I1129 07:12:33.003699 4731 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" Nov 29 07:12:33 crc kubenswrapper[4731]: I1129 07:12:33.004692 4731 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"ca99db39a60fe421bcd1cc3436c5d0f329f6d5a18c512d839a8790b1dc8cf430"} pod="openshift-machine-config-operator/machine-config-daemon-rscr8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 29 07:12:33 crc kubenswrapper[4731]: I1129 07:12:33.004784 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" containerName="machine-config-daemon" containerID="cri-o://ca99db39a60fe421bcd1cc3436c5d0f329f6d5a18c512d839a8790b1dc8cf430" gracePeriod=600 Nov 29 07:12:33 crc kubenswrapper[4731]: I1129 07:12:33.517533 4731 generic.go:334] "Generic (PLEG): container finished" podID="2302dbb7-38db-4752-a5d0-2d055da3aec3" containerID="ca99db39a60fe421bcd1cc3436c5d0f329f6d5a18c512d839a8790b1dc8cf430" exitCode=0 Nov 29 07:12:33 crc kubenswrapper[4731]: I1129 07:12:33.517629 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" event={"ID":"2302dbb7-38db-4752-a5d0-2d055da3aec3","Type":"ContainerDied","Data":"ca99db39a60fe421bcd1cc3436c5d0f329f6d5a18c512d839a8790b1dc8cf430"} Nov 29 07:12:33 crc kubenswrapper[4731]: I1129 07:12:33.518550 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" event={"ID":"2302dbb7-38db-4752-a5d0-2d055da3aec3","Type":"ContainerStarted","Data":"fabc326abb67dfad70071a4d4d3b7bda47a1d8464435cc73fe9ab0fd38194477"} Nov 29 07:12:33 crc kubenswrapper[4731]: I1129 07:12:33.518615 4731 scope.go:117] "RemoveContainer" containerID="c2ffc93896b04d748f6dfda932202450e20bc7b298cc0d79c0c6499ead481d8c" Nov 29 07:12:33 crc kubenswrapper[4731]: I1129 07:12:33.522759 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-shxb6" 
event={"ID":"406d98ce-84d9-4d8a-8567-9b82123cf323","Type":"ContainerStarted","Data":"ba747959269a7c4ea43b793685c2c1691be8d066a59181d9f77d2aab6ccc6a33"} Nov 29 07:12:33 crc kubenswrapper[4731]: I1129 07:12:33.731299 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fdtsp" Nov 29 07:12:33 crc kubenswrapper[4731]: I1129 07:12:33.731880 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fdtsp" Nov 29 07:12:33 crc kubenswrapper[4731]: I1129 07:12:33.781682 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fdtsp" Nov 29 07:12:33 crc kubenswrapper[4731]: I1129 07:12:33.897527 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-t7tn5" Nov 29 07:12:33 crc kubenswrapper[4731]: I1129 07:12:33.897599 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-t7tn5" Nov 29 07:12:33 crc kubenswrapper[4731]: I1129 07:12:33.944113 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-t7tn5" Nov 29 07:12:34 crc kubenswrapper[4731]: I1129 07:12:34.533539 4731 generic.go:334] "Generic (PLEG): container finished" podID="406d98ce-84d9-4d8a-8567-9b82123cf323" containerID="ba747959269a7c4ea43b793685c2c1691be8d066a59181d9f77d2aab6ccc6a33" exitCode=0 Nov 29 07:12:34 crc kubenswrapper[4731]: I1129 07:12:34.533656 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-shxb6" event={"ID":"406d98ce-84d9-4d8a-8567-9b82123cf323","Type":"ContainerDied","Data":"ba747959269a7c4ea43b793685c2c1691be8d066a59181d9f77d2aab6ccc6a33"} Nov 29 07:12:34 crc kubenswrapper[4731]: I1129 07:12:34.588919 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-operators-t7tn5" Nov 29 07:12:34 crc kubenswrapper[4731]: I1129 07:12:34.592051 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fdtsp" Nov 29 07:12:35 crc kubenswrapper[4731]: I1129 07:12:35.547742 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-shxb6" event={"ID":"406d98ce-84d9-4d8a-8567-9b82123cf323","Type":"ContainerStarted","Data":"9432ae2c19488d8d99c445ab27d505c1cd8f0680ce71fa51d4b9315dffd722c1"} Nov 29 07:12:35 crc kubenswrapper[4731]: I1129 07:12:35.569550 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-shxb6" podStartSLOduration=2.884195631 podStartE2EDuration="10.5695295s" podCreationTimestamp="2025-11-29 07:12:25 +0000 UTC" firstStartedPulling="2025-11-29 07:12:27.432598701 +0000 UTC m=+386.322959804" lastFinishedPulling="2025-11-29 07:12:35.11793257 +0000 UTC m=+394.008293673" observedRunningTime="2025-11-29 07:12:35.565717638 +0000 UTC m=+394.456078741" watchObservedRunningTime="2025-11-29 07:12:35.5695295 +0000 UTC m=+394.459890603" Nov 29 07:12:36 crc kubenswrapper[4731]: I1129 07:12:36.098619 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-shxb6" Nov 29 07:12:36 crc kubenswrapper[4731]: I1129 07:12:36.098677 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-shxb6" Nov 29 07:12:36 crc kubenswrapper[4731]: I1129 07:12:36.392359 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qj69f" Nov 29 07:12:36 crc kubenswrapper[4731]: I1129 07:12:36.393021 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qj69f" Nov 29 07:12:36 crc kubenswrapper[4731]: I1129 
07:12:36.443545 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qj69f" Nov 29 07:12:36 crc kubenswrapper[4731]: I1129 07:12:36.596099 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qj69f" Nov 29 07:12:37 crc kubenswrapper[4731]: I1129 07:12:37.145429 4731 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-shxb6" podUID="406d98ce-84d9-4d8a-8567-9b82123cf323" containerName="registry-server" probeResult="failure" output=< Nov 29 07:12:37 crc kubenswrapper[4731]: timeout: failed to connect service ":50051" within 1s Nov 29 07:12:37 crc kubenswrapper[4731]: > Nov 29 07:12:38 crc kubenswrapper[4731]: I1129 07:12:38.753664 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-c68ls" Nov 29 07:12:38 crc kubenswrapper[4731]: I1129 07:12:38.817627 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-8nrfn"] Nov 29 07:12:39 crc kubenswrapper[4731]: I1129 07:12:39.879755 4731 patch_prober.go:28] interesting pod/router-default-5444994796-2qd7z container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 29 07:12:39 crc kubenswrapper[4731]: I1129 07:12:39.879824 4731 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-2qd7z" podUID="328a2fcf-7e85-49ad-849c-f32818b5cd87" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 29 07:12:46 crc kubenswrapper[4731]: I1129 07:12:46.143912 4731 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-marketplace/community-operators-shxb6" Nov 29 07:12:46 crc kubenswrapper[4731]: I1129 07:12:46.193741 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-shxb6" Nov 29 07:13:03 crc kubenswrapper[4731]: I1129 07:13:03.864630 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-8nrfn" podUID="cf2cdf59-237b-432e-9e41-c37078755275" containerName="registry" containerID="cri-o://ad68e1aa243fe72d5a2cb36df2b62a0914e3c38c70932d242476a9e6e895cc47" gracePeriod=30 Nov 29 07:13:04 crc kubenswrapper[4731]: I1129 07:13:04.259326 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-8nrfn" Nov 29 07:13:04 crc kubenswrapper[4731]: I1129 07:13:04.291698 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cf2cdf59-237b-432e-9e41-c37078755275-trusted-ca\") pod \"cf2cdf59-237b-432e-9e41-c37078755275\" (UID: \"cf2cdf59-237b-432e-9e41-c37078755275\") " Nov 29 07:13:04 crc kubenswrapper[4731]: I1129 07:13:04.291859 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/cf2cdf59-237b-432e-9e41-c37078755275-ca-trust-extracted\") pod \"cf2cdf59-237b-432e-9e41-c37078755275\" (UID: \"cf2cdf59-237b-432e-9e41-c37078755275\") " Nov 29 07:13:04 crc kubenswrapper[4731]: I1129 07:13:04.292033 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"cf2cdf59-237b-432e-9e41-c37078755275\" (UID: \"cf2cdf59-237b-432e-9e41-c37078755275\") " Nov 29 07:13:04 crc kubenswrapper[4731]: I1129 
07:13:04.292072 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/cf2cdf59-237b-432e-9e41-c37078755275-registry-certificates\") pod \"cf2cdf59-237b-432e-9e41-c37078755275\" (UID: \"cf2cdf59-237b-432e-9e41-c37078755275\") " Nov 29 07:13:04 crc kubenswrapper[4731]: I1129 07:13:04.292099 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/cf2cdf59-237b-432e-9e41-c37078755275-registry-tls\") pod \"cf2cdf59-237b-432e-9e41-c37078755275\" (UID: \"cf2cdf59-237b-432e-9e41-c37078755275\") " Nov 29 07:13:04 crc kubenswrapper[4731]: I1129 07:13:04.292123 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/cf2cdf59-237b-432e-9e41-c37078755275-installation-pull-secrets\") pod \"cf2cdf59-237b-432e-9e41-c37078755275\" (UID: \"cf2cdf59-237b-432e-9e41-c37078755275\") " Nov 29 07:13:04 crc kubenswrapper[4731]: I1129 07:13:04.292201 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cf2cdf59-237b-432e-9e41-c37078755275-bound-sa-token\") pod \"cf2cdf59-237b-432e-9e41-c37078755275\" (UID: \"cf2cdf59-237b-432e-9e41-c37078755275\") " Nov 29 07:13:04 crc kubenswrapper[4731]: I1129 07:13:04.292226 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qjbw6\" (UniqueName: \"kubernetes.io/projected/cf2cdf59-237b-432e-9e41-c37078755275-kube-api-access-qjbw6\") pod \"cf2cdf59-237b-432e-9e41-c37078755275\" (UID: \"cf2cdf59-237b-432e-9e41-c37078755275\") " Nov 29 07:13:04 crc kubenswrapper[4731]: I1129 07:13:04.292985 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf2cdf59-237b-432e-9e41-c37078755275-trusted-ca" 
(OuterVolumeSpecName: "trusted-ca") pod "cf2cdf59-237b-432e-9e41-c37078755275" (UID: "cf2cdf59-237b-432e-9e41-c37078755275"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:13:04 crc kubenswrapper[4731]: I1129 07:13:04.293710 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf2cdf59-237b-432e-9e41-c37078755275-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "cf2cdf59-237b-432e-9e41-c37078755275" (UID: "cf2cdf59-237b-432e-9e41-c37078755275"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:13:04 crc kubenswrapper[4731]: I1129 07:13:04.300642 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf2cdf59-237b-432e-9e41-c37078755275-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "cf2cdf59-237b-432e-9e41-c37078755275" (UID: "cf2cdf59-237b-432e-9e41-c37078755275"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:13:04 crc kubenswrapper[4731]: I1129 07:13:04.300822 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf2cdf59-237b-432e-9e41-c37078755275-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "cf2cdf59-237b-432e-9e41-c37078755275" (UID: "cf2cdf59-237b-432e-9e41-c37078755275"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:13:04 crc kubenswrapper[4731]: I1129 07:13:04.302980 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf2cdf59-237b-432e-9e41-c37078755275-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "cf2cdf59-237b-432e-9e41-c37078755275" (UID: "cf2cdf59-237b-432e-9e41-c37078755275"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:13:04 crc kubenswrapper[4731]: I1129 07:13:04.303311 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "cf2cdf59-237b-432e-9e41-c37078755275" (UID: "cf2cdf59-237b-432e-9e41-c37078755275"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 29 07:13:04 crc kubenswrapper[4731]: I1129 07:13:04.305517 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf2cdf59-237b-432e-9e41-c37078755275-kube-api-access-qjbw6" (OuterVolumeSpecName: "kube-api-access-qjbw6") pod "cf2cdf59-237b-432e-9e41-c37078755275" (UID: "cf2cdf59-237b-432e-9e41-c37078755275"). InnerVolumeSpecName "kube-api-access-qjbw6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:13:04 crc kubenswrapper[4731]: I1129 07:13:04.313232 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf2cdf59-237b-432e-9e41-c37078755275-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "cf2cdf59-237b-432e-9e41-c37078755275" (UID: "cf2cdf59-237b-432e-9e41-c37078755275"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:13:04 crc kubenswrapper[4731]: I1129 07:13:04.394098 4731 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/cf2cdf59-237b-432e-9e41-c37078755275-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Nov 29 07:13:04 crc kubenswrapper[4731]: I1129 07:13:04.394203 4731 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/cf2cdf59-237b-432e-9e41-c37078755275-registry-certificates\") on node \"crc\" DevicePath \"\"" Nov 29 07:13:04 crc kubenswrapper[4731]: I1129 07:13:04.394221 4731 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/cf2cdf59-237b-432e-9e41-c37078755275-registry-tls\") on node \"crc\" DevicePath \"\"" Nov 29 07:13:04 crc kubenswrapper[4731]: I1129 07:13:04.394232 4731 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/cf2cdf59-237b-432e-9e41-c37078755275-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Nov 29 07:13:04 crc kubenswrapper[4731]: I1129 07:13:04.394241 4731 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cf2cdf59-237b-432e-9e41-c37078755275-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 29 07:13:04 crc kubenswrapper[4731]: I1129 07:13:04.394250 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qjbw6\" (UniqueName: \"kubernetes.io/projected/cf2cdf59-237b-432e-9e41-c37078755275-kube-api-access-qjbw6\") on node \"crc\" DevicePath \"\"" Nov 29 07:13:04 crc kubenswrapper[4731]: I1129 07:13:04.394258 4731 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cf2cdf59-237b-432e-9e41-c37078755275-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 29 07:13:04 crc 
kubenswrapper[4731]: I1129 07:13:04.728667 4731 generic.go:334] "Generic (PLEG): container finished" podID="cf2cdf59-237b-432e-9e41-c37078755275" containerID="ad68e1aa243fe72d5a2cb36df2b62a0914e3c38c70932d242476a9e6e895cc47" exitCode=0 Nov 29 07:13:04 crc kubenswrapper[4731]: I1129 07:13:04.728743 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-8nrfn" event={"ID":"cf2cdf59-237b-432e-9e41-c37078755275","Type":"ContainerDied","Data":"ad68e1aa243fe72d5a2cb36df2b62a0914e3c38c70932d242476a9e6e895cc47"} Nov 29 07:13:04 crc kubenswrapper[4731]: I1129 07:13:04.728809 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-8nrfn" Nov 29 07:13:04 crc kubenswrapper[4731]: I1129 07:13:04.728860 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-8nrfn" event={"ID":"cf2cdf59-237b-432e-9e41-c37078755275","Type":"ContainerDied","Data":"8ed3b324caea16a1a377db600bac8964c59e33d3619e72944d2329524a2e2e6a"} Nov 29 07:13:04 crc kubenswrapper[4731]: I1129 07:13:04.728889 4731 scope.go:117] "RemoveContainer" containerID="ad68e1aa243fe72d5a2cb36df2b62a0914e3c38c70932d242476a9e6e895cc47" Nov 29 07:13:04 crc kubenswrapper[4731]: I1129 07:13:04.752730 4731 scope.go:117] "RemoveContainer" containerID="ad68e1aa243fe72d5a2cb36df2b62a0914e3c38c70932d242476a9e6e895cc47" Nov 29 07:13:04 crc kubenswrapper[4731]: E1129 07:13:04.754148 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad68e1aa243fe72d5a2cb36df2b62a0914e3c38c70932d242476a9e6e895cc47\": container with ID starting with ad68e1aa243fe72d5a2cb36df2b62a0914e3c38c70932d242476a9e6e895cc47 not found: ID does not exist" containerID="ad68e1aa243fe72d5a2cb36df2b62a0914e3c38c70932d242476a9e6e895cc47" Nov 29 07:13:04 crc kubenswrapper[4731]: I1129 07:13:04.754270 4731 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad68e1aa243fe72d5a2cb36df2b62a0914e3c38c70932d242476a9e6e895cc47"} err="failed to get container status \"ad68e1aa243fe72d5a2cb36df2b62a0914e3c38c70932d242476a9e6e895cc47\": rpc error: code = NotFound desc = could not find container \"ad68e1aa243fe72d5a2cb36df2b62a0914e3c38c70932d242476a9e6e895cc47\": container with ID starting with ad68e1aa243fe72d5a2cb36df2b62a0914e3c38c70932d242476a9e6e895cc47 not found: ID does not exist" Nov 29 07:13:04 crc kubenswrapper[4731]: I1129 07:13:04.764126 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-8nrfn"] Nov 29 07:13:04 crc kubenswrapper[4731]: I1129 07:13:04.772909 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-8nrfn"] Nov 29 07:13:05 crc kubenswrapper[4731]: I1129 07:13:05.817515 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf2cdf59-237b-432e-9e41-c37078755275" path="/var/lib/kubelet/pods/cf2cdf59-237b-432e-9e41-c37078755275/volumes" Nov 29 07:14:33 crc kubenswrapper[4731]: I1129 07:14:33.002786 4731 patch_prober.go:28] interesting pod/machine-config-daemon-rscr8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:14:33 crc kubenswrapper[4731]: I1129 07:14:33.005070 4731 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:15:00 crc kubenswrapper[4731]: I1129 07:15:00.197041 4731 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29406675-lnkxq"] Nov 29 07:15:00 crc kubenswrapper[4731]: E1129 07:15:00.198250 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf2cdf59-237b-432e-9e41-c37078755275" containerName="registry" Nov 29 07:15:00 crc kubenswrapper[4731]: I1129 07:15:00.198269 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf2cdf59-237b-432e-9e41-c37078755275" containerName="registry" Nov 29 07:15:00 crc kubenswrapper[4731]: I1129 07:15:00.198389 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf2cdf59-237b-432e-9e41-c37078755275" containerName="registry" Nov 29 07:15:00 crc kubenswrapper[4731]: I1129 07:15:00.199026 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406675-lnkxq" Nov 29 07:15:00 crc kubenswrapper[4731]: I1129 07:15:00.201710 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 29 07:15:00 crc kubenswrapper[4731]: I1129 07:15:00.205175 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406675-lnkxq"] Nov 29 07:15:00 crc kubenswrapper[4731]: I1129 07:15:00.206219 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 29 07:15:00 crc kubenswrapper[4731]: I1129 07:15:00.361174 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8b2zt\" (UniqueName: \"kubernetes.io/projected/6d3389ba-bb37-48d3-b029-f6e492b6152a-kube-api-access-8b2zt\") pod \"collect-profiles-29406675-lnkxq\" (UID: \"6d3389ba-bb37-48d3-b029-f6e492b6152a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406675-lnkxq" Nov 29 07:15:00 crc kubenswrapper[4731]: I1129 07:15:00.361299 4731 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6d3389ba-bb37-48d3-b029-f6e492b6152a-secret-volume\") pod \"collect-profiles-29406675-lnkxq\" (UID: \"6d3389ba-bb37-48d3-b029-f6e492b6152a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406675-lnkxq" Nov 29 07:15:00 crc kubenswrapper[4731]: I1129 07:15:00.361334 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6d3389ba-bb37-48d3-b029-f6e492b6152a-config-volume\") pod \"collect-profiles-29406675-lnkxq\" (UID: \"6d3389ba-bb37-48d3-b029-f6e492b6152a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406675-lnkxq" Nov 29 07:15:00 crc kubenswrapper[4731]: I1129 07:15:00.463107 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8b2zt\" (UniqueName: \"kubernetes.io/projected/6d3389ba-bb37-48d3-b029-f6e492b6152a-kube-api-access-8b2zt\") pod \"collect-profiles-29406675-lnkxq\" (UID: \"6d3389ba-bb37-48d3-b029-f6e492b6152a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406675-lnkxq" Nov 29 07:15:00 crc kubenswrapper[4731]: I1129 07:15:00.463200 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6d3389ba-bb37-48d3-b029-f6e492b6152a-secret-volume\") pod \"collect-profiles-29406675-lnkxq\" (UID: \"6d3389ba-bb37-48d3-b029-f6e492b6152a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406675-lnkxq" Nov 29 07:15:00 crc kubenswrapper[4731]: I1129 07:15:00.463228 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6d3389ba-bb37-48d3-b029-f6e492b6152a-config-volume\") pod \"collect-profiles-29406675-lnkxq\" (UID: 
\"6d3389ba-bb37-48d3-b029-f6e492b6152a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406675-lnkxq" Nov 29 07:15:00 crc kubenswrapper[4731]: I1129 07:15:00.464036 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6d3389ba-bb37-48d3-b029-f6e492b6152a-config-volume\") pod \"collect-profiles-29406675-lnkxq\" (UID: \"6d3389ba-bb37-48d3-b029-f6e492b6152a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406675-lnkxq" Nov 29 07:15:00 crc kubenswrapper[4731]: I1129 07:15:00.473673 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6d3389ba-bb37-48d3-b029-f6e492b6152a-secret-volume\") pod \"collect-profiles-29406675-lnkxq\" (UID: \"6d3389ba-bb37-48d3-b029-f6e492b6152a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406675-lnkxq" Nov 29 07:15:00 crc kubenswrapper[4731]: I1129 07:15:00.484099 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8b2zt\" (UniqueName: \"kubernetes.io/projected/6d3389ba-bb37-48d3-b029-f6e492b6152a-kube-api-access-8b2zt\") pod \"collect-profiles-29406675-lnkxq\" (UID: \"6d3389ba-bb37-48d3-b029-f6e492b6152a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406675-lnkxq" Nov 29 07:15:00 crc kubenswrapper[4731]: I1129 07:15:00.521170 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406675-lnkxq" Nov 29 07:15:00 crc kubenswrapper[4731]: I1129 07:15:00.712155 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406675-lnkxq"] Nov 29 07:15:01 crc kubenswrapper[4731]: I1129 07:15:01.467918 4731 generic.go:334] "Generic (PLEG): container finished" podID="6d3389ba-bb37-48d3-b029-f6e492b6152a" containerID="4f0acd0dd530dc72288b814e42cf3f3b431537d0f1e39a57371daf01b9dd95c8" exitCode=0 Nov 29 07:15:01 crc kubenswrapper[4731]: I1129 07:15:01.467976 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406675-lnkxq" event={"ID":"6d3389ba-bb37-48d3-b029-f6e492b6152a","Type":"ContainerDied","Data":"4f0acd0dd530dc72288b814e42cf3f3b431537d0f1e39a57371daf01b9dd95c8"} Nov 29 07:15:01 crc kubenswrapper[4731]: I1129 07:15:01.468016 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406675-lnkxq" event={"ID":"6d3389ba-bb37-48d3-b029-f6e492b6152a","Type":"ContainerStarted","Data":"6430a2c68d4136c604a122a479979a16d24cf1912fee14dce03bd2e8e3ad057a"} Nov 29 07:15:02 crc kubenswrapper[4731]: I1129 07:15:02.692465 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406675-lnkxq" Nov 29 07:15:02 crc kubenswrapper[4731]: I1129 07:15:02.797186 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6d3389ba-bb37-48d3-b029-f6e492b6152a-secret-volume\") pod \"6d3389ba-bb37-48d3-b029-f6e492b6152a\" (UID: \"6d3389ba-bb37-48d3-b029-f6e492b6152a\") " Nov 29 07:15:02 crc kubenswrapper[4731]: I1129 07:15:02.797267 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6d3389ba-bb37-48d3-b029-f6e492b6152a-config-volume\") pod \"6d3389ba-bb37-48d3-b029-f6e492b6152a\" (UID: \"6d3389ba-bb37-48d3-b029-f6e492b6152a\") " Nov 29 07:15:02 crc kubenswrapper[4731]: I1129 07:15:02.797993 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d3389ba-bb37-48d3-b029-f6e492b6152a-config-volume" (OuterVolumeSpecName: "config-volume") pod "6d3389ba-bb37-48d3-b029-f6e492b6152a" (UID: "6d3389ba-bb37-48d3-b029-f6e492b6152a"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:15:02 crc kubenswrapper[4731]: I1129 07:15:02.798071 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8b2zt\" (UniqueName: \"kubernetes.io/projected/6d3389ba-bb37-48d3-b029-f6e492b6152a-kube-api-access-8b2zt\") pod \"6d3389ba-bb37-48d3-b029-f6e492b6152a\" (UID: \"6d3389ba-bb37-48d3-b029-f6e492b6152a\") " Nov 29 07:15:02 crc kubenswrapper[4731]: I1129 07:15:02.798319 4731 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6d3389ba-bb37-48d3-b029-f6e492b6152a-config-volume\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:02 crc kubenswrapper[4731]: I1129 07:15:02.803969 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d3389ba-bb37-48d3-b029-f6e492b6152a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "6d3389ba-bb37-48d3-b029-f6e492b6152a" (UID: "6d3389ba-bb37-48d3-b029-f6e492b6152a"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:15:02 crc kubenswrapper[4731]: I1129 07:15:02.804848 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d3389ba-bb37-48d3-b029-f6e492b6152a-kube-api-access-8b2zt" (OuterVolumeSpecName: "kube-api-access-8b2zt") pod "6d3389ba-bb37-48d3-b029-f6e492b6152a" (UID: "6d3389ba-bb37-48d3-b029-f6e492b6152a"). InnerVolumeSpecName "kube-api-access-8b2zt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:15:02 crc kubenswrapper[4731]: I1129 07:15:02.900200 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8b2zt\" (UniqueName: \"kubernetes.io/projected/6d3389ba-bb37-48d3-b029-f6e492b6152a-kube-api-access-8b2zt\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:02 crc kubenswrapper[4731]: I1129 07:15:02.900259 4731 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6d3389ba-bb37-48d3-b029-f6e492b6152a-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:03 crc kubenswrapper[4731]: I1129 07:15:03.003013 4731 patch_prober.go:28] interesting pod/machine-config-daemon-rscr8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:15:03 crc kubenswrapper[4731]: I1129 07:15:03.003102 4731 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:15:03 crc kubenswrapper[4731]: I1129 07:15:03.482096 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406675-lnkxq" event={"ID":"6d3389ba-bb37-48d3-b029-f6e492b6152a","Type":"ContainerDied","Data":"6430a2c68d4136c604a122a479979a16d24cf1912fee14dce03bd2e8e3ad057a"} Nov 29 07:15:03 crc kubenswrapper[4731]: I1129 07:15:03.482668 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6430a2c68d4136c604a122a479979a16d24cf1912fee14dce03bd2e8e3ad057a" Nov 29 07:15:03 crc kubenswrapper[4731]: I1129 07:15:03.482176 4731 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406675-lnkxq" Nov 29 07:15:33 crc kubenswrapper[4731]: I1129 07:15:33.002287 4731 patch_prober.go:28] interesting pod/machine-config-daemon-rscr8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:15:33 crc kubenswrapper[4731]: I1129 07:15:33.003021 4731 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:15:33 crc kubenswrapper[4731]: I1129 07:15:33.003081 4731 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" Nov 29 07:15:33 crc kubenswrapper[4731]: I1129 07:15:33.003786 4731 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fabc326abb67dfad70071a4d4d3b7bda47a1d8464435cc73fe9ab0fd38194477"} pod="openshift-machine-config-operator/machine-config-daemon-rscr8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 29 07:15:33 crc kubenswrapper[4731]: I1129 07:15:33.003860 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" containerName="machine-config-daemon" containerID="cri-o://fabc326abb67dfad70071a4d4d3b7bda47a1d8464435cc73fe9ab0fd38194477" gracePeriod=600 Nov 29 07:15:33 crc kubenswrapper[4731]: I1129 07:15:33.646917 4731 generic.go:334] 
"Generic (PLEG): container finished" podID="2302dbb7-38db-4752-a5d0-2d055da3aec3" containerID="fabc326abb67dfad70071a4d4d3b7bda47a1d8464435cc73fe9ab0fd38194477" exitCode=0 Nov 29 07:15:33 crc kubenswrapper[4731]: I1129 07:15:33.646993 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" event={"ID":"2302dbb7-38db-4752-a5d0-2d055da3aec3","Type":"ContainerDied","Data":"fabc326abb67dfad70071a4d4d3b7bda47a1d8464435cc73fe9ab0fd38194477"} Nov 29 07:15:33 crc kubenswrapper[4731]: I1129 07:15:33.647328 4731 scope.go:117] "RemoveContainer" containerID="ca99db39a60fe421bcd1cc3436c5d0f329f6d5a18c512d839a8790b1dc8cf430" Nov 29 07:15:34 crc kubenswrapper[4731]: I1129 07:15:34.657172 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" event={"ID":"2302dbb7-38db-4752-a5d0-2d055da3aec3","Type":"ContainerStarted","Data":"e832e039d354d93ddba7480e0f594057afe8bf56de6979a0d3b6a9d2c9d3121e"} Nov 29 07:18:03 crc kubenswrapper[4731]: I1129 07:18:03.003151 4731 patch_prober.go:28] interesting pod/machine-config-daemon-rscr8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:18:03 crc kubenswrapper[4731]: I1129 07:18:03.003821 4731 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:18:23 crc kubenswrapper[4731]: I1129 07:18:23.256054 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-xrhpg"] Nov 29 07:18:23 crc kubenswrapper[4731]: E1129 
07:18:23.264423 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d3389ba-bb37-48d3-b029-f6e492b6152a" containerName="collect-profiles" Nov 29 07:18:23 crc kubenswrapper[4731]: I1129 07:18:23.264502 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d3389ba-bb37-48d3-b029-f6e492b6152a" containerName="collect-profiles" Nov 29 07:18:23 crc kubenswrapper[4731]: I1129 07:18:23.265817 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d3389ba-bb37-48d3-b029-f6e492b6152a" containerName="collect-profiles" Nov 29 07:18:23 crc kubenswrapper[4731]: I1129 07:18:23.266842 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-xrhpg" Nov 29 07:18:23 crc kubenswrapper[4731]: I1129 07:18:23.276822 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Nov 29 07:18:23 crc kubenswrapper[4731]: I1129 07:18:23.277152 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Nov 29 07:18:23 crc kubenswrapper[4731]: I1129 07:18:23.283873 4731 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-pntnr" Nov 29 07:18:23 crc kubenswrapper[4731]: I1129 07:18:23.291687 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-5b446d88c5-jtr6t"] Nov 29 07:18:23 crc kubenswrapper[4731]: I1129 07:18:23.292751 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-jtr6t" Nov 29 07:18:23 crc kubenswrapper[4731]: I1129 07:18:23.296673 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-xrhpg"] Nov 29 07:18:23 crc kubenswrapper[4731]: I1129 07:18:23.304146 4731 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-stcs7" Nov 29 07:18:23 crc kubenswrapper[4731]: I1129 07:18:23.309656 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-jtr6t"] Nov 29 07:18:23 crc kubenswrapper[4731]: I1129 07:18:23.316786 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-bkf2j"] Nov 29 07:18:23 crc kubenswrapper[4731]: I1129 07:18:23.317974 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-bkf2j" Nov 29 07:18:23 crc kubenswrapper[4731]: I1129 07:18:23.320173 4731 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-kmfw4" Nov 29 07:18:23 crc kubenswrapper[4731]: I1129 07:18:23.326768 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-bkf2j"] Nov 29 07:18:23 crc kubenswrapper[4731]: I1129 07:18:23.386223 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-797qj\" (UniqueName: \"kubernetes.io/projected/9454756e-d310-48a9-9617-2469139ec742-kube-api-access-797qj\") pod \"cert-manager-cainjector-7f985d654d-xrhpg\" (UID: \"9454756e-d310-48a9-9617-2469139ec742\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-xrhpg" Nov 29 07:18:23 crc kubenswrapper[4731]: I1129 07:18:23.386768 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2lfw\" (UniqueName: 
\"kubernetes.io/projected/46df25da-69e6-4ab6-b887-62892deeacfb-kube-api-access-j2lfw\") pod \"cert-manager-5b446d88c5-jtr6t\" (UID: \"46df25da-69e6-4ab6-b887-62892deeacfb\") " pod="cert-manager/cert-manager-5b446d88c5-jtr6t" Nov 29 07:18:23 crc kubenswrapper[4731]: I1129 07:18:23.387160 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xw52z\" (UniqueName: \"kubernetes.io/projected/12a1a3de-47c9-481d-940e-02d320ee23f9-kube-api-access-xw52z\") pod \"cert-manager-webhook-5655c58dd6-bkf2j\" (UID: \"12a1a3de-47c9-481d-940e-02d320ee23f9\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-bkf2j" Nov 29 07:18:23 crc kubenswrapper[4731]: I1129 07:18:23.489848 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xw52z\" (UniqueName: \"kubernetes.io/projected/12a1a3de-47c9-481d-940e-02d320ee23f9-kube-api-access-xw52z\") pod \"cert-manager-webhook-5655c58dd6-bkf2j\" (UID: \"12a1a3de-47c9-481d-940e-02d320ee23f9\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-bkf2j" Nov 29 07:18:23 crc kubenswrapper[4731]: I1129 07:18:23.490135 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-797qj\" (UniqueName: \"kubernetes.io/projected/9454756e-d310-48a9-9617-2469139ec742-kube-api-access-797qj\") pod \"cert-manager-cainjector-7f985d654d-xrhpg\" (UID: \"9454756e-d310-48a9-9617-2469139ec742\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-xrhpg" Nov 29 07:18:23 crc kubenswrapper[4731]: I1129 07:18:23.490185 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2lfw\" (UniqueName: \"kubernetes.io/projected/46df25da-69e6-4ab6-b887-62892deeacfb-kube-api-access-j2lfw\") pod \"cert-manager-5b446d88c5-jtr6t\" (UID: \"46df25da-69e6-4ab6-b887-62892deeacfb\") " pod="cert-manager/cert-manager-5b446d88c5-jtr6t" Nov 29 07:18:23 crc kubenswrapper[4731]: I1129 07:18:23.512097 4731 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xw52z\" (UniqueName: \"kubernetes.io/projected/12a1a3de-47c9-481d-940e-02d320ee23f9-kube-api-access-xw52z\") pod \"cert-manager-webhook-5655c58dd6-bkf2j\" (UID: \"12a1a3de-47c9-481d-940e-02d320ee23f9\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-bkf2j" Nov 29 07:18:23 crc kubenswrapper[4731]: I1129 07:18:23.512111 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-797qj\" (UniqueName: \"kubernetes.io/projected/9454756e-d310-48a9-9617-2469139ec742-kube-api-access-797qj\") pod \"cert-manager-cainjector-7f985d654d-xrhpg\" (UID: \"9454756e-d310-48a9-9617-2469139ec742\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-xrhpg" Nov 29 07:18:23 crc kubenswrapper[4731]: I1129 07:18:23.512799 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2lfw\" (UniqueName: \"kubernetes.io/projected/46df25da-69e6-4ab6-b887-62892deeacfb-kube-api-access-j2lfw\") pod \"cert-manager-5b446d88c5-jtr6t\" (UID: \"46df25da-69e6-4ab6-b887-62892deeacfb\") " pod="cert-manager/cert-manager-5b446d88c5-jtr6t" Nov 29 07:18:23 crc kubenswrapper[4731]: I1129 07:18:23.625510 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-xrhpg" Nov 29 07:18:23 crc kubenswrapper[4731]: I1129 07:18:23.650245 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-jtr6t" Nov 29 07:18:23 crc kubenswrapper[4731]: I1129 07:18:23.666883 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-bkf2j" Nov 29 07:18:23 crc kubenswrapper[4731]: I1129 07:18:23.881177 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-xrhpg"] Nov 29 07:18:23 crc kubenswrapper[4731]: I1129 07:18:23.901859 4731 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 29 07:18:23 crc kubenswrapper[4731]: I1129 07:18:23.921619 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-jtr6t"] Nov 29 07:18:23 crc kubenswrapper[4731]: I1129 07:18:23.965133 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-bkf2j"] Nov 29 07:18:23 crc kubenswrapper[4731]: W1129 07:18:23.970464 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod12a1a3de_47c9_481d_940e_02d320ee23f9.slice/crio-c4238b4172c2d9131977c149d9ebf3abd16ceb7ef4228c72e068757a8bdf067a WatchSource:0}: Error finding container c4238b4172c2d9131977c149d9ebf3abd16ceb7ef4228c72e068757a8bdf067a: Status 404 returned error can't find the container with id c4238b4172c2d9131977c149d9ebf3abd16ceb7ef4228c72e068757a8bdf067a Nov 29 07:18:24 crc kubenswrapper[4731]: I1129 07:18:24.683209 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-xrhpg" event={"ID":"9454756e-d310-48a9-9617-2469139ec742","Type":"ContainerStarted","Data":"cda1cadf8bdbff918a9068bd5c6028101590dead9ea4b86b6adef2afd84a9d4f"} Nov 29 07:18:24 crc kubenswrapper[4731]: I1129 07:18:24.686432 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-bkf2j" event={"ID":"12a1a3de-47c9-481d-940e-02d320ee23f9","Type":"ContainerStarted","Data":"c4238b4172c2d9131977c149d9ebf3abd16ceb7ef4228c72e068757a8bdf067a"} Nov 29 07:18:24 crc 
kubenswrapper[4731]: I1129 07:18:24.688595 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-jtr6t" event={"ID":"46df25da-69e6-4ab6-b887-62892deeacfb","Type":"ContainerStarted","Data":"078927ae765cc342427528b260b226dd6eec5bb5befb7e2497645ab3c42d904f"} Nov 29 07:18:29 crc kubenswrapper[4731]: I1129 07:18:29.723023 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-bkf2j" event={"ID":"12a1a3de-47c9-481d-940e-02d320ee23f9","Type":"ContainerStarted","Data":"329c1cf3fb30522992fbf7e2f204aa27a76cd3f7dc008250f6ea6d635fcb7f76"} Nov 29 07:18:29 crc kubenswrapper[4731]: I1129 07:18:29.723693 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-5655c58dd6-bkf2j" Nov 29 07:18:29 crc kubenswrapper[4731]: I1129 07:18:29.725257 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-jtr6t" event={"ID":"46df25da-69e6-4ab6-b887-62892deeacfb","Type":"ContainerStarted","Data":"e2cf1f2ad832a562348d54f52439f69ceed2bdbbff9c53253cdae75893e2a513"} Nov 29 07:18:29 crc kubenswrapper[4731]: I1129 07:18:29.726979 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-xrhpg" event={"ID":"9454756e-d310-48a9-9617-2469139ec742","Type":"ContainerStarted","Data":"7017eb6f216c7c36fbfa2456216d976e454703dd38f6bc24f542c5a629101c75"} Nov 29 07:18:29 crc kubenswrapper[4731]: I1129 07:18:29.745004 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-5655c58dd6-bkf2j" podStartSLOduration=2.138508429 podStartE2EDuration="6.744981881s" podCreationTimestamp="2025-11-29 07:18:23 +0000 UTC" firstStartedPulling="2025-11-29 07:18:23.97572944 +0000 UTC m=+742.866090543" lastFinishedPulling="2025-11-29 07:18:28.582202892 +0000 UTC m=+747.472563995" observedRunningTime="2025-11-29 07:18:29.743832867 +0000 UTC 
m=+748.634193980" watchObservedRunningTime="2025-11-29 07:18:29.744981881 +0000 UTC m=+748.635342984" Nov 29 07:18:29 crc kubenswrapper[4731]: I1129 07:18:29.760213 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7f985d654d-xrhpg" podStartSLOduration=2.136226642 podStartE2EDuration="6.760184475s" podCreationTimestamp="2025-11-29 07:18:23 +0000 UTC" firstStartedPulling="2025-11-29 07:18:23.900438102 +0000 UTC m=+742.790799205" lastFinishedPulling="2025-11-29 07:18:28.524395935 +0000 UTC m=+747.414757038" observedRunningTime="2025-11-29 07:18:29.759212856 +0000 UTC m=+748.649573959" watchObservedRunningTime="2025-11-29 07:18:29.760184475 +0000 UTC m=+748.650545598" Nov 29 07:18:29 crc kubenswrapper[4731]: I1129 07:18:29.779060 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-5b446d88c5-jtr6t" podStartSLOduration=2.183602125 podStartE2EDuration="6.779029705s" podCreationTimestamp="2025-11-29 07:18:23 +0000 UTC" firstStartedPulling="2025-11-29 07:18:23.929462959 +0000 UTC m=+742.819824062" lastFinishedPulling="2025-11-29 07:18:28.524890539 +0000 UTC m=+747.415251642" observedRunningTime="2025-11-29 07:18:29.774800611 +0000 UTC m=+748.665161744" watchObservedRunningTime="2025-11-29 07:18:29.779029705 +0000 UTC m=+748.669390828" Nov 29 07:18:33 crc kubenswrapper[4731]: I1129 07:18:33.002602 4731 patch_prober.go:28] interesting pod/machine-config-daemon-rscr8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:18:33 crc kubenswrapper[4731]: I1129 07:18:33.002673 4731 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:18:33 crc kubenswrapper[4731]: I1129 07:18:33.671461 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-5655c58dd6-bkf2j" Nov 29 07:18:33 crc kubenswrapper[4731]: I1129 07:18:33.861602 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-x4t5j"] Nov 29 07:18:33 crc kubenswrapper[4731]: I1129 07:18:33.862093 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" podUID="7d4585c4-ac4a-4268-b25e-47509c17cfe2" containerName="ovn-controller" containerID="cri-o://6fd71ed13728cbf7ab7ade3184cf8b579a598046c6cccdd7796f757aa7fc1c0a" gracePeriod=30 Nov 29 07:18:33 crc kubenswrapper[4731]: I1129 07:18:33.862187 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" podUID="7d4585c4-ac4a-4268-b25e-47509c17cfe2" containerName="nbdb" containerID="cri-o://64a3bcebacca0fb7288976faa2ee468f1496339cc6deba0ce36b29d0537d493f" gracePeriod=30 Nov 29 07:18:33 crc kubenswrapper[4731]: I1129 07:18:33.862270 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" podUID="7d4585c4-ac4a-4268-b25e-47509c17cfe2" containerName="kube-rbac-proxy-node" containerID="cri-o://2c34e10091af35202da4f97804418c6232aaccd75303ec402ed3f52de19eba30" gracePeriod=30 Nov 29 07:18:33 crc kubenswrapper[4731]: I1129 07:18:33.862310 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" podUID="7d4585c4-ac4a-4268-b25e-47509c17cfe2" containerName="ovn-acl-logging" containerID="cri-o://c7ec6b9ad02e848a0669ac486da3f2ccc8ca363136fcd40618c45f16e64388bc" gracePeriod=30 Nov 29 07:18:33 crc kubenswrapper[4731]: I1129 
07:18:33.862307 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" podUID="7d4585c4-ac4a-4268-b25e-47509c17cfe2" containerName="northd" containerID="cri-o://37e2099f7c7618e48c7ca7d3a76e9e63e4af75068a72efb7b27b3565c270787f" gracePeriod=30 Nov 29 07:18:33 crc kubenswrapper[4731]: I1129 07:18:33.862285 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" podUID="7d4585c4-ac4a-4268-b25e-47509c17cfe2" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://77f2885a107dcb4bca4dd71970f28cd5c82abd26240a9b47a25bab79d7a6cfca" gracePeriod=30 Nov 29 07:18:33 crc kubenswrapper[4731]: I1129 07:18:33.862359 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" podUID="7d4585c4-ac4a-4268-b25e-47509c17cfe2" containerName="sbdb" containerID="cri-o://9a6b60ccbba0152ec034f29ebd79fc5f35ee869bcbec1983425f40df246c463c" gracePeriod=30 Nov 29 07:18:33 crc kubenswrapper[4731]: I1129 07:18:33.907715 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" podUID="7d4585c4-ac4a-4268-b25e-47509c17cfe2" containerName="ovnkube-controller" containerID="cri-o://7e89c5211fc1a0b3e3a69a2b868401a20a643f1f5797275d756d27ee967e8691" gracePeriod=30 Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.214640 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x4t5j_7d4585c4-ac4a-4268-b25e-47509c17cfe2/ovnkube-controller/3.log" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.217319 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x4t5j_7d4585c4-ac4a-4268-b25e-47509c17cfe2/ovn-acl-logging/0.log" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.217982 4731 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x4t5j_7d4585c4-ac4a-4268-b25e-47509c17cfe2/ovn-controller/0.log" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.218596 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.277149 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-t8ngc"] Nov 29 07:18:34 crc kubenswrapper[4731]: E1129 07:18:34.277504 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d4585c4-ac4a-4268-b25e-47509c17cfe2" containerName="northd" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.277544 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d4585c4-ac4a-4268-b25e-47509c17cfe2" containerName="northd" Nov 29 07:18:34 crc kubenswrapper[4731]: E1129 07:18:34.277558 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d4585c4-ac4a-4268-b25e-47509c17cfe2" containerName="ovnkube-controller" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.277596 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d4585c4-ac4a-4268-b25e-47509c17cfe2" containerName="ovnkube-controller" Nov 29 07:18:34 crc kubenswrapper[4731]: E1129 07:18:34.277606 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d4585c4-ac4a-4268-b25e-47509c17cfe2" containerName="ovnkube-controller" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.277612 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d4585c4-ac4a-4268-b25e-47509c17cfe2" containerName="ovnkube-controller" Nov 29 07:18:34 crc kubenswrapper[4731]: E1129 07:18:34.277626 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d4585c4-ac4a-4268-b25e-47509c17cfe2" containerName="nbdb" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.277632 4731 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="7d4585c4-ac4a-4268-b25e-47509c17cfe2" containerName="nbdb" Nov 29 07:18:34 crc kubenswrapper[4731]: E1129 07:18:34.277647 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d4585c4-ac4a-4268-b25e-47509c17cfe2" containerName="ovn-controller" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.277675 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d4585c4-ac4a-4268-b25e-47509c17cfe2" containerName="ovn-controller" Nov 29 07:18:34 crc kubenswrapper[4731]: E1129 07:18:34.277686 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d4585c4-ac4a-4268-b25e-47509c17cfe2" containerName="ovn-acl-logging" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.277694 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d4585c4-ac4a-4268-b25e-47509c17cfe2" containerName="ovn-acl-logging" Nov 29 07:18:34 crc kubenswrapper[4731]: E1129 07:18:34.277706 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d4585c4-ac4a-4268-b25e-47509c17cfe2" containerName="kube-rbac-proxy-ovn-metrics" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.277714 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d4585c4-ac4a-4268-b25e-47509c17cfe2" containerName="kube-rbac-proxy-ovn-metrics" Nov 29 07:18:34 crc kubenswrapper[4731]: E1129 07:18:34.277722 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d4585c4-ac4a-4268-b25e-47509c17cfe2" containerName="sbdb" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.277728 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d4585c4-ac4a-4268-b25e-47509c17cfe2" containerName="sbdb" Nov 29 07:18:34 crc kubenswrapper[4731]: E1129 07:18:34.277757 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d4585c4-ac4a-4268-b25e-47509c17cfe2" containerName="kube-rbac-proxy-node" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.277764 4731 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="7d4585c4-ac4a-4268-b25e-47509c17cfe2" containerName="kube-rbac-proxy-node" Nov 29 07:18:34 crc kubenswrapper[4731]: E1129 07:18:34.277773 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d4585c4-ac4a-4268-b25e-47509c17cfe2" containerName="kubecfg-setup" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.277780 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d4585c4-ac4a-4268-b25e-47509c17cfe2" containerName="kubecfg-setup" Nov 29 07:18:34 crc kubenswrapper[4731]: E1129 07:18:34.277790 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d4585c4-ac4a-4268-b25e-47509c17cfe2" containerName="ovnkube-controller" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.277797 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d4585c4-ac4a-4268-b25e-47509c17cfe2" containerName="ovnkube-controller" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.277999 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d4585c4-ac4a-4268-b25e-47509c17cfe2" containerName="kube-rbac-proxy-ovn-metrics" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.278010 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d4585c4-ac4a-4268-b25e-47509c17cfe2" containerName="ovn-controller" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.278020 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d4585c4-ac4a-4268-b25e-47509c17cfe2" containerName="ovnkube-controller" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.278030 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d4585c4-ac4a-4268-b25e-47509c17cfe2" containerName="ovnkube-controller" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.278039 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d4585c4-ac4a-4268-b25e-47509c17cfe2" containerName="sbdb" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.278069 4731 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="7d4585c4-ac4a-4268-b25e-47509c17cfe2" containerName="nbdb" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.278082 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d4585c4-ac4a-4268-b25e-47509c17cfe2" containerName="ovn-acl-logging" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.278092 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d4585c4-ac4a-4268-b25e-47509c17cfe2" containerName="kube-rbac-proxy-node" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.278100 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d4585c4-ac4a-4268-b25e-47509c17cfe2" containerName="ovnkube-controller" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.278107 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d4585c4-ac4a-4268-b25e-47509c17cfe2" containerName="northd" Nov 29 07:18:34 crc kubenswrapper[4731]: E1129 07:18:34.278247 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d4585c4-ac4a-4268-b25e-47509c17cfe2" containerName="ovnkube-controller" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.278255 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d4585c4-ac4a-4268-b25e-47509c17cfe2" containerName="ovnkube-controller" Nov 29 07:18:34 crc kubenswrapper[4731]: E1129 07:18:34.278264 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d4585c4-ac4a-4268-b25e-47509c17cfe2" containerName="ovnkube-controller" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.278272 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d4585c4-ac4a-4268-b25e-47509c17cfe2" containerName="ovnkube-controller" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.278414 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d4585c4-ac4a-4268-b25e-47509c17cfe2" containerName="ovnkube-controller" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.278428 4731 
memory_manager.go:354] "RemoveStaleState removing state" podUID="7d4585c4-ac4a-4268-b25e-47509c17cfe2" containerName="ovnkube-controller" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.281299 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-t8ngc" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.358101 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/7d4585c4-ac4a-4268-b25e-47509c17cfe2-run-systemd\") pod \"7d4585c4-ac4a-4268-b25e-47509c17cfe2\" (UID: \"7d4585c4-ac4a-4268-b25e-47509c17cfe2\") " Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.358177 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/7d4585c4-ac4a-4268-b25e-47509c17cfe2-host-kubelet\") pod \"7d4585c4-ac4a-4268-b25e-47509c17cfe2\" (UID: \"7d4585c4-ac4a-4268-b25e-47509c17cfe2\") " Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.358231 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/7d4585c4-ac4a-4268-b25e-47509c17cfe2-run-ovn\") pod \"7d4585c4-ac4a-4268-b25e-47509c17cfe2\" (UID: \"7d4585c4-ac4a-4268-b25e-47509c17cfe2\") " Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.358257 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/7d4585c4-ac4a-4268-b25e-47509c17cfe2-node-log\") pod \"7d4585c4-ac4a-4268-b25e-47509c17cfe2\" (UID: \"7d4585c4-ac4a-4268-b25e-47509c17cfe2\") " Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.358277 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7d4585c4-ac4a-4268-b25e-47509c17cfe2-host-run-netns\") pod 
\"7d4585c4-ac4a-4268-b25e-47509c17cfe2\" (UID: \"7d4585c4-ac4a-4268-b25e-47509c17cfe2\") " Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.358305 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7d4585c4-ac4a-4268-b25e-47509c17cfe2-ovn-node-metrics-cert\") pod \"7d4585c4-ac4a-4268-b25e-47509c17cfe2\" (UID: \"7d4585c4-ac4a-4268-b25e-47509c17cfe2\") " Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.358325 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7d4585c4-ac4a-4268-b25e-47509c17cfe2-host-run-ovn-kubernetes\") pod \"7d4585c4-ac4a-4268-b25e-47509c17cfe2\" (UID: \"7d4585c4-ac4a-4268-b25e-47509c17cfe2\") " Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.358341 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/7d4585c4-ac4a-4268-b25e-47509c17cfe2-log-socket\") pod \"7d4585c4-ac4a-4268-b25e-47509c17cfe2\" (UID: \"7d4585c4-ac4a-4268-b25e-47509c17cfe2\") " Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.358352 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d4585c4-ac4a-4268-b25e-47509c17cfe2-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "7d4585c4-ac4a-4268-b25e-47509c17cfe2" (UID: "7d4585c4-ac4a-4268-b25e-47509c17cfe2"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.358384 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7d4585c4-ac4a-4268-b25e-47509c17cfe2-etc-openvswitch\") pod \"7d4585c4-ac4a-4268-b25e-47509c17cfe2\" (UID: \"7d4585c4-ac4a-4268-b25e-47509c17cfe2\") " Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.358380 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d4585c4-ac4a-4268-b25e-47509c17cfe2-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "7d4585c4-ac4a-4268-b25e-47509c17cfe2" (UID: "7d4585c4-ac4a-4268-b25e-47509c17cfe2"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.358463 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d4585c4-ac4a-4268-b25e-47509c17cfe2-node-log" (OuterVolumeSpecName: "node-log") pod "7d4585c4-ac4a-4268-b25e-47509c17cfe2" (UID: "7d4585c4-ac4a-4268-b25e-47509c17cfe2"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.358479 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/7d4585c4-ac4a-4268-b25e-47509c17cfe2-host-slash\") pod \"7d4585c4-ac4a-4268-b25e-47509c17cfe2\" (UID: \"7d4585c4-ac4a-4268-b25e-47509c17cfe2\") " Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.358507 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/7d4585c4-ac4a-4268-b25e-47509c17cfe2-systemd-units\") pod \"7d4585c4-ac4a-4268-b25e-47509c17cfe2\" (UID: \"7d4585c4-ac4a-4268-b25e-47509c17cfe2\") " Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.358542 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnvzl\" (UniqueName: \"kubernetes.io/projected/7d4585c4-ac4a-4268-b25e-47509c17cfe2-kube-api-access-rnvzl\") pod \"7d4585c4-ac4a-4268-b25e-47509c17cfe2\" (UID: \"7d4585c4-ac4a-4268-b25e-47509c17cfe2\") " Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.358610 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7d4585c4-ac4a-4268-b25e-47509c17cfe2-var-lib-openvswitch\") pod \"7d4585c4-ac4a-4268-b25e-47509c17cfe2\" (UID: \"7d4585c4-ac4a-4268-b25e-47509c17cfe2\") " Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.358663 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7d4585c4-ac4a-4268-b25e-47509c17cfe2-ovnkube-config\") pod \"7d4585c4-ac4a-4268-b25e-47509c17cfe2\" (UID: \"7d4585c4-ac4a-4268-b25e-47509c17cfe2\") " Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.358699 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7d4585c4-ac4a-4268-b25e-47509c17cfe2-host-cni-bin\") pod \"7d4585c4-ac4a-4268-b25e-47509c17cfe2\" (UID: \"7d4585c4-ac4a-4268-b25e-47509c17cfe2\") " Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.358725 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7d4585c4-ac4a-4268-b25e-47509c17cfe2-host-var-lib-cni-networks-ovn-kubernetes\") pod \"7d4585c4-ac4a-4268-b25e-47509c17cfe2\" (UID: \"7d4585c4-ac4a-4268-b25e-47509c17cfe2\") " Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.358750 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7d4585c4-ac4a-4268-b25e-47509c17cfe2-host-cni-netd\") pod \"7d4585c4-ac4a-4268-b25e-47509c17cfe2\" (UID: \"7d4585c4-ac4a-4268-b25e-47509c17cfe2\") " Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.358766 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7d4585c4-ac4a-4268-b25e-47509c17cfe2-run-openvswitch\") pod \"7d4585c4-ac4a-4268-b25e-47509c17cfe2\" (UID: \"7d4585c4-ac4a-4268-b25e-47509c17cfe2\") " Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.358783 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7d4585c4-ac4a-4268-b25e-47509c17cfe2-env-overrides\") pod \"7d4585c4-ac4a-4268-b25e-47509c17cfe2\" (UID: \"7d4585c4-ac4a-4268-b25e-47509c17cfe2\") " Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.358807 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/7d4585c4-ac4a-4268-b25e-47509c17cfe2-ovnkube-script-lib\") pod \"7d4585c4-ac4a-4268-b25e-47509c17cfe2\" (UID: 
\"7d4585c4-ac4a-4268-b25e-47509c17cfe2\") " Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.359033 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c8b2c3db-a1a2-4a90-961d-d3a18cffe67c-env-overrides\") pod \"ovnkube-node-t8ngc\" (UID: \"c8b2c3db-a1a2-4a90-961d-d3a18cffe67c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t8ngc" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.359067 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c8b2c3db-a1a2-4a90-961d-d3a18cffe67c-host-cni-bin\") pod \"ovnkube-node-t8ngc\" (UID: \"c8b2c3db-a1a2-4a90-961d-d3a18cffe67c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t8ngc" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.359096 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c8b2c3db-a1a2-4a90-961d-d3a18cffe67c-etc-openvswitch\") pod \"ovnkube-node-t8ngc\" (UID: \"c8b2c3db-a1a2-4a90-961d-d3a18cffe67c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t8ngc" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.359116 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c8b2c3db-a1a2-4a90-961d-d3a18cffe67c-run-ovn\") pod \"ovnkube-node-t8ngc\" (UID: \"c8b2c3db-a1a2-4a90-961d-d3a18cffe67c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t8ngc" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.359151 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c8b2c3db-a1a2-4a90-961d-d3a18cffe67c-host-slash\") pod \"ovnkube-node-t8ngc\" (UID: \"c8b2c3db-a1a2-4a90-961d-d3a18cffe67c\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-t8ngc" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.358447 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d4585c4-ac4a-4268-b25e-47509c17cfe2-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "7d4585c4-ac4a-4268-b25e-47509c17cfe2" (UID: "7d4585c4-ac4a-4268-b25e-47509c17cfe2"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.359208 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d4585c4-ac4a-4268-b25e-47509c17cfe2-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "7d4585c4-ac4a-4268-b25e-47509c17cfe2" (UID: "7d4585c4-ac4a-4268-b25e-47509c17cfe2"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.358500 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d4585c4-ac4a-4268-b25e-47509c17cfe2-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "7d4585c4-ac4a-4268-b25e-47509c17cfe2" (UID: "7d4585c4-ac4a-4268-b25e-47509c17cfe2"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.359232 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d4585c4-ac4a-4268-b25e-47509c17cfe2-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "7d4585c4-ac4a-4268-b25e-47509c17cfe2" (UID: "7d4585c4-ac4a-4268-b25e-47509c17cfe2"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.358515 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d4585c4-ac4a-4268-b25e-47509c17cfe2-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "7d4585c4-ac4a-4268-b25e-47509c17cfe2" (UID: "7d4585c4-ac4a-4268-b25e-47509c17cfe2"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.358516 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d4585c4-ac4a-4268-b25e-47509c17cfe2-log-socket" (OuterVolumeSpecName: "log-socket") pod "7d4585c4-ac4a-4268-b25e-47509c17cfe2" (UID: "7d4585c4-ac4a-4268-b25e-47509c17cfe2"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.358539 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d4585c4-ac4a-4268-b25e-47509c17cfe2-host-slash" (OuterVolumeSpecName: "host-slash") pod "7d4585c4-ac4a-4268-b25e-47509c17cfe2" (UID: "7d4585c4-ac4a-4268-b25e-47509c17cfe2"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.358588 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d4585c4-ac4a-4268-b25e-47509c17cfe2-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "7d4585c4-ac4a-4268-b25e-47509c17cfe2" (UID: "7d4585c4-ac4a-4268-b25e-47509c17cfe2"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.359120 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d4585c4-ac4a-4268-b25e-47509c17cfe2-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "7d4585c4-ac4a-4268-b25e-47509c17cfe2" (UID: "7d4585c4-ac4a-4268-b25e-47509c17cfe2"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.359268 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d4585c4-ac4a-4268-b25e-47509c17cfe2-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "7d4585c4-ac4a-4268-b25e-47509c17cfe2" (UID: "7d4585c4-ac4a-4268-b25e-47509c17cfe2"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.359293 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d4585c4-ac4a-4268-b25e-47509c17cfe2-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "7d4585c4-ac4a-4268-b25e-47509c17cfe2" (UID: "7d4585c4-ac4a-4268-b25e-47509c17cfe2"). InnerVolumeSpecName "run-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.359170 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c8b2c3db-a1a2-4a90-961d-d3a18cffe67c-ovn-node-metrics-cert\") pod \"ovnkube-node-t8ngc\" (UID: \"c8b2c3db-a1a2-4a90-961d-d3a18cffe67c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t8ngc" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.359649 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c8b2c3db-a1a2-4a90-961d-d3a18cffe67c-log-socket\") pod \"ovnkube-node-t8ngc\" (UID: \"c8b2c3db-a1a2-4a90-961d-d3a18cffe67c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t8ngc" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.359742 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c8b2c3db-a1a2-4a90-961d-d3a18cffe67c-host-run-netns\") pod \"ovnkube-node-t8ngc\" (UID: \"c8b2c3db-a1a2-4a90-961d-d3a18cffe67c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t8ngc" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.359873 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5htz\" (UniqueName: \"kubernetes.io/projected/c8b2c3db-a1a2-4a90-961d-d3a18cffe67c-kube-api-access-p5htz\") pod \"ovnkube-node-t8ngc\" (UID: \"c8b2c3db-a1a2-4a90-961d-d3a18cffe67c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t8ngc" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.359875 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d4585c4-ac4a-4268-b25e-47509c17cfe2-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "7d4585c4-ac4a-4268-b25e-47509c17cfe2" (UID: 
"7d4585c4-ac4a-4268-b25e-47509c17cfe2"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.359999 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d4585c4-ac4a-4268-b25e-47509c17cfe2-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "7d4585c4-ac4a-4268-b25e-47509c17cfe2" (UID: "7d4585c4-ac4a-4268-b25e-47509c17cfe2"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.360063 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c8b2c3db-a1a2-4a90-961d-d3a18cffe67c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-t8ngc\" (UID: \"c8b2c3db-a1a2-4a90-961d-d3a18cffe67c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t8ngc" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.360062 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d4585c4-ac4a-4268-b25e-47509c17cfe2-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "7d4585c4-ac4a-4268-b25e-47509c17cfe2" (UID: "7d4585c4-ac4a-4268-b25e-47509c17cfe2"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.360188 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c8b2c3db-a1a2-4a90-961d-d3a18cffe67c-run-openvswitch\") pod \"ovnkube-node-t8ngc\" (UID: \"c8b2c3db-a1a2-4a90-961d-d3a18cffe67c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t8ngc" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.360257 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c8b2c3db-a1a2-4a90-961d-d3a18cffe67c-node-log\") pod \"ovnkube-node-t8ngc\" (UID: \"c8b2c3db-a1a2-4a90-961d-d3a18cffe67c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t8ngc" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.360281 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c8b2c3db-a1a2-4a90-961d-d3a18cffe67c-systemd-units\") pod \"ovnkube-node-t8ngc\" (UID: \"c8b2c3db-a1a2-4a90-961d-d3a18cffe67c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t8ngc" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.360308 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c8b2c3db-a1a2-4a90-961d-d3a18cffe67c-ovnkube-config\") pod \"ovnkube-node-t8ngc\" (UID: \"c8b2c3db-a1a2-4a90-961d-d3a18cffe67c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t8ngc" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.360350 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c8b2c3db-a1a2-4a90-961d-d3a18cffe67c-run-systemd\") pod \"ovnkube-node-t8ngc\" (UID: 
\"c8b2c3db-a1a2-4a90-961d-d3a18cffe67c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t8ngc" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.360419 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c8b2c3db-a1a2-4a90-961d-d3a18cffe67c-host-kubelet\") pod \"ovnkube-node-t8ngc\" (UID: \"c8b2c3db-a1a2-4a90-961d-d3a18cffe67c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t8ngc" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.360461 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c8b2c3db-a1a2-4a90-961d-d3a18cffe67c-host-cni-netd\") pod \"ovnkube-node-t8ngc\" (UID: \"c8b2c3db-a1a2-4a90-961d-d3a18cffe67c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t8ngc" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.360511 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c8b2c3db-a1a2-4a90-961d-d3a18cffe67c-host-run-ovn-kubernetes\") pod \"ovnkube-node-t8ngc\" (UID: \"c8b2c3db-a1a2-4a90-961d-d3a18cffe67c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t8ngc" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.360580 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c8b2c3db-a1a2-4a90-961d-d3a18cffe67c-var-lib-openvswitch\") pod \"ovnkube-node-t8ngc\" (UID: \"c8b2c3db-a1a2-4a90-961d-d3a18cffe67c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t8ngc" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.360609 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/c8b2c3db-a1a2-4a90-961d-d3a18cffe67c-ovnkube-script-lib\") pod \"ovnkube-node-t8ngc\" (UID: \"c8b2c3db-a1a2-4a90-961d-d3a18cffe67c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t8ngc" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.360717 4731 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7d4585c4-ac4a-4268-b25e-47509c17cfe2-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.360730 4731 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7d4585c4-ac4a-4268-b25e-47509c17cfe2-host-cni-bin\") on node \"crc\" DevicePath \"\"" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.360765 4731 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7d4585c4-ac4a-4268-b25e-47509c17cfe2-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.360778 4731 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7d4585c4-ac4a-4268-b25e-47509c17cfe2-host-cni-netd\") on node \"crc\" DevicePath \"\"" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.360788 4731 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7d4585c4-ac4a-4268-b25e-47509c17cfe2-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.360798 4731 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7d4585c4-ac4a-4268-b25e-47509c17cfe2-run-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.360809 4731 reconciler_common.go:293] "Volume detached for volume 
\"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/7d4585c4-ac4a-4268-b25e-47509c17cfe2-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.360823 4731 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/7d4585c4-ac4a-4268-b25e-47509c17cfe2-host-kubelet\") on node \"crc\" DevicePath \"\"" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.360831 4731 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/7d4585c4-ac4a-4268-b25e-47509c17cfe2-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.360841 4731 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/7d4585c4-ac4a-4268-b25e-47509c17cfe2-node-log\") on node \"crc\" DevicePath \"\"" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.360853 4731 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7d4585c4-ac4a-4268-b25e-47509c17cfe2-host-run-netns\") on node \"crc\" DevicePath \"\"" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.360864 4731 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7d4585c4-ac4a-4268-b25e-47509c17cfe2-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.360875 4731 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/7d4585c4-ac4a-4268-b25e-47509c17cfe2-log-socket\") on node \"crc\" DevicePath \"\"" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.360887 4731 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7d4585c4-ac4a-4268-b25e-47509c17cfe2-etc-openvswitch\") on node \"crc\" 
DevicePath \"\"" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.360896 4731 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/7d4585c4-ac4a-4268-b25e-47509c17cfe2-host-slash\") on node \"crc\" DevicePath \"\"" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.360907 4731 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/7d4585c4-ac4a-4268-b25e-47509c17cfe2-systemd-units\") on node \"crc\" DevicePath \"\"" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.360917 4731 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7d4585c4-ac4a-4268-b25e-47509c17cfe2-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.366034 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d4585c4-ac4a-4268-b25e-47509c17cfe2-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "7d4585c4-ac4a-4268-b25e-47509c17cfe2" (UID: "7d4585c4-ac4a-4268-b25e-47509c17cfe2"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.366106 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d4585c4-ac4a-4268-b25e-47509c17cfe2-kube-api-access-rnvzl" (OuterVolumeSpecName: "kube-api-access-rnvzl") pod "7d4585c4-ac4a-4268-b25e-47509c17cfe2" (UID: "7d4585c4-ac4a-4268-b25e-47509c17cfe2"). InnerVolumeSpecName "kube-api-access-rnvzl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.374248 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d4585c4-ac4a-4268-b25e-47509c17cfe2-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "7d4585c4-ac4a-4268-b25e-47509c17cfe2" (UID: "7d4585c4-ac4a-4268-b25e-47509c17cfe2"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.462353 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c8b2c3db-a1a2-4a90-961d-d3a18cffe67c-run-openvswitch\") pod \"ovnkube-node-t8ngc\" (UID: \"c8b2c3db-a1a2-4a90-961d-d3a18cffe67c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t8ngc" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.462449 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c8b2c3db-a1a2-4a90-961d-d3a18cffe67c-node-log\") pod \"ovnkube-node-t8ngc\" (UID: \"c8b2c3db-a1a2-4a90-961d-d3a18cffe67c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t8ngc" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.462476 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c8b2c3db-a1a2-4a90-961d-d3a18cffe67c-systemd-units\") pod \"ovnkube-node-t8ngc\" (UID: \"c8b2c3db-a1a2-4a90-961d-d3a18cffe67c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t8ngc" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.462501 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c8b2c3db-a1a2-4a90-961d-d3a18cffe67c-ovnkube-config\") pod \"ovnkube-node-t8ngc\" (UID: \"c8b2c3db-a1a2-4a90-961d-d3a18cffe67c\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-t8ngc" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.462517 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c8b2c3db-a1a2-4a90-961d-d3a18cffe67c-run-systemd\") pod \"ovnkube-node-t8ngc\" (UID: \"c8b2c3db-a1a2-4a90-961d-d3a18cffe67c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t8ngc" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.462535 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c8b2c3db-a1a2-4a90-961d-d3a18cffe67c-host-cni-netd\") pod \"ovnkube-node-t8ngc\" (UID: \"c8b2c3db-a1a2-4a90-961d-d3a18cffe67c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t8ngc" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.462555 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c8b2c3db-a1a2-4a90-961d-d3a18cffe67c-host-kubelet\") pod \"ovnkube-node-t8ngc\" (UID: \"c8b2c3db-a1a2-4a90-961d-d3a18cffe67c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t8ngc" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.462599 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c8b2c3db-a1a2-4a90-961d-d3a18cffe67c-host-run-ovn-kubernetes\") pod \"ovnkube-node-t8ngc\" (UID: \"c8b2c3db-a1a2-4a90-961d-d3a18cffe67c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t8ngc" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.462621 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c8b2c3db-a1a2-4a90-961d-d3a18cffe67c-var-lib-openvswitch\") pod \"ovnkube-node-t8ngc\" (UID: \"c8b2c3db-a1a2-4a90-961d-d3a18cffe67c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t8ngc" Nov 29 
07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.462607 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c8b2c3db-a1a2-4a90-961d-d3a18cffe67c-node-log\") pod \"ovnkube-node-t8ngc\" (UID: \"c8b2c3db-a1a2-4a90-961d-d3a18cffe67c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t8ngc" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.462643 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c8b2c3db-a1a2-4a90-961d-d3a18cffe67c-ovnkube-script-lib\") pod \"ovnkube-node-t8ngc\" (UID: \"c8b2c3db-a1a2-4a90-961d-d3a18cffe67c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t8ngc" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.462768 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c8b2c3db-a1a2-4a90-961d-d3a18cffe67c-env-overrides\") pod \"ovnkube-node-t8ngc\" (UID: \"c8b2c3db-a1a2-4a90-961d-d3a18cffe67c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t8ngc" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.462817 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c8b2c3db-a1a2-4a90-961d-d3a18cffe67c-host-cni-bin\") pod \"ovnkube-node-t8ngc\" (UID: \"c8b2c3db-a1a2-4a90-961d-d3a18cffe67c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t8ngc" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.462847 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c8b2c3db-a1a2-4a90-961d-d3a18cffe67c-etc-openvswitch\") pod \"ovnkube-node-t8ngc\" (UID: \"c8b2c3db-a1a2-4a90-961d-d3a18cffe67c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t8ngc" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.462880 4731 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c8b2c3db-a1a2-4a90-961d-d3a18cffe67c-run-ovn\") pod \"ovnkube-node-t8ngc\" (UID: \"c8b2c3db-a1a2-4a90-961d-d3a18cffe67c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t8ngc" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.462928 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c8b2c3db-a1a2-4a90-961d-d3a18cffe67c-host-slash\") pod \"ovnkube-node-t8ngc\" (UID: \"c8b2c3db-a1a2-4a90-961d-d3a18cffe67c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t8ngc" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.462954 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c8b2c3db-a1a2-4a90-961d-d3a18cffe67c-ovn-node-metrics-cert\") pod \"ovnkube-node-t8ngc\" (UID: \"c8b2c3db-a1a2-4a90-961d-d3a18cffe67c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t8ngc" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.462994 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c8b2c3db-a1a2-4a90-961d-d3a18cffe67c-log-socket\") pod \"ovnkube-node-t8ngc\" (UID: \"c8b2c3db-a1a2-4a90-961d-d3a18cffe67c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t8ngc" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.463053 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c8b2c3db-a1a2-4a90-961d-d3a18cffe67c-host-run-netns\") pod \"ovnkube-node-t8ngc\" (UID: \"c8b2c3db-a1a2-4a90-961d-d3a18cffe67c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t8ngc" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.463138 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5htz\" (UniqueName: 
\"kubernetes.io/projected/c8b2c3db-a1a2-4a90-961d-d3a18cffe67c-kube-api-access-p5htz\") pod \"ovnkube-node-t8ngc\" (UID: \"c8b2c3db-a1a2-4a90-961d-d3a18cffe67c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t8ngc" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.463200 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c8b2c3db-a1a2-4a90-961d-d3a18cffe67c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-t8ngc\" (UID: \"c8b2c3db-a1a2-4a90-961d-d3a18cffe67c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t8ngc" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.463294 4731 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/7d4585c4-ac4a-4268-b25e-47509c17cfe2-run-systemd\") on node \"crc\" DevicePath \"\"" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.463309 4731 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7d4585c4-ac4a-4268-b25e-47509c17cfe2-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.463321 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnvzl\" (UniqueName: \"kubernetes.io/projected/7d4585c4-ac4a-4268-b25e-47509c17cfe2-kube-api-access-rnvzl\") on node \"crc\" DevicePath \"\"" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.463359 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c8b2c3db-a1a2-4a90-961d-d3a18cffe67c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-t8ngc\" (UID: \"c8b2c3db-a1a2-4a90-961d-d3a18cffe67c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t8ngc" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.463465 4731 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c8b2c3db-a1a2-4a90-961d-d3a18cffe67c-ovnkube-script-lib\") pod \"ovnkube-node-t8ngc\" (UID: \"c8b2c3db-a1a2-4a90-961d-d3a18cffe67c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t8ngc" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.463524 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c8b2c3db-a1a2-4a90-961d-d3a18cffe67c-systemd-units\") pod \"ovnkube-node-t8ngc\" (UID: \"c8b2c3db-a1a2-4a90-961d-d3a18cffe67c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t8ngc" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.463680 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c8b2c3db-a1a2-4a90-961d-d3a18cffe67c-etc-openvswitch\") pod \"ovnkube-node-t8ngc\" (UID: \"c8b2c3db-a1a2-4a90-961d-d3a18cffe67c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t8ngc" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.463730 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c8b2c3db-a1a2-4a90-961d-d3a18cffe67c-host-kubelet\") pod \"ovnkube-node-t8ngc\" (UID: \"c8b2c3db-a1a2-4a90-961d-d3a18cffe67c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t8ngc" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.463825 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c8b2c3db-a1a2-4a90-961d-d3a18cffe67c-host-run-ovn-kubernetes\") pod \"ovnkube-node-t8ngc\" (UID: \"c8b2c3db-a1a2-4a90-961d-d3a18cffe67c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t8ngc" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.463747 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/c8b2c3db-a1a2-4a90-961d-d3a18cffe67c-host-cni-bin\") pod \"ovnkube-node-t8ngc\" (UID: \"c8b2c3db-a1a2-4a90-961d-d3a18cffe67c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t8ngc" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.463871 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c8b2c3db-a1a2-4a90-961d-d3a18cffe67c-run-systemd\") pod \"ovnkube-node-t8ngc\" (UID: \"c8b2c3db-a1a2-4a90-961d-d3a18cffe67c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t8ngc" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.463901 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c8b2c3db-a1a2-4a90-961d-d3a18cffe67c-var-lib-openvswitch\") pod \"ovnkube-node-t8ngc\" (UID: \"c8b2c3db-a1a2-4a90-961d-d3a18cffe67c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t8ngc" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.463905 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c8b2c3db-a1a2-4a90-961d-d3a18cffe67c-host-cni-netd\") pod \"ovnkube-node-t8ngc\" (UID: \"c8b2c3db-a1a2-4a90-961d-d3a18cffe67c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t8ngc" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.463923 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c8b2c3db-a1a2-4a90-961d-d3a18cffe67c-env-overrides\") pod \"ovnkube-node-t8ngc\" (UID: \"c8b2c3db-a1a2-4a90-961d-d3a18cffe67c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t8ngc" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.463945 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c8b2c3db-a1a2-4a90-961d-d3a18cffe67c-log-socket\") pod \"ovnkube-node-t8ngc\" (UID: 
\"c8b2c3db-a1a2-4a90-961d-d3a18cffe67c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t8ngc" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.463952 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c8b2c3db-a1a2-4a90-961d-d3a18cffe67c-host-run-netns\") pod \"ovnkube-node-t8ngc\" (UID: \"c8b2c3db-a1a2-4a90-961d-d3a18cffe67c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t8ngc" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.463928 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c8b2c3db-a1a2-4a90-961d-d3a18cffe67c-run-ovn\") pod \"ovnkube-node-t8ngc\" (UID: \"c8b2c3db-a1a2-4a90-961d-d3a18cffe67c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t8ngc" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.464031 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c8b2c3db-a1a2-4a90-961d-d3a18cffe67c-ovnkube-config\") pod \"ovnkube-node-t8ngc\" (UID: \"c8b2c3db-a1a2-4a90-961d-d3a18cffe67c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t8ngc" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.463700 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c8b2c3db-a1a2-4a90-961d-d3a18cffe67c-host-slash\") pod \"ovnkube-node-t8ngc\" (UID: \"c8b2c3db-a1a2-4a90-961d-d3a18cffe67c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t8ngc" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.464279 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c8b2c3db-a1a2-4a90-961d-d3a18cffe67c-run-openvswitch\") pod \"ovnkube-node-t8ngc\" (UID: \"c8b2c3db-a1a2-4a90-961d-d3a18cffe67c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t8ngc" Nov 29 07:18:34 crc kubenswrapper[4731]: 
I1129 07:18:34.467380 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c8b2c3db-a1a2-4a90-961d-d3a18cffe67c-ovn-node-metrics-cert\") pod \"ovnkube-node-t8ngc\" (UID: \"c8b2c3db-a1a2-4a90-961d-d3a18cffe67c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t8ngc" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.482680 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5htz\" (UniqueName: \"kubernetes.io/projected/c8b2c3db-a1a2-4a90-961d-d3a18cffe67c-kube-api-access-p5htz\") pod \"ovnkube-node-t8ngc\" (UID: \"c8b2c3db-a1a2-4a90-961d-d3a18cffe67c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t8ngc" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.606197 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-t8ngc" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.764533 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t8ngc" event={"ID":"c8b2c3db-a1a2-4a90-961d-d3a18cffe67c","Type":"ContainerStarted","Data":"be026d5c032f6f3377425ff82b4a4fccaa38fb344f1b1b5ad0b6a82ad1971e65"} Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.767403 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5rsbt_5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8/kube-multus/2.log" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.768167 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5rsbt_5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8/kube-multus/1.log" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.768204 4731 generic.go:334] "Generic (PLEG): container finished" podID="5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8" containerID="7a94cd2b3571722a673cd8b315be00d962733b4fdc954fffd6cb25b7c577b0c4" exitCode=2 Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.768262 4731 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-5rsbt" event={"ID":"5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8","Type":"ContainerDied","Data":"7a94cd2b3571722a673cd8b315be00d962733b4fdc954fffd6cb25b7c577b0c4"} Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.768308 4731 scope.go:117] "RemoveContainer" containerID="bae9d331b627f3cb340763c8fae4df7b74979611e8643e081beaa89f127f9c86" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.768758 4731 scope.go:117] "RemoveContainer" containerID="7a94cd2b3571722a673cd8b315be00d962733b4fdc954fffd6cb25b7c577b0c4" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.774604 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x4t5j_7d4585c4-ac4a-4268-b25e-47509c17cfe2/ovnkube-controller/3.log" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.778040 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x4t5j_7d4585c4-ac4a-4268-b25e-47509c17cfe2/ovn-acl-logging/0.log" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.778911 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x4t5j_7d4585c4-ac4a-4268-b25e-47509c17cfe2/ovn-controller/0.log" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.779434 4731 generic.go:334] "Generic (PLEG): container finished" podID="7d4585c4-ac4a-4268-b25e-47509c17cfe2" containerID="7e89c5211fc1a0b3e3a69a2b868401a20a643f1f5797275d756d27ee967e8691" exitCode=0 Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.779528 4731 generic.go:334] "Generic (PLEG): container finished" podID="7d4585c4-ac4a-4268-b25e-47509c17cfe2" containerID="9a6b60ccbba0152ec034f29ebd79fc5f35ee869bcbec1983425f40df246c463c" exitCode=0 Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.779625 4731 generic.go:334] "Generic (PLEG): container finished" podID="7d4585c4-ac4a-4268-b25e-47509c17cfe2" 
containerID="64a3bcebacca0fb7288976faa2ee468f1496339cc6deba0ce36b29d0537d493f" exitCode=0 Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.779552 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.779693 4731 generic.go:334] "Generic (PLEG): container finished" podID="7d4585c4-ac4a-4268-b25e-47509c17cfe2" containerID="37e2099f7c7618e48c7ca7d3a76e9e63e4af75068a72efb7b27b3565c270787f" exitCode=0 Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.780598 4731 generic.go:334] "Generic (PLEG): container finished" podID="7d4585c4-ac4a-4268-b25e-47509c17cfe2" containerID="77f2885a107dcb4bca4dd71970f28cd5c82abd26240a9b47a25bab79d7a6cfca" exitCode=0 Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.780626 4731 generic.go:334] "Generic (PLEG): container finished" podID="7d4585c4-ac4a-4268-b25e-47509c17cfe2" containerID="2c34e10091af35202da4f97804418c6232aaccd75303ec402ed3f52de19eba30" exitCode=0 Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.780641 4731 generic.go:334] "Generic (PLEG): container finished" podID="7d4585c4-ac4a-4268-b25e-47509c17cfe2" containerID="c7ec6b9ad02e848a0669ac486da3f2ccc8ca363136fcd40618c45f16e64388bc" exitCode=143 Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.780653 4731 generic.go:334] "Generic (PLEG): container finished" podID="7d4585c4-ac4a-4268-b25e-47509c17cfe2" containerID="6fd71ed13728cbf7ab7ade3184cf8b579a598046c6cccdd7796f757aa7fc1c0a" exitCode=143 Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.779478 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" event={"ID":"7d4585c4-ac4a-4268-b25e-47509c17cfe2","Type":"ContainerDied","Data":"7e89c5211fc1a0b3e3a69a2b868401a20a643f1f5797275d756d27ee967e8691"} Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.780703 4731 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" event={"ID":"7d4585c4-ac4a-4268-b25e-47509c17cfe2","Type":"ContainerDied","Data":"9a6b60ccbba0152ec034f29ebd79fc5f35ee869bcbec1983425f40df246c463c"} Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.780718 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" event={"ID":"7d4585c4-ac4a-4268-b25e-47509c17cfe2","Type":"ContainerDied","Data":"64a3bcebacca0fb7288976faa2ee468f1496339cc6deba0ce36b29d0537d493f"} Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.780729 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" event={"ID":"7d4585c4-ac4a-4268-b25e-47509c17cfe2","Type":"ContainerDied","Data":"37e2099f7c7618e48c7ca7d3a76e9e63e4af75068a72efb7b27b3565c270787f"} Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.780741 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" event={"ID":"7d4585c4-ac4a-4268-b25e-47509c17cfe2","Type":"ContainerDied","Data":"77f2885a107dcb4bca4dd71970f28cd5c82abd26240a9b47a25bab79d7a6cfca"} Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.780751 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" event={"ID":"7d4585c4-ac4a-4268-b25e-47509c17cfe2","Type":"ContainerDied","Data":"2c34e10091af35202da4f97804418c6232aaccd75303ec402ed3f52de19eba30"} Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.780762 4731 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7e89c5211fc1a0b3e3a69a2b868401a20a643f1f5797275d756d27ee967e8691"} Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.780775 4731 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"90ceaa53a40826a30748a2057fff56bcb3597cf73ed5e0a18bc45d98e0e6c1b6"} Nov 29 07:18:34 crc 
kubenswrapper[4731]: I1129 07:18:34.780781 4731 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9a6b60ccbba0152ec034f29ebd79fc5f35ee869bcbec1983425f40df246c463c"} Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.780787 4731 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"64a3bcebacca0fb7288976faa2ee468f1496339cc6deba0ce36b29d0537d493f"} Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.780793 4731 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"37e2099f7c7618e48c7ca7d3a76e9e63e4af75068a72efb7b27b3565c270787f"} Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.780799 4731 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"77f2885a107dcb4bca4dd71970f28cd5c82abd26240a9b47a25bab79d7a6cfca"} Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.780804 4731 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2c34e10091af35202da4f97804418c6232aaccd75303ec402ed3f52de19eba30"} Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.780809 4731 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c7ec6b9ad02e848a0669ac486da3f2ccc8ca363136fcd40618c45f16e64388bc"} Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.780815 4731 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6fd71ed13728cbf7ab7ade3184cf8b579a598046c6cccdd7796f757aa7fc1c0a"} Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.780820 4731 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0b769a8a935c8eb447fe825cfb1b4c91be5abbe5992798442096dd3c00ce7c39"} Nov 29 07:18:34 crc 
kubenswrapper[4731]: I1129 07:18:34.780827 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" event={"ID":"7d4585c4-ac4a-4268-b25e-47509c17cfe2","Type":"ContainerDied","Data":"c7ec6b9ad02e848a0669ac486da3f2ccc8ca363136fcd40618c45f16e64388bc"} Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.780836 4731 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7e89c5211fc1a0b3e3a69a2b868401a20a643f1f5797275d756d27ee967e8691"} Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.780842 4731 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"90ceaa53a40826a30748a2057fff56bcb3597cf73ed5e0a18bc45d98e0e6c1b6"} Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.780849 4731 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9a6b60ccbba0152ec034f29ebd79fc5f35ee869bcbec1983425f40df246c463c"} Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.780855 4731 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"64a3bcebacca0fb7288976faa2ee468f1496339cc6deba0ce36b29d0537d493f"} Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.780861 4731 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"37e2099f7c7618e48c7ca7d3a76e9e63e4af75068a72efb7b27b3565c270787f"} Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.780866 4731 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"77f2885a107dcb4bca4dd71970f28cd5c82abd26240a9b47a25bab79d7a6cfca"} Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.780872 4731 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"2c34e10091af35202da4f97804418c6232aaccd75303ec402ed3f52de19eba30"} Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.780878 4731 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c7ec6b9ad02e848a0669ac486da3f2ccc8ca363136fcd40618c45f16e64388bc"} Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.780883 4731 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6fd71ed13728cbf7ab7ade3184cf8b579a598046c6cccdd7796f757aa7fc1c0a"} Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.780889 4731 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0b769a8a935c8eb447fe825cfb1b4c91be5abbe5992798442096dd3c00ce7c39"} Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.780896 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" event={"ID":"7d4585c4-ac4a-4268-b25e-47509c17cfe2","Type":"ContainerDied","Data":"6fd71ed13728cbf7ab7ade3184cf8b579a598046c6cccdd7796f757aa7fc1c0a"} Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.780904 4731 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7e89c5211fc1a0b3e3a69a2b868401a20a643f1f5797275d756d27ee967e8691"} Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.780910 4731 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"90ceaa53a40826a30748a2057fff56bcb3597cf73ed5e0a18bc45d98e0e6c1b6"} Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.780917 4731 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9a6b60ccbba0152ec034f29ebd79fc5f35ee869bcbec1983425f40df246c463c"} Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.780923 4731 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"64a3bcebacca0fb7288976faa2ee468f1496339cc6deba0ce36b29d0537d493f"} Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.780929 4731 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"37e2099f7c7618e48c7ca7d3a76e9e63e4af75068a72efb7b27b3565c270787f"} Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.780936 4731 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"77f2885a107dcb4bca4dd71970f28cd5c82abd26240a9b47a25bab79d7a6cfca"} Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.780942 4731 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2c34e10091af35202da4f97804418c6232aaccd75303ec402ed3f52de19eba30"} Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.780948 4731 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c7ec6b9ad02e848a0669ac486da3f2ccc8ca363136fcd40618c45f16e64388bc"} Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.780954 4731 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6fd71ed13728cbf7ab7ade3184cf8b579a598046c6cccdd7796f757aa7fc1c0a"} Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.780961 4731 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0b769a8a935c8eb447fe825cfb1b4c91be5abbe5992798442096dd3c00ce7c39"} Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.780968 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4t5j" event={"ID":"7d4585c4-ac4a-4268-b25e-47509c17cfe2","Type":"ContainerDied","Data":"069e5a4a808e0afe3fbc3ba3fd78e91a237e6f0e24c0fe2ad992a6c2a40bc7c2"} Nov 29 
07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.780976 4731 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7e89c5211fc1a0b3e3a69a2b868401a20a643f1f5797275d756d27ee967e8691"} Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.780983 4731 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"90ceaa53a40826a30748a2057fff56bcb3597cf73ed5e0a18bc45d98e0e6c1b6"} Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.780989 4731 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9a6b60ccbba0152ec034f29ebd79fc5f35ee869bcbec1983425f40df246c463c"} Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.780995 4731 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"64a3bcebacca0fb7288976faa2ee468f1496339cc6deba0ce36b29d0537d493f"} Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.781001 4731 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"37e2099f7c7618e48c7ca7d3a76e9e63e4af75068a72efb7b27b3565c270787f"} Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.781006 4731 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"77f2885a107dcb4bca4dd71970f28cd5c82abd26240a9b47a25bab79d7a6cfca"} Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.781013 4731 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2c34e10091af35202da4f97804418c6232aaccd75303ec402ed3f52de19eba30"} Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.781018 4731 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c7ec6b9ad02e848a0669ac486da3f2ccc8ca363136fcd40618c45f16e64388bc"} Nov 29 
07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.781024 4731 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6fd71ed13728cbf7ab7ade3184cf8b579a598046c6cccdd7796f757aa7fc1c0a"} Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.781029 4731 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0b769a8a935c8eb447fe825cfb1b4c91be5abbe5992798442096dd3c00ce7c39"} Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.831674 4731 scope.go:117] "RemoveContainer" containerID="7e89c5211fc1a0b3e3a69a2b868401a20a643f1f5797275d756d27ee967e8691" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.866320 4731 scope.go:117] "RemoveContainer" containerID="90ceaa53a40826a30748a2057fff56bcb3597cf73ed5e0a18bc45d98e0e6c1b6" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.871660 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-x4t5j"] Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.877703 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-x4t5j"] Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.905087 4731 scope.go:117] "RemoveContainer" containerID="9a6b60ccbba0152ec034f29ebd79fc5f35ee869bcbec1983425f40df246c463c" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.923119 4731 scope.go:117] "RemoveContainer" containerID="64a3bcebacca0fb7288976faa2ee468f1496339cc6deba0ce36b29d0537d493f" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.941440 4731 scope.go:117] "RemoveContainer" containerID="37e2099f7c7618e48c7ca7d3a76e9e63e4af75068a72efb7b27b3565c270787f" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.960425 4731 scope.go:117] "RemoveContainer" containerID="77f2885a107dcb4bca4dd71970f28cd5c82abd26240a9b47a25bab79d7a6cfca" Nov 29 07:18:34 crc kubenswrapper[4731]: I1129 07:18:34.977599 4731 scope.go:117] 
"RemoveContainer" containerID="2c34e10091af35202da4f97804418c6232aaccd75303ec402ed3f52de19eba30" Nov 29 07:18:35 crc kubenswrapper[4731]: I1129 07:18:35.012758 4731 scope.go:117] "RemoveContainer" containerID="c7ec6b9ad02e848a0669ac486da3f2ccc8ca363136fcd40618c45f16e64388bc" Nov 29 07:18:35 crc kubenswrapper[4731]: I1129 07:18:35.037808 4731 scope.go:117] "RemoveContainer" containerID="6fd71ed13728cbf7ab7ade3184cf8b579a598046c6cccdd7796f757aa7fc1c0a" Nov 29 07:18:35 crc kubenswrapper[4731]: I1129 07:18:35.059746 4731 scope.go:117] "RemoveContainer" containerID="0b769a8a935c8eb447fe825cfb1b4c91be5abbe5992798442096dd3c00ce7c39" Nov 29 07:18:35 crc kubenswrapper[4731]: I1129 07:18:35.075472 4731 scope.go:117] "RemoveContainer" containerID="7e89c5211fc1a0b3e3a69a2b868401a20a643f1f5797275d756d27ee967e8691" Nov 29 07:18:35 crc kubenswrapper[4731]: E1129 07:18:35.076095 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e89c5211fc1a0b3e3a69a2b868401a20a643f1f5797275d756d27ee967e8691\": container with ID starting with 7e89c5211fc1a0b3e3a69a2b868401a20a643f1f5797275d756d27ee967e8691 not found: ID does not exist" containerID="7e89c5211fc1a0b3e3a69a2b868401a20a643f1f5797275d756d27ee967e8691" Nov 29 07:18:35 crc kubenswrapper[4731]: I1129 07:18:35.076179 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e89c5211fc1a0b3e3a69a2b868401a20a643f1f5797275d756d27ee967e8691"} err="failed to get container status \"7e89c5211fc1a0b3e3a69a2b868401a20a643f1f5797275d756d27ee967e8691\": rpc error: code = NotFound desc = could not find container \"7e89c5211fc1a0b3e3a69a2b868401a20a643f1f5797275d756d27ee967e8691\": container with ID starting with 7e89c5211fc1a0b3e3a69a2b868401a20a643f1f5797275d756d27ee967e8691 not found: ID does not exist" Nov 29 07:18:35 crc kubenswrapper[4731]: I1129 07:18:35.076218 4731 scope.go:117] "RemoveContainer" 
containerID="90ceaa53a40826a30748a2057fff56bcb3597cf73ed5e0a18bc45d98e0e6c1b6" Nov 29 07:18:35 crc kubenswrapper[4731]: E1129 07:18:35.076718 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90ceaa53a40826a30748a2057fff56bcb3597cf73ed5e0a18bc45d98e0e6c1b6\": container with ID starting with 90ceaa53a40826a30748a2057fff56bcb3597cf73ed5e0a18bc45d98e0e6c1b6 not found: ID does not exist" containerID="90ceaa53a40826a30748a2057fff56bcb3597cf73ed5e0a18bc45d98e0e6c1b6" Nov 29 07:18:35 crc kubenswrapper[4731]: I1129 07:18:35.076765 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90ceaa53a40826a30748a2057fff56bcb3597cf73ed5e0a18bc45d98e0e6c1b6"} err="failed to get container status \"90ceaa53a40826a30748a2057fff56bcb3597cf73ed5e0a18bc45d98e0e6c1b6\": rpc error: code = NotFound desc = could not find container \"90ceaa53a40826a30748a2057fff56bcb3597cf73ed5e0a18bc45d98e0e6c1b6\": container with ID starting with 90ceaa53a40826a30748a2057fff56bcb3597cf73ed5e0a18bc45d98e0e6c1b6 not found: ID does not exist" Nov 29 07:18:35 crc kubenswrapper[4731]: I1129 07:18:35.076800 4731 scope.go:117] "RemoveContainer" containerID="9a6b60ccbba0152ec034f29ebd79fc5f35ee869bcbec1983425f40df246c463c" Nov 29 07:18:35 crc kubenswrapper[4731]: E1129 07:18:35.077106 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a6b60ccbba0152ec034f29ebd79fc5f35ee869bcbec1983425f40df246c463c\": container with ID starting with 9a6b60ccbba0152ec034f29ebd79fc5f35ee869bcbec1983425f40df246c463c not found: ID does not exist" containerID="9a6b60ccbba0152ec034f29ebd79fc5f35ee869bcbec1983425f40df246c463c" Nov 29 07:18:35 crc kubenswrapper[4731]: I1129 07:18:35.077142 4731 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"9a6b60ccbba0152ec034f29ebd79fc5f35ee869bcbec1983425f40df246c463c"} err="failed to get container status \"9a6b60ccbba0152ec034f29ebd79fc5f35ee869bcbec1983425f40df246c463c\": rpc error: code = NotFound desc = could not find container \"9a6b60ccbba0152ec034f29ebd79fc5f35ee869bcbec1983425f40df246c463c\": container with ID starting with 9a6b60ccbba0152ec034f29ebd79fc5f35ee869bcbec1983425f40df246c463c not found: ID does not exist" Nov 29 07:18:35 crc kubenswrapper[4731]: I1129 07:18:35.077163 4731 scope.go:117] "RemoveContainer" containerID="64a3bcebacca0fb7288976faa2ee468f1496339cc6deba0ce36b29d0537d493f" Nov 29 07:18:35 crc kubenswrapper[4731]: E1129 07:18:35.077446 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64a3bcebacca0fb7288976faa2ee468f1496339cc6deba0ce36b29d0537d493f\": container with ID starting with 64a3bcebacca0fb7288976faa2ee468f1496339cc6deba0ce36b29d0537d493f not found: ID does not exist" containerID="64a3bcebacca0fb7288976faa2ee468f1496339cc6deba0ce36b29d0537d493f" Nov 29 07:18:35 crc kubenswrapper[4731]: I1129 07:18:35.077476 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64a3bcebacca0fb7288976faa2ee468f1496339cc6deba0ce36b29d0537d493f"} err="failed to get container status \"64a3bcebacca0fb7288976faa2ee468f1496339cc6deba0ce36b29d0537d493f\": rpc error: code = NotFound desc = could not find container \"64a3bcebacca0fb7288976faa2ee468f1496339cc6deba0ce36b29d0537d493f\": container with ID starting with 64a3bcebacca0fb7288976faa2ee468f1496339cc6deba0ce36b29d0537d493f not found: ID does not exist" Nov 29 07:18:35 crc kubenswrapper[4731]: I1129 07:18:35.077496 4731 scope.go:117] "RemoveContainer" containerID="37e2099f7c7618e48c7ca7d3a76e9e63e4af75068a72efb7b27b3565c270787f" Nov 29 07:18:35 crc kubenswrapper[4731]: E1129 07:18:35.077880 4731 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"37e2099f7c7618e48c7ca7d3a76e9e63e4af75068a72efb7b27b3565c270787f\": container with ID starting with 37e2099f7c7618e48c7ca7d3a76e9e63e4af75068a72efb7b27b3565c270787f not found: ID does not exist" containerID="37e2099f7c7618e48c7ca7d3a76e9e63e4af75068a72efb7b27b3565c270787f" Nov 29 07:18:35 crc kubenswrapper[4731]: I1129 07:18:35.077915 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37e2099f7c7618e48c7ca7d3a76e9e63e4af75068a72efb7b27b3565c270787f"} err="failed to get container status \"37e2099f7c7618e48c7ca7d3a76e9e63e4af75068a72efb7b27b3565c270787f\": rpc error: code = NotFound desc = could not find container \"37e2099f7c7618e48c7ca7d3a76e9e63e4af75068a72efb7b27b3565c270787f\": container with ID starting with 37e2099f7c7618e48c7ca7d3a76e9e63e4af75068a72efb7b27b3565c270787f not found: ID does not exist" Nov 29 07:18:35 crc kubenswrapper[4731]: I1129 07:18:35.077934 4731 scope.go:117] "RemoveContainer" containerID="77f2885a107dcb4bca4dd71970f28cd5c82abd26240a9b47a25bab79d7a6cfca" Nov 29 07:18:35 crc kubenswrapper[4731]: E1129 07:18:35.078262 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77f2885a107dcb4bca4dd71970f28cd5c82abd26240a9b47a25bab79d7a6cfca\": container with ID starting with 77f2885a107dcb4bca4dd71970f28cd5c82abd26240a9b47a25bab79d7a6cfca not found: ID does not exist" containerID="77f2885a107dcb4bca4dd71970f28cd5c82abd26240a9b47a25bab79d7a6cfca" Nov 29 07:18:35 crc kubenswrapper[4731]: I1129 07:18:35.078295 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77f2885a107dcb4bca4dd71970f28cd5c82abd26240a9b47a25bab79d7a6cfca"} err="failed to get container status \"77f2885a107dcb4bca4dd71970f28cd5c82abd26240a9b47a25bab79d7a6cfca\": rpc error: code = NotFound desc = could not find container 
\"77f2885a107dcb4bca4dd71970f28cd5c82abd26240a9b47a25bab79d7a6cfca\": container with ID starting with 77f2885a107dcb4bca4dd71970f28cd5c82abd26240a9b47a25bab79d7a6cfca not found: ID does not exist" Nov 29 07:18:35 crc kubenswrapper[4731]: I1129 07:18:35.078318 4731 scope.go:117] "RemoveContainer" containerID="2c34e10091af35202da4f97804418c6232aaccd75303ec402ed3f52de19eba30" Nov 29 07:18:35 crc kubenswrapper[4731]: E1129 07:18:35.078624 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c34e10091af35202da4f97804418c6232aaccd75303ec402ed3f52de19eba30\": container with ID starting with 2c34e10091af35202da4f97804418c6232aaccd75303ec402ed3f52de19eba30 not found: ID does not exist" containerID="2c34e10091af35202da4f97804418c6232aaccd75303ec402ed3f52de19eba30" Nov 29 07:18:35 crc kubenswrapper[4731]: I1129 07:18:35.078667 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c34e10091af35202da4f97804418c6232aaccd75303ec402ed3f52de19eba30"} err="failed to get container status \"2c34e10091af35202da4f97804418c6232aaccd75303ec402ed3f52de19eba30\": rpc error: code = NotFound desc = could not find container \"2c34e10091af35202da4f97804418c6232aaccd75303ec402ed3f52de19eba30\": container with ID starting with 2c34e10091af35202da4f97804418c6232aaccd75303ec402ed3f52de19eba30 not found: ID does not exist" Nov 29 07:18:35 crc kubenswrapper[4731]: I1129 07:18:35.078684 4731 scope.go:117] "RemoveContainer" containerID="c7ec6b9ad02e848a0669ac486da3f2ccc8ca363136fcd40618c45f16e64388bc" Nov 29 07:18:35 crc kubenswrapper[4731]: E1129 07:18:35.079243 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7ec6b9ad02e848a0669ac486da3f2ccc8ca363136fcd40618c45f16e64388bc\": container with ID starting with c7ec6b9ad02e848a0669ac486da3f2ccc8ca363136fcd40618c45f16e64388bc not found: ID does not exist" 
containerID="c7ec6b9ad02e848a0669ac486da3f2ccc8ca363136fcd40618c45f16e64388bc" Nov 29 07:18:35 crc kubenswrapper[4731]: I1129 07:18:35.079279 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7ec6b9ad02e848a0669ac486da3f2ccc8ca363136fcd40618c45f16e64388bc"} err="failed to get container status \"c7ec6b9ad02e848a0669ac486da3f2ccc8ca363136fcd40618c45f16e64388bc\": rpc error: code = NotFound desc = could not find container \"c7ec6b9ad02e848a0669ac486da3f2ccc8ca363136fcd40618c45f16e64388bc\": container with ID starting with c7ec6b9ad02e848a0669ac486da3f2ccc8ca363136fcd40618c45f16e64388bc not found: ID does not exist" Nov 29 07:18:35 crc kubenswrapper[4731]: I1129 07:18:35.079308 4731 scope.go:117] "RemoveContainer" containerID="6fd71ed13728cbf7ab7ade3184cf8b579a598046c6cccdd7796f757aa7fc1c0a" Nov 29 07:18:35 crc kubenswrapper[4731]: E1129 07:18:35.079634 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6fd71ed13728cbf7ab7ade3184cf8b579a598046c6cccdd7796f757aa7fc1c0a\": container with ID starting with 6fd71ed13728cbf7ab7ade3184cf8b579a598046c6cccdd7796f757aa7fc1c0a not found: ID does not exist" containerID="6fd71ed13728cbf7ab7ade3184cf8b579a598046c6cccdd7796f757aa7fc1c0a" Nov 29 07:18:35 crc kubenswrapper[4731]: I1129 07:18:35.079675 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fd71ed13728cbf7ab7ade3184cf8b579a598046c6cccdd7796f757aa7fc1c0a"} err="failed to get container status \"6fd71ed13728cbf7ab7ade3184cf8b579a598046c6cccdd7796f757aa7fc1c0a\": rpc error: code = NotFound desc = could not find container \"6fd71ed13728cbf7ab7ade3184cf8b579a598046c6cccdd7796f757aa7fc1c0a\": container with ID starting with 6fd71ed13728cbf7ab7ade3184cf8b579a598046c6cccdd7796f757aa7fc1c0a not found: ID does not exist" Nov 29 07:18:35 crc kubenswrapper[4731]: I1129 07:18:35.079697 4731 scope.go:117] 
"RemoveContainer" containerID="0b769a8a935c8eb447fe825cfb1b4c91be5abbe5992798442096dd3c00ce7c39" Nov 29 07:18:35 crc kubenswrapper[4731]: E1129 07:18:35.080008 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b769a8a935c8eb447fe825cfb1b4c91be5abbe5992798442096dd3c00ce7c39\": container with ID starting with 0b769a8a935c8eb447fe825cfb1b4c91be5abbe5992798442096dd3c00ce7c39 not found: ID does not exist" containerID="0b769a8a935c8eb447fe825cfb1b4c91be5abbe5992798442096dd3c00ce7c39" Nov 29 07:18:35 crc kubenswrapper[4731]: I1129 07:18:35.080041 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b769a8a935c8eb447fe825cfb1b4c91be5abbe5992798442096dd3c00ce7c39"} err="failed to get container status \"0b769a8a935c8eb447fe825cfb1b4c91be5abbe5992798442096dd3c00ce7c39\": rpc error: code = NotFound desc = could not find container \"0b769a8a935c8eb447fe825cfb1b4c91be5abbe5992798442096dd3c00ce7c39\": container with ID starting with 0b769a8a935c8eb447fe825cfb1b4c91be5abbe5992798442096dd3c00ce7c39 not found: ID does not exist" Nov 29 07:18:35 crc kubenswrapper[4731]: I1129 07:18:35.080060 4731 scope.go:117] "RemoveContainer" containerID="7e89c5211fc1a0b3e3a69a2b868401a20a643f1f5797275d756d27ee967e8691" Nov 29 07:18:35 crc kubenswrapper[4731]: I1129 07:18:35.080360 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e89c5211fc1a0b3e3a69a2b868401a20a643f1f5797275d756d27ee967e8691"} err="failed to get container status \"7e89c5211fc1a0b3e3a69a2b868401a20a643f1f5797275d756d27ee967e8691\": rpc error: code = NotFound desc = could not find container \"7e89c5211fc1a0b3e3a69a2b868401a20a643f1f5797275d756d27ee967e8691\": container with ID starting with 7e89c5211fc1a0b3e3a69a2b868401a20a643f1f5797275d756d27ee967e8691 not found: ID does not exist" Nov 29 07:18:35 crc kubenswrapper[4731]: I1129 07:18:35.080389 4731 
scope.go:117] "RemoveContainer" containerID="90ceaa53a40826a30748a2057fff56bcb3597cf73ed5e0a18bc45d98e0e6c1b6" Nov 29 07:18:35 crc kubenswrapper[4731]: I1129 07:18:35.080771 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90ceaa53a40826a30748a2057fff56bcb3597cf73ed5e0a18bc45d98e0e6c1b6"} err="failed to get container status \"90ceaa53a40826a30748a2057fff56bcb3597cf73ed5e0a18bc45d98e0e6c1b6\": rpc error: code = NotFound desc = could not find container \"90ceaa53a40826a30748a2057fff56bcb3597cf73ed5e0a18bc45d98e0e6c1b6\": container with ID starting with 90ceaa53a40826a30748a2057fff56bcb3597cf73ed5e0a18bc45d98e0e6c1b6 not found: ID does not exist" Nov 29 07:18:35 crc kubenswrapper[4731]: I1129 07:18:35.080801 4731 scope.go:117] "RemoveContainer" containerID="9a6b60ccbba0152ec034f29ebd79fc5f35ee869bcbec1983425f40df246c463c" Nov 29 07:18:35 crc kubenswrapper[4731]: I1129 07:18:35.081089 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a6b60ccbba0152ec034f29ebd79fc5f35ee869bcbec1983425f40df246c463c"} err="failed to get container status \"9a6b60ccbba0152ec034f29ebd79fc5f35ee869bcbec1983425f40df246c463c\": rpc error: code = NotFound desc = could not find container \"9a6b60ccbba0152ec034f29ebd79fc5f35ee869bcbec1983425f40df246c463c\": container with ID starting with 9a6b60ccbba0152ec034f29ebd79fc5f35ee869bcbec1983425f40df246c463c not found: ID does not exist" Nov 29 07:18:35 crc kubenswrapper[4731]: I1129 07:18:35.081117 4731 scope.go:117] "RemoveContainer" containerID="64a3bcebacca0fb7288976faa2ee468f1496339cc6deba0ce36b29d0537d493f" Nov 29 07:18:35 crc kubenswrapper[4731]: I1129 07:18:35.081355 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64a3bcebacca0fb7288976faa2ee468f1496339cc6deba0ce36b29d0537d493f"} err="failed to get container status \"64a3bcebacca0fb7288976faa2ee468f1496339cc6deba0ce36b29d0537d493f\": rpc 
error: code = NotFound desc = could not find container \"64a3bcebacca0fb7288976faa2ee468f1496339cc6deba0ce36b29d0537d493f\": container with ID starting with 64a3bcebacca0fb7288976faa2ee468f1496339cc6deba0ce36b29d0537d493f not found: ID does not exist" Nov 29 07:18:35 crc kubenswrapper[4731]: I1129 07:18:35.081379 4731 scope.go:117] "RemoveContainer" containerID="37e2099f7c7618e48c7ca7d3a76e9e63e4af75068a72efb7b27b3565c270787f" Nov 29 07:18:35 crc kubenswrapper[4731]: I1129 07:18:35.081708 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37e2099f7c7618e48c7ca7d3a76e9e63e4af75068a72efb7b27b3565c270787f"} err="failed to get container status \"37e2099f7c7618e48c7ca7d3a76e9e63e4af75068a72efb7b27b3565c270787f\": rpc error: code = NotFound desc = could not find container \"37e2099f7c7618e48c7ca7d3a76e9e63e4af75068a72efb7b27b3565c270787f\": container with ID starting with 37e2099f7c7618e48c7ca7d3a76e9e63e4af75068a72efb7b27b3565c270787f not found: ID does not exist" Nov 29 07:18:35 crc kubenswrapper[4731]: I1129 07:18:35.081773 4731 scope.go:117] "RemoveContainer" containerID="77f2885a107dcb4bca4dd71970f28cd5c82abd26240a9b47a25bab79d7a6cfca" Nov 29 07:18:35 crc kubenswrapper[4731]: I1129 07:18:35.082102 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77f2885a107dcb4bca4dd71970f28cd5c82abd26240a9b47a25bab79d7a6cfca"} err="failed to get container status \"77f2885a107dcb4bca4dd71970f28cd5c82abd26240a9b47a25bab79d7a6cfca\": rpc error: code = NotFound desc = could not find container \"77f2885a107dcb4bca4dd71970f28cd5c82abd26240a9b47a25bab79d7a6cfca\": container with ID starting with 77f2885a107dcb4bca4dd71970f28cd5c82abd26240a9b47a25bab79d7a6cfca not found: ID does not exist" Nov 29 07:18:35 crc kubenswrapper[4731]: I1129 07:18:35.082126 4731 scope.go:117] "RemoveContainer" containerID="2c34e10091af35202da4f97804418c6232aaccd75303ec402ed3f52de19eba30" Nov 29 07:18:35 crc 
kubenswrapper[4731]: I1129 07:18:35.082484 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c34e10091af35202da4f97804418c6232aaccd75303ec402ed3f52de19eba30"} err="failed to get container status \"2c34e10091af35202da4f97804418c6232aaccd75303ec402ed3f52de19eba30\": rpc error: code = NotFound desc = could not find container \"2c34e10091af35202da4f97804418c6232aaccd75303ec402ed3f52de19eba30\": container with ID starting with 2c34e10091af35202da4f97804418c6232aaccd75303ec402ed3f52de19eba30 not found: ID does not exist" Nov 29 07:18:35 crc kubenswrapper[4731]: I1129 07:18:35.082556 4731 scope.go:117] "RemoveContainer" containerID="c7ec6b9ad02e848a0669ac486da3f2ccc8ca363136fcd40618c45f16e64388bc" Nov 29 07:18:35 crc kubenswrapper[4731]: I1129 07:18:35.082968 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7ec6b9ad02e848a0669ac486da3f2ccc8ca363136fcd40618c45f16e64388bc"} err="failed to get container status \"c7ec6b9ad02e848a0669ac486da3f2ccc8ca363136fcd40618c45f16e64388bc\": rpc error: code = NotFound desc = could not find container \"c7ec6b9ad02e848a0669ac486da3f2ccc8ca363136fcd40618c45f16e64388bc\": container with ID starting with c7ec6b9ad02e848a0669ac486da3f2ccc8ca363136fcd40618c45f16e64388bc not found: ID does not exist" Nov 29 07:18:35 crc kubenswrapper[4731]: I1129 07:18:35.082997 4731 scope.go:117] "RemoveContainer" containerID="6fd71ed13728cbf7ab7ade3184cf8b579a598046c6cccdd7796f757aa7fc1c0a" Nov 29 07:18:35 crc kubenswrapper[4731]: I1129 07:18:35.083246 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fd71ed13728cbf7ab7ade3184cf8b579a598046c6cccdd7796f757aa7fc1c0a"} err="failed to get container status \"6fd71ed13728cbf7ab7ade3184cf8b579a598046c6cccdd7796f757aa7fc1c0a\": rpc error: code = NotFound desc = could not find container \"6fd71ed13728cbf7ab7ade3184cf8b579a598046c6cccdd7796f757aa7fc1c0a\": container 
with ID starting with 6fd71ed13728cbf7ab7ade3184cf8b579a598046c6cccdd7796f757aa7fc1c0a not found: ID does not exist" Nov 29 07:18:35 crc kubenswrapper[4731]: I1129 07:18:35.083272 4731 scope.go:117] "RemoveContainer" containerID="0b769a8a935c8eb447fe825cfb1b4c91be5abbe5992798442096dd3c00ce7c39" Nov 29 07:18:35 crc kubenswrapper[4731]: I1129 07:18:35.083545 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b769a8a935c8eb447fe825cfb1b4c91be5abbe5992798442096dd3c00ce7c39"} err="failed to get container status \"0b769a8a935c8eb447fe825cfb1b4c91be5abbe5992798442096dd3c00ce7c39\": rpc error: code = NotFound desc = could not find container \"0b769a8a935c8eb447fe825cfb1b4c91be5abbe5992798442096dd3c00ce7c39\": container with ID starting with 0b769a8a935c8eb447fe825cfb1b4c91be5abbe5992798442096dd3c00ce7c39 not found: ID does not exist" Nov 29 07:18:35 crc kubenswrapper[4731]: I1129 07:18:35.083586 4731 scope.go:117] "RemoveContainer" containerID="7e89c5211fc1a0b3e3a69a2b868401a20a643f1f5797275d756d27ee967e8691" Nov 29 07:18:35 crc kubenswrapper[4731]: I1129 07:18:35.083891 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e89c5211fc1a0b3e3a69a2b868401a20a643f1f5797275d756d27ee967e8691"} err="failed to get container status \"7e89c5211fc1a0b3e3a69a2b868401a20a643f1f5797275d756d27ee967e8691\": rpc error: code = NotFound desc = could not find container \"7e89c5211fc1a0b3e3a69a2b868401a20a643f1f5797275d756d27ee967e8691\": container with ID starting with 7e89c5211fc1a0b3e3a69a2b868401a20a643f1f5797275d756d27ee967e8691 not found: ID does not exist" Nov 29 07:18:35 crc kubenswrapper[4731]: I1129 07:18:35.083922 4731 scope.go:117] "RemoveContainer" containerID="90ceaa53a40826a30748a2057fff56bcb3597cf73ed5e0a18bc45d98e0e6c1b6" Nov 29 07:18:35 crc kubenswrapper[4731]: I1129 07:18:35.084294 4731 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"90ceaa53a40826a30748a2057fff56bcb3597cf73ed5e0a18bc45d98e0e6c1b6"} err="failed to get container status \"90ceaa53a40826a30748a2057fff56bcb3597cf73ed5e0a18bc45d98e0e6c1b6\": rpc error: code = NotFound desc = could not find container \"90ceaa53a40826a30748a2057fff56bcb3597cf73ed5e0a18bc45d98e0e6c1b6\": container with ID starting with 90ceaa53a40826a30748a2057fff56bcb3597cf73ed5e0a18bc45d98e0e6c1b6 not found: ID does not exist" Nov 29 07:18:35 crc kubenswrapper[4731]: I1129 07:18:35.084318 4731 scope.go:117] "RemoveContainer" containerID="9a6b60ccbba0152ec034f29ebd79fc5f35ee869bcbec1983425f40df246c463c" Nov 29 07:18:35 crc kubenswrapper[4731]: I1129 07:18:35.084726 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a6b60ccbba0152ec034f29ebd79fc5f35ee869bcbec1983425f40df246c463c"} err="failed to get container status \"9a6b60ccbba0152ec034f29ebd79fc5f35ee869bcbec1983425f40df246c463c\": rpc error: code = NotFound desc = could not find container \"9a6b60ccbba0152ec034f29ebd79fc5f35ee869bcbec1983425f40df246c463c\": container with ID starting with 9a6b60ccbba0152ec034f29ebd79fc5f35ee869bcbec1983425f40df246c463c not found: ID does not exist" Nov 29 07:18:35 crc kubenswrapper[4731]: I1129 07:18:35.084751 4731 scope.go:117] "RemoveContainer" containerID="64a3bcebacca0fb7288976faa2ee468f1496339cc6deba0ce36b29d0537d493f" Nov 29 07:18:35 crc kubenswrapper[4731]: I1129 07:18:35.085107 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64a3bcebacca0fb7288976faa2ee468f1496339cc6deba0ce36b29d0537d493f"} err="failed to get container status \"64a3bcebacca0fb7288976faa2ee468f1496339cc6deba0ce36b29d0537d493f\": rpc error: code = NotFound desc = could not find container \"64a3bcebacca0fb7288976faa2ee468f1496339cc6deba0ce36b29d0537d493f\": container with ID starting with 64a3bcebacca0fb7288976faa2ee468f1496339cc6deba0ce36b29d0537d493f not found: ID does not 
exist" Nov 29 07:18:35 crc kubenswrapper[4731]: I1129 07:18:35.085129 4731 scope.go:117] "RemoveContainer" containerID="37e2099f7c7618e48c7ca7d3a76e9e63e4af75068a72efb7b27b3565c270787f" Nov 29 07:18:35 crc kubenswrapper[4731]: I1129 07:18:35.085410 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37e2099f7c7618e48c7ca7d3a76e9e63e4af75068a72efb7b27b3565c270787f"} err="failed to get container status \"37e2099f7c7618e48c7ca7d3a76e9e63e4af75068a72efb7b27b3565c270787f\": rpc error: code = NotFound desc = could not find container \"37e2099f7c7618e48c7ca7d3a76e9e63e4af75068a72efb7b27b3565c270787f\": container with ID starting with 37e2099f7c7618e48c7ca7d3a76e9e63e4af75068a72efb7b27b3565c270787f not found: ID does not exist" Nov 29 07:18:35 crc kubenswrapper[4731]: I1129 07:18:35.085434 4731 scope.go:117] "RemoveContainer" containerID="77f2885a107dcb4bca4dd71970f28cd5c82abd26240a9b47a25bab79d7a6cfca" Nov 29 07:18:35 crc kubenswrapper[4731]: I1129 07:18:35.085819 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77f2885a107dcb4bca4dd71970f28cd5c82abd26240a9b47a25bab79d7a6cfca"} err="failed to get container status \"77f2885a107dcb4bca4dd71970f28cd5c82abd26240a9b47a25bab79d7a6cfca\": rpc error: code = NotFound desc = could not find container \"77f2885a107dcb4bca4dd71970f28cd5c82abd26240a9b47a25bab79d7a6cfca\": container with ID starting with 77f2885a107dcb4bca4dd71970f28cd5c82abd26240a9b47a25bab79d7a6cfca not found: ID does not exist" Nov 29 07:18:35 crc kubenswrapper[4731]: I1129 07:18:35.085841 4731 scope.go:117] "RemoveContainer" containerID="2c34e10091af35202da4f97804418c6232aaccd75303ec402ed3f52de19eba30" Nov 29 07:18:35 crc kubenswrapper[4731]: I1129 07:18:35.086149 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c34e10091af35202da4f97804418c6232aaccd75303ec402ed3f52de19eba30"} err="failed to get container status 
\"2c34e10091af35202da4f97804418c6232aaccd75303ec402ed3f52de19eba30\": rpc error: code = NotFound desc = could not find container \"2c34e10091af35202da4f97804418c6232aaccd75303ec402ed3f52de19eba30\": container with ID starting with 2c34e10091af35202da4f97804418c6232aaccd75303ec402ed3f52de19eba30 not found: ID does not exist" Nov 29 07:18:35 crc kubenswrapper[4731]: I1129 07:18:35.086169 4731 scope.go:117] "RemoveContainer" containerID="c7ec6b9ad02e848a0669ac486da3f2ccc8ca363136fcd40618c45f16e64388bc" Nov 29 07:18:35 crc kubenswrapper[4731]: I1129 07:18:35.086400 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7ec6b9ad02e848a0669ac486da3f2ccc8ca363136fcd40618c45f16e64388bc"} err="failed to get container status \"c7ec6b9ad02e848a0669ac486da3f2ccc8ca363136fcd40618c45f16e64388bc\": rpc error: code = NotFound desc = could not find container \"c7ec6b9ad02e848a0669ac486da3f2ccc8ca363136fcd40618c45f16e64388bc\": container with ID starting with c7ec6b9ad02e848a0669ac486da3f2ccc8ca363136fcd40618c45f16e64388bc not found: ID does not exist" Nov 29 07:18:35 crc kubenswrapper[4731]: I1129 07:18:35.086422 4731 scope.go:117] "RemoveContainer" containerID="6fd71ed13728cbf7ab7ade3184cf8b579a598046c6cccdd7796f757aa7fc1c0a" Nov 29 07:18:35 crc kubenswrapper[4731]: I1129 07:18:35.087031 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fd71ed13728cbf7ab7ade3184cf8b579a598046c6cccdd7796f757aa7fc1c0a"} err="failed to get container status \"6fd71ed13728cbf7ab7ade3184cf8b579a598046c6cccdd7796f757aa7fc1c0a\": rpc error: code = NotFound desc = could not find container \"6fd71ed13728cbf7ab7ade3184cf8b579a598046c6cccdd7796f757aa7fc1c0a\": container with ID starting with 6fd71ed13728cbf7ab7ade3184cf8b579a598046c6cccdd7796f757aa7fc1c0a not found: ID does not exist" Nov 29 07:18:35 crc kubenswrapper[4731]: I1129 07:18:35.087062 4731 scope.go:117] "RemoveContainer" 
containerID="0b769a8a935c8eb447fe825cfb1b4c91be5abbe5992798442096dd3c00ce7c39" Nov 29 07:18:35 crc kubenswrapper[4731]: I1129 07:18:35.087367 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b769a8a935c8eb447fe825cfb1b4c91be5abbe5992798442096dd3c00ce7c39"} err="failed to get container status \"0b769a8a935c8eb447fe825cfb1b4c91be5abbe5992798442096dd3c00ce7c39\": rpc error: code = NotFound desc = could not find container \"0b769a8a935c8eb447fe825cfb1b4c91be5abbe5992798442096dd3c00ce7c39\": container with ID starting with 0b769a8a935c8eb447fe825cfb1b4c91be5abbe5992798442096dd3c00ce7c39 not found: ID does not exist" Nov 29 07:18:35 crc kubenswrapper[4731]: I1129 07:18:35.087393 4731 scope.go:117] "RemoveContainer" containerID="7e89c5211fc1a0b3e3a69a2b868401a20a643f1f5797275d756d27ee967e8691" Nov 29 07:18:35 crc kubenswrapper[4731]: I1129 07:18:35.087688 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e89c5211fc1a0b3e3a69a2b868401a20a643f1f5797275d756d27ee967e8691"} err="failed to get container status \"7e89c5211fc1a0b3e3a69a2b868401a20a643f1f5797275d756d27ee967e8691\": rpc error: code = NotFound desc = could not find container \"7e89c5211fc1a0b3e3a69a2b868401a20a643f1f5797275d756d27ee967e8691\": container with ID starting with 7e89c5211fc1a0b3e3a69a2b868401a20a643f1f5797275d756d27ee967e8691 not found: ID does not exist" Nov 29 07:18:35 crc kubenswrapper[4731]: I1129 07:18:35.087714 4731 scope.go:117] "RemoveContainer" containerID="90ceaa53a40826a30748a2057fff56bcb3597cf73ed5e0a18bc45d98e0e6c1b6" Nov 29 07:18:35 crc kubenswrapper[4731]: I1129 07:18:35.088110 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90ceaa53a40826a30748a2057fff56bcb3597cf73ed5e0a18bc45d98e0e6c1b6"} err="failed to get container status \"90ceaa53a40826a30748a2057fff56bcb3597cf73ed5e0a18bc45d98e0e6c1b6\": rpc error: code = NotFound desc = could 
not find container \"90ceaa53a40826a30748a2057fff56bcb3597cf73ed5e0a18bc45d98e0e6c1b6\": container with ID starting with 90ceaa53a40826a30748a2057fff56bcb3597cf73ed5e0a18bc45d98e0e6c1b6 not found: ID does not exist" Nov 29 07:18:35 crc kubenswrapper[4731]: I1129 07:18:35.088136 4731 scope.go:117] "RemoveContainer" containerID="9a6b60ccbba0152ec034f29ebd79fc5f35ee869bcbec1983425f40df246c463c" Nov 29 07:18:35 crc kubenswrapper[4731]: I1129 07:18:35.088513 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a6b60ccbba0152ec034f29ebd79fc5f35ee869bcbec1983425f40df246c463c"} err="failed to get container status \"9a6b60ccbba0152ec034f29ebd79fc5f35ee869bcbec1983425f40df246c463c\": rpc error: code = NotFound desc = could not find container \"9a6b60ccbba0152ec034f29ebd79fc5f35ee869bcbec1983425f40df246c463c\": container with ID starting with 9a6b60ccbba0152ec034f29ebd79fc5f35ee869bcbec1983425f40df246c463c not found: ID does not exist" Nov 29 07:18:35 crc kubenswrapper[4731]: I1129 07:18:35.088551 4731 scope.go:117] "RemoveContainer" containerID="64a3bcebacca0fb7288976faa2ee468f1496339cc6deba0ce36b29d0537d493f" Nov 29 07:18:35 crc kubenswrapper[4731]: I1129 07:18:35.088816 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64a3bcebacca0fb7288976faa2ee468f1496339cc6deba0ce36b29d0537d493f"} err="failed to get container status \"64a3bcebacca0fb7288976faa2ee468f1496339cc6deba0ce36b29d0537d493f\": rpc error: code = NotFound desc = could not find container \"64a3bcebacca0fb7288976faa2ee468f1496339cc6deba0ce36b29d0537d493f\": container with ID starting with 64a3bcebacca0fb7288976faa2ee468f1496339cc6deba0ce36b29d0537d493f not found: ID does not exist" Nov 29 07:18:35 crc kubenswrapper[4731]: I1129 07:18:35.088838 4731 scope.go:117] "RemoveContainer" containerID="37e2099f7c7618e48c7ca7d3a76e9e63e4af75068a72efb7b27b3565c270787f" Nov 29 07:18:35 crc kubenswrapper[4731]: I1129 
07:18:35.089112 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37e2099f7c7618e48c7ca7d3a76e9e63e4af75068a72efb7b27b3565c270787f"} err="failed to get container status \"37e2099f7c7618e48c7ca7d3a76e9e63e4af75068a72efb7b27b3565c270787f\": rpc error: code = NotFound desc = could not find container \"37e2099f7c7618e48c7ca7d3a76e9e63e4af75068a72efb7b27b3565c270787f\": container with ID starting with 37e2099f7c7618e48c7ca7d3a76e9e63e4af75068a72efb7b27b3565c270787f not found: ID does not exist" Nov 29 07:18:35 crc kubenswrapper[4731]: I1129 07:18:35.089139 4731 scope.go:117] "RemoveContainer" containerID="77f2885a107dcb4bca4dd71970f28cd5c82abd26240a9b47a25bab79d7a6cfca" Nov 29 07:18:35 crc kubenswrapper[4731]: I1129 07:18:35.089448 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77f2885a107dcb4bca4dd71970f28cd5c82abd26240a9b47a25bab79d7a6cfca"} err="failed to get container status \"77f2885a107dcb4bca4dd71970f28cd5c82abd26240a9b47a25bab79d7a6cfca\": rpc error: code = NotFound desc = could not find container \"77f2885a107dcb4bca4dd71970f28cd5c82abd26240a9b47a25bab79d7a6cfca\": container with ID starting with 77f2885a107dcb4bca4dd71970f28cd5c82abd26240a9b47a25bab79d7a6cfca not found: ID does not exist" Nov 29 07:18:35 crc kubenswrapper[4731]: I1129 07:18:35.089488 4731 scope.go:117] "RemoveContainer" containerID="2c34e10091af35202da4f97804418c6232aaccd75303ec402ed3f52de19eba30" Nov 29 07:18:35 crc kubenswrapper[4731]: I1129 07:18:35.089904 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c34e10091af35202da4f97804418c6232aaccd75303ec402ed3f52de19eba30"} err="failed to get container status \"2c34e10091af35202da4f97804418c6232aaccd75303ec402ed3f52de19eba30\": rpc error: code = NotFound desc = could not find container \"2c34e10091af35202da4f97804418c6232aaccd75303ec402ed3f52de19eba30\": container with ID starting with 
2c34e10091af35202da4f97804418c6232aaccd75303ec402ed3f52de19eba30 not found: ID does not exist" Nov 29 07:18:35 crc kubenswrapper[4731]: I1129 07:18:35.089928 4731 scope.go:117] "RemoveContainer" containerID="c7ec6b9ad02e848a0669ac486da3f2ccc8ca363136fcd40618c45f16e64388bc" Nov 29 07:18:35 crc kubenswrapper[4731]: I1129 07:18:35.090202 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7ec6b9ad02e848a0669ac486da3f2ccc8ca363136fcd40618c45f16e64388bc"} err="failed to get container status \"c7ec6b9ad02e848a0669ac486da3f2ccc8ca363136fcd40618c45f16e64388bc\": rpc error: code = NotFound desc = could not find container \"c7ec6b9ad02e848a0669ac486da3f2ccc8ca363136fcd40618c45f16e64388bc\": container with ID starting with c7ec6b9ad02e848a0669ac486da3f2ccc8ca363136fcd40618c45f16e64388bc not found: ID does not exist" Nov 29 07:18:35 crc kubenswrapper[4731]: I1129 07:18:35.090234 4731 scope.go:117] "RemoveContainer" containerID="6fd71ed13728cbf7ab7ade3184cf8b579a598046c6cccdd7796f757aa7fc1c0a" Nov 29 07:18:35 crc kubenswrapper[4731]: I1129 07:18:35.090627 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fd71ed13728cbf7ab7ade3184cf8b579a598046c6cccdd7796f757aa7fc1c0a"} err="failed to get container status \"6fd71ed13728cbf7ab7ade3184cf8b579a598046c6cccdd7796f757aa7fc1c0a\": rpc error: code = NotFound desc = could not find container \"6fd71ed13728cbf7ab7ade3184cf8b579a598046c6cccdd7796f757aa7fc1c0a\": container with ID starting with 6fd71ed13728cbf7ab7ade3184cf8b579a598046c6cccdd7796f757aa7fc1c0a not found: ID does not exist" Nov 29 07:18:35 crc kubenswrapper[4731]: I1129 07:18:35.090673 4731 scope.go:117] "RemoveContainer" containerID="0b769a8a935c8eb447fe825cfb1b4c91be5abbe5992798442096dd3c00ce7c39" Nov 29 07:18:35 crc kubenswrapper[4731]: I1129 07:18:35.091124 4731 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"0b769a8a935c8eb447fe825cfb1b4c91be5abbe5992798442096dd3c00ce7c39"} err="failed to get container status \"0b769a8a935c8eb447fe825cfb1b4c91be5abbe5992798442096dd3c00ce7c39\": rpc error: code = NotFound desc = could not find container \"0b769a8a935c8eb447fe825cfb1b4c91be5abbe5992798442096dd3c00ce7c39\": container with ID starting with 0b769a8a935c8eb447fe825cfb1b4c91be5abbe5992798442096dd3c00ce7c39 not found: ID does not exist" Nov 29 07:18:35 crc kubenswrapper[4731]: I1129 07:18:35.791069 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5rsbt_5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8/kube-multus/2.log" Nov 29 07:18:35 crc kubenswrapper[4731]: I1129 07:18:35.791175 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-5rsbt" event={"ID":"5b1c5c4b-163c-4f54-bf6a-9de0e7619fb8","Type":"ContainerStarted","Data":"e091974b43a4af3ccae663eecc6c0ebfbaf391dafc23c482435984fa1f1c498e"} Nov 29 07:18:35 crc kubenswrapper[4731]: I1129 07:18:35.794235 4731 generic.go:334] "Generic (PLEG): container finished" podID="c8b2c3db-a1a2-4a90-961d-d3a18cffe67c" containerID="7f825e0e22eb0de400f8c861acba21e57802bac10845565b049cdcd077e1f3d4" exitCode=0 Nov 29 07:18:35 crc kubenswrapper[4731]: I1129 07:18:35.794272 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t8ngc" event={"ID":"c8b2c3db-a1a2-4a90-961d-d3a18cffe67c","Type":"ContainerDied","Data":"7f825e0e22eb0de400f8c861acba21e57802bac10845565b049cdcd077e1f3d4"} Nov 29 07:18:35 crc kubenswrapper[4731]: I1129 07:18:35.825304 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d4585c4-ac4a-4268-b25e-47509c17cfe2" path="/var/lib/kubelet/pods/7d4585c4-ac4a-4268-b25e-47509c17cfe2/volumes" Nov 29 07:18:36 crc kubenswrapper[4731]: I1129 07:18:36.802336 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t8ngc" 
event={"ID":"c8b2c3db-a1a2-4a90-961d-d3a18cffe67c","Type":"ContainerStarted","Data":"e5db747309dc5bd204fbad664e28a27de317c4ae0cf7e1b9eae9b75f9426fbd9"} Nov 29 07:18:36 crc kubenswrapper[4731]: I1129 07:18:36.802715 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t8ngc" event={"ID":"c8b2c3db-a1a2-4a90-961d-d3a18cffe67c","Type":"ContainerStarted","Data":"6c2545c570e3e303b1f911327486a7490fa3c6797b161df6fb7922795f7bd4bb"} Nov 29 07:18:37 crc kubenswrapper[4731]: I1129 07:18:37.818735 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t8ngc" event={"ID":"c8b2c3db-a1a2-4a90-961d-d3a18cffe67c","Type":"ContainerStarted","Data":"0927fe60fdac1c9be47eb6fb1c64684716ebe73ea971df77e7e250731cf0138a"} Nov 29 07:18:37 crc kubenswrapper[4731]: I1129 07:18:37.819073 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t8ngc" event={"ID":"c8b2c3db-a1a2-4a90-961d-d3a18cffe67c","Type":"ContainerStarted","Data":"2cf8c3bb4dcb412661e1edf90d52a0324ce35cdc02c066556b1873a7a0ed98be"} Nov 29 07:18:37 crc kubenswrapper[4731]: I1129 07:18:37.819083 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t8ngc" event={"ID":"c8b2c3db-a1a2-4a90-961d-d3a18cffe67c","Type":"ContainerStarted","Data":"e0b582b45d4e3dca28c8250e4ec8816c9157c8b9e28a53ed4517377dc1ba25c9"} Nov 29 07:18:37 crc kubenswrapper[4731]: I1129 07:18:37.819092 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t8ngc" event={"ID":"c8b2c3db-a1a2-4a90-961d-d3a18cffe67c","Type":"ContainerStarted","Data":"cd443fc729c1490550696eb3558db04f470501d42e9526718c99164a530458fd"} Nov 29 07:18:40 crc kubenswrapper[4731]: I1129 07:18:40.850539 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t8ngc" 
event={"ID":"c8b2c3db-a1a2-4a90-961d-d3a18cffe67c","Type":"ContainerStarted","Data":"2e065f1b54c69da4ac12904f054edeb8a2f591dcd0590d1d66fbd830ab5ead5f"} Nov 29 07:18:43 crc kubenswrapper[4731]: I1129 07:18:43.874892 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t8ngc" event={"ID":"c8b2c3db-a1a2-4a90-961d-d3a18cffe67c","Type":"ContainerStarted","Data":"c766882167f7d7932e89594c3a37b5efb694747ccf6ba27ee55560ed88771055"} Nov 29 07:18:43 crc kubenswrapper[4731]: I1129 07:18:43.876268 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-t8ngc" Nov 29 07:18:43 crc kubenswrapper[4731]: I1129 07:18:43.876605 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-t8ngc" Nov 29 07:18:43 crc kubenswrapper[4731]: I1129 07:18:43.876695 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-t8ngc" Nov 29 07:18:43 crc kubenswrapper[4731]: I1129 07:18:43.908713 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-t8ngc" Nov 29 07:18:43 crc kubenswrapper[4731]: I1129 07:18:43.913937 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-t8ngc" Nov 29 07:18:43 crc kubenswrapper[4731]: I1129 07:18:43.916065 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-t8ngc" podStartSLOduration=9.916037607 podStartE2EDuration="9.916037607s" podCreationTimestamp="2025-11-29 07:18:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:18:43.912932957 +0000 UTC m=+762.803294090" watchObservedRunningTime="2025-11-29 07:18:43.916037607 +0000 UTC m=+762.806398710" Nov 29 07:18:52 crc 
kubenswrapper[4731]: I1129 07:18:52.786177 4731 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 29 07:19:03 crc kubenswrapper[4731]: I1129 07:19:03.002818 4731 patch_prober.go:28] interesting pod/machine-config-daemon-rscr8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:19:03 crc kubenswrapper[4731]: I1129 07:19:03.003671 4731 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:19:03 crc kubenswrapper[4731]: I1129 07:19:03.003739 4731 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" Nov 29 07:19:03 crc kubenswrapper[4731]: I1129 07:19:03.004592 4731 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e832e039d354d93ddba7480e0f594057afe8bf56de6979a0d3b6a9d2c9d3121e"} pod="openshift-machine-config-operator/machine-config-daemon-rscr8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 29 07:19:03 crc kubenswrapper[4731]: I1129 07:19:03.004675 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" containerName="machine-config-daemon" containerID="cri-o://e832e039d354d93ddba7480e0f594057afe8bf56de6979a0d3b6a9d2c9d3121e" gracePeriod=600 Nov 29 07:19:03 crc kubenswrapper[4731]: 
I1129 07:19:03.994549 4731 generic.go:334] "Generic (PLEG): container finished" podID="2302dbb7-38db-4752-a5d0-2d055da3aec3" containerID="e832e039d354d93ddba7480e0f594057afe8bf56de6979a0d3b6a9d2c9d3121e" exitCode=0 Nov 29 07:19:03 crc kubenswrapper[4731]: I1129 07:19:03.994597 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" event={"ID":"2302dbb7-38db-4752-a5d0-2d055da3aec3","Type":"ContainerDied","Data":"e832e039d354d93ddba7480e0f594057afe8bf56de6979a0d3b6a9d2c9d3121e"} Nov 29 07:19:03 crc kubenswrapper[4731]: I1129 07:19:03.995224 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" event={"ID":"2302dbb7-38db-4752-a5d0-2d055da3aec3","Type":"ContainerStarted","Data":"f623b0b449aeef3aba408365a10d9b3a882a155e1db4e4fae2a31dd92abc20ca"} Nov 29 07:19:03 crc kubenswrapper[4731]: I1129 07:19:03.995253 4731 scope.go:117] "RemoveContainer" containerID="fabc326abb67dfad70071a4d4d3b7bda47a1d8464435cc73fe9ab0fd38194477" Nov 29 07:19:04 crc kubenswrapper[4731]: I1129 07:19:04.681849 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-t8ngc" Nov 29 07:19:13 crc kubenswrapper[4731]: I1129 07:19:13.167013 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fwnq7b"] Nov 29 07:19:13 crc kubenswrapper[4731]: I1129 07:19:13.168942 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fwnq7b" Nov 29 07:19:13 crc kubenswrapper[4731]: I1129 07:19:13.171544 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 29 07:19:13 crc kubenswrapper[4731]: I1129 07:19:13.186542 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fwnq7b"] Nov 29 07:19:13 crc kubenswrapper[4731]: I1129 07:19:13.305700 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e28aeb94-691b-4374-8a64-c8ea4831a139-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fwnq7b\" (UID: \"e28aeb94-691b-4374-8a64-c8ea4831a139\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fwnq7b" Nov 29 07:19:13 crc kubenswrapper[4731]: I1129 07:19:13.305764 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wsz5\" (UniqueName: \"kubernetes.io/projected/e28aeb94-691b-4374-8a64-c8ea4831a139-kube-api-access-9wsz5\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fwnq7b\" (UID: \"e28aeb94-691b-4374-8a64-c8ea4831a139\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fwnq7b" Nov 29 07:19:13 crc kubenswrapper[4731]: I1129 07:19:13.305976 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e28aeb94-691b-4374-8a64-c8ea4831a139-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fwnq7b\" (UID: \"e28aeb94-691b-4374-8a64-c8ea4831a139\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fwnq7b" Nov 29 07:19:13 crc kubenswrapper[4731]: 
I1129 07:19:13.407278 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e28aeb94-691b-4374-8a64-c8ea4831a139-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fwnq7b\" (UID: \"e28aeb94-691b-4374-8a64-c8ea4831a139\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fwnq7b" Nov 29 07:19:13 crc kubenswrapper[4731]: I1129 07:19:13.407345 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wsz5\" (UniqueName: \"kubernetes.io/projected/e28aeb94-691b-4374-8a64-c8ea4831a139-kube-api-access-9wsz5\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fwnq7b\" (UID: \"e28aeb94-691b-4374-8a64-c8ea4831a139\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fwnq7b" Nov 29 07:19:13 crc kubenswrapper[4731]: I1129 07:19:13.407417 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e28aeb94-691b-4374-8a64-c8ea4831a139-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fwnq7b\" (UID: \"e28aeb94-691b-4374-8a64-c8ea4831a139\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fwnq7b" Nov 29 07:19:13 crc kubenswrapper[4731]: I1129 07:19:13.407869 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e28aeb94-691b-4374-8a64-c8ea4831a139-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fwnq7b\" (UID: \"e28aeb94-691b-4374-8a64-c8ea4831a139\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fwnq7b" Nov 29 07:19:13 crc kubenswrapper[4731]: I1129 07:19:13.407952 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/e28aeb94-691b-4374-8a64-c8ea4831a139-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fwnq7b\" (UID: \"e28aeb94-691b-4374-8a64-c8ea4831a139\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fwnq7b" Nov 29 07:19:13 crc kubenswrapper[4731]: I1129 07:19:13.436132 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wsz5\" (UniqueName: \"kubernetes.io/projected/e28aeb94-691b-4374-8a64-c8ea4831a139-kube-api-access-9wsz5\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fwnq7b\" (UID: \"e28aeb94-691b-4374-8a64-c8ea4831a139\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fwnq7b" Nov 29 07:19:13 crc kubenswrapper[4731]: I1129 07:19:13.488391 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fwnq7b" Nov 29 07:19:13 crc kubenswrapper[4731]: I1129 07:19:13.902597 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fwnq7b"] Nov 29 07:19:13 crc kubenswrapper[4731]: W1129 07:19:13.915981 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode28aeb94_691b_4374_8a64_c8ea4831a139.slice/crio-8ac547e460f7324b7d5f6a4c3460aa8bf951a3c853e4a08323fe75abcd8ecc45 WatchSource:0}: Error finding container 8ac547e460f7324b7d5f6a4c3460aa8bf951a3c853e4a08323fe75abcd8ecc45: Status 404 returned error can't find the container with id 8ac547e460f7324b7d5f6a4c3460aa8bf951a3c853e4a08323fe75abcd8ecc45 Nov 29 07:19:14 crc kubenswrapper[4731]: I1129 07:19:14.070467 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fwnq7b" 
event={"ID":"e28aeb94-691b-4374-8a64-c8ea4831a139","Type":"ContainerStarted","Data":"8ac547e460f7324b7d5f6a4c3460aa8bf951a3c853e4a08323fe75abcd8ecc45"} Nov 29 07:19:15 crc kubenswrapper[4731]: I1129 07:19:15.078803 4731 generic.go:334] "Generic (PLEG): container finished" podID="e28aeb94-691b-4374-8a64-c8ea4831a139" containerID="def3ff48d56f34576c18d5cc7563f0de74aefa7042f774e5c10c272a154f0437" exitCode=0 Nov 29 07:19:15 crc kubenswrapper[4731]: I1129 07:19:15.078918 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fwnq7b" event={"ID":"e28aeb94-691b-4374-8a64-c8ea4831a139","Type":"ContainerDied","Data":"def3ff48d56f34576c18d5cc7563f0de74aefa7042f774e5c10c272a154f0437"} Nov 29 07:19:15 crc kubenswrapper[4731]: I1129 07:19:15.480235 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-kj7h7"] Nov 29 07:19:15 crc kubenswrapper[4731]: I1129 07:19:15.481677 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kj7h7" Nov 29 07:19:15 crc kubenswrapper[4731]: I1129 07:19:15.487891 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kj7h7"] Nov 29 07:19:15 crc kubenswrapper[4731]: I1129 07:19:15.536827 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6bdf950b-9375-4f11-97e5-949a1aa230e1-utilities\") pod \"redhat-operators-kj7h7\" (UID: \"6bdf950b-9375-4f11-97e5-949a1aa230e1\") " pod="openshift-marketplace/redhat-operators-kj7h7" Nov 29 07:19:15 crc kubenswrapper[4731]: I1129 07:19:15.536913 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tklhp\" (UniqueName: \"kubernetes.io/projected/6bdf950b-9375-4f11-97e5-949a1aa230e1-kube-api-access-tklhp\") pod \"redhat-operators-kj7h7\" (UID: \"6bdf950b-9375-4f11-97e5-949a1aa230e1\") " pod="openshift-marketplace/redhat-operators-kj7h7" Nov 29 07:19:15 crc kubenswrapper[4731]: I1129 07:19:15.536952 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6bdf950b-9375-4f11-97e5-949a1aa230e1-catalog-content\") pod \"redhat-operators-kj7h7\" (UID: \"6bdf950b-9375-4f11-97e5-949a1aa230e1\") " pod="openshift-marketplace/redhat-operators-kj7h7" Nov 29 07:19:15 crc kubenswrapper[4731]: I1129 07:19:15.637987 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tklhp\" (UniqueName: \"kubernetes.io/projected/6bdf950b-9375-4f11-97e5-949a1aa230e1-kube-api-access-tklhp\") pod \"redhat-operators-kj7h7\" (UID: \"6bdf950b-9375-4f11-97e5-949a1aa230e1\") " pod="openshift-marketplace/redhat-operators-kj7h7" Nov 29 07:19:15 crc kubenswrapper[4731]: I1129 07:19:15.638062 4731 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6bdf950b-9375-4f11-97e5-949a1aa230e1-catalog-content\") pod \"redhat-operators-kj7h7\" (UID: \"6bdf950b-9375-4f11-97e5-949a1aa230e1\") " pod="openshift-marketplace/redhat-operators-kj7h7" Nov 29 07:19:15 crc kubenswrapper[4731]: I1129 07:19:15.638140 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6bdf950b-9375-4f11-97e5-949a1aa230e1-utilities\") pod \"redhat-operators-kj7h7\" (UID: \"6bdf950b-9375-4f11-97e5-949a1aa230e1\") " pod="openshift-marketplace/redhat-operators-kj7h7" Nov 29 07:19:15 crc kubenswrapper[4731]: I1129 07:19:15.639371 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6bdf950b-9375-4f11-97e5-949a1aa230e1-catalog-content\") pod \"redhat-operators-kj7h7\" (UID: \"6bdf950b-9375-4f11-97e5-949a1aa230e1\") " pod="openshift-marketplace/redhat-operators-kj7h7" Nov 29 07:19:15 crc kubenswrapper[4731]: I1129 07:19:15.639631 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6bdf950b-9375-4f11-97e5-949a1aa230e1-utilities\") pod \"redhat-operators-kj7h7\" (UID: \"6bdf950b-9375-4f11-97e5-949a1aa230e1\") " pod="openshift-marketplace/redhat-operators-kj7h7" Nov 29 07:19:15 crc kubenswrapper[4731]: I1129 07:19:15.661733 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tklhp\" (UniqueName: \"kubernetes.io/projected/6bdf950b-9375-4f11-97e5-949a1aa230e1-kube-api-access-tklhp\") pod \"redhat-operators-kj7h7\" (UID: \"6bdf950b-9375-4f11-97e5-949a1aa230e1\") " pod="openshift-marketplace/redhat-operators-kj7h7" Nov 29 07:19:15 crc kubenswrapper[4731]: I1129 07:19:15.807385 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kj7h7" Nov 29 07:19:16 crc kubenswrapper[4731]: I1129 07:19:16.018675 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kj7h7"] Nov 29 07:19:16 crc kubenswrapper[4731]: W1129 07:19:16.027582 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6bdf950b_9375_4f11_97e5_949a1aa230e1.slice/crio-8bd01f007fb25c5bc5b95741f7768a3ede3ebcd6b4e257aec5d89adff227d352 WatchSource:0}: Error finding container 8bd01f007fb25c5bc5b95741f7768a3ede3ebcd6b4e257aec5d89adff227d352: Status 404 returned error can't find the container with id 8bd01f007fb25c5bc5b95741f7768a3ede3ebcd6b4e257aec5d89adff227d352 Nov 29 07:19:16 crc kubenswrapper[4731]: I1129 07:19:16.091042 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kj7h7" event={"ID":"6bdf950b-9375-4f11-97e5-949a1aa230e1","Type":"ContainerStarted","Data":"8bd01f007fb25c5bc5b95741f7768a3ede3ebcd6b4e257aec5d89adff227d352"} Nov 29 07:19:17 crc kubenswrapper[4731]: I1129 07:19:17.099396 4731 generic.go:334] "Generic (PLEG): container finished" podID="e28aeb94-691b-4374-8a64-c8ea4831a139" containerID="4f1fe0d9dab3accedf1b1ed050954fdf34dc2474b6cd65aa23a44aaaa5778531" exitCode=0 Nov 29 07:19:17 crc kubenswrapper[4731]: I1129 07:19:17.099520 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fwnq7b" event={"ID":"e28aeb94-691b-4374-8a64-c8ea4831a139","Type":"ContainerDied","Data":"4f1fe0d9dab3accedf1b1ed050954fdf34dc2474b6cd65aa23a44aaaa5778531"} Nov 29 07:19:17 crc kubenswrapper[4731]: I1129 07:19:17.102053 4731 generic.go:334] "Generic (PLEG): container finished" podID="6bdf950b-9375-4f11-97e5-949a1aa230e1" containerID="4a7718a565c7251a53f53e7cab697f567128e0b1988cda8bea8422f1a01f494f" exitCode=0 Nov 29 07:19:17 crc 
kubenswrapper[4731]: I1129 07:19:17.102102 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kj7h7" event={"ID":"6bdf950b-9375-4f11-97e5-949a1aa230e1","Type":"ContainerDied","Data":"4a7718a565c7251a53f53e7cab697f567128e0b1988cda8bea8422f1a01f494f"} Nov 29 07:19:18 crc kubenswrapper[4731]: I1129 07:19:18.116041 4731 generic.go:334] "Generic (PLEG): container finished" podID="e28aeb94-691b-4374-8a64-c8ea4831a139" containerID="6f57e0925eb7840d09c4676597543b38111033e811b47ff1cb4aed30eb17fc87" exitCode=0 Nov 29 07:19:18 crc kubenswrapper[4731]: I1129 07:19:18.116114 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fwnq7b" event={"ID":"e28aeb94-691b-4374-8a64-c8ea4831a139","Type":"ContainerDied","Data":"6f57e0925eb7840d09c4676597543b38111033e811b47ff1cb4aed30eb17fc87"} Nov 29 07:19:18 crc kubenswrapper[4731]: I1129 07:19:18.120728 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kj7h7" event={"ID":"6bdf950b-9375-4f11-97e5-949a1aa230e1","Type":"ContainerStarted","Data":"23fbbac89783fe95c525f3a9166f70443f6e187397c6122bb19fa461e9411e8c"} Nov 29 07:19:19 crc kubenswrapper[4731]: I1129 07:19:19.129534 4731 generic.go:334] "Generic (PLEG): container finished" podID="6bdf950b-9375-4f11-97e5-949a1aa230e1" containerID="23fbbac89783fe95c525f3a9166f70443f6e187397c6122bb19fa461e9411e8c" exitCode=0 Nov 29 07:19:19 crc kubenswrapper[4731]: I1129 07:19:19.129657 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kj7h7" event={"ID":"6bdf950b-9375-4f11-97e5-949a1aa230e1","Type":"ContainerDied","Data":"23fbbac89783fe95c525f3a9166f70443f6e187397c6122bb19fa461e9411e8c"} Nov 29 07:19:19 crc kubenswrapper[4731]: I1129 07:19:19.405242 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fwnq7b" Nov 29 07:19:19 crc kubenswrapper[4731]: I1129 07:19:19.491020 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e28aeb94-691b-4374-8a64-c8ea4831a139-util\") pod \"e28aeb94-691b-4374-8a64-c8ea4831a139\" (UID: \"e28aeb94-691b-4374-8a64-c8ea4831a139\") " Nov 29 07:19:19 crc kubenswrapper[4731]: I1129 07:19:19.491136 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e28aeb94-691b-4374-8a64-c8ea4831a139-bundle\") pod \"e28aeb94-691b-4374-8a64-c8ea4831a139\" (UID: \"e28aeb94-691b-4374-8a64-c8ea4831a139\") " Nov 29 07:19:19 crc kubenswrapper[4731]: I1129 07:19:19.491237 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9wsz5\" (UniqueName: \"kubernetes.io/projected/e28aeb94-691b-4374-8a64-c8ea4831a139-kube-api-access-9wsz5\") pod \"e28aeb94-691b-4374-8a64-c8ea4831a139\" (UID: \"e28aeb94-691b-4374-8a64-c8ea4831a139\") " Nov 29 07:19:19 crc kubenswrapper[4731]: I1129 07:19:19.491728 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e28aeb94-691b-4374-8a64-c8ea4831a139-bundle" (OuterVolumeSpecName: "bundle") pod "e28aeb94-691b-4374-8a64-c8ea4831a139" (UID: "e28aeb94-691b-4374-8a64-c8ea4831a139"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:19:19 crc kubenswrapper[4731]: I1129 07:19:19.498758 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e28aeb94-691b-4374-8a64-c8ea4831a139-kube-api-access-9wsz5" (OuterVolumeSpecName: "kube-api-access-9wsz5") pod "e28aeb94-691b-4374-8a64-c8ea4831a139" (UID: "e28aeb94-691b-4374-8a64-c8ea4831a139"). InnerVolumeSpecName "kube-api-access-9wsz5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:19:19 crc kubenswrapper[4731]: I1129 07:19:19.501343 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e28aeb94-691b-4374-8a64-c8ea4831a139-util" (OuterVolumeSpecName: "util") pod "e28aeb94-691b-4374-8a64-c8ea4831a139" (UID: "e28aeb94-691b-4374-8a64-c8ea4831a139"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:19:19 crc kubenswrapper[4731]: I1129 07:19:19.592796 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9wsz5\" (UniqueName: \"kubernetes.io/projected/e28aeb94-691b-4374-8a64-c8ea4831a139-kube-api-access-9wsz5\") on node \"crc\" DevicePath \"\"" Nov 29 07:19:19 crc kubenswrapper[4731]: I1129 07:19:19.592833 4731 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e28aeb94-691b-4374-8a64-c8ea4831a139-util\") on node \"crc\" DevicePath \"\"" Nov 29 07:19:19 crc kubenswrapper[4731]: I1129 07:19:19.592845 4731 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e28aeb94-691b-4374-8a64-c8ea4831a139-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:19:20 crc kubenswrapper[4731]: I1129 07:19:20.137718 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fwnq7b" event={"ID":"e28aeb94-691b-4374-8a64-c8ea4831a139","Type":"ContainerDied","Data":"8ac547e460f7324b7d5f6a4c3460aa8bf951a3c853e4a08323fe75abcd8ecc45"} Nov 29 07:19:20 crc kubenswrapper[4731]: I1129 07:19:20.137778 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ac547e460f7324b7d5f6a4c3460aa8bf951a3c853e4a08323fe75abcd8ecc45" Nov 29 07:19:20 crc kubenswrapper[4731]: I1129 07:19:20.137817 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fwnq7b" Nov 29 07:19:21 crc kubenswrapper[4731]: I1129 07:19:21.146661 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kj7h7" event={"ID":"6bdf950b-9375-4f11-97e5-949a1aa230e1","Type":"ContainerStarted","Data":"a7d70f1103192fc01d2944b71fedeaa1e67e454af3e3b061611f97e18d06fcd9"} Nov 29 07:19:21 crc kubenswrapper[4731]: I1129 07:19:21.177169 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-kj7h7" podStartSLOduration=3.104302799 podStartE2EDuration="6.177142537s" podCreationTimestamp="2025-11-29 07:19:15 +0000 UTC" firstStartedPulling="2025-11-29 07:19:17.110501532 +0000 UTC m=+796.000862655" lastFinishedPulling="2025-11-29 07:19:20.18334129 +0000 UTC m=+799.073702393" observedRunningTime="2025-11-29 07:19:21.172060989 +0000 UTC m=+800.062422112" watchObservedRunningTime="2025-11-29 07:19:21.177142537 +0000 UTC m=+800.067503660" Nov 29 07:19:23 crc kubenswrapper[4731]: I1129 07:19:23.442132 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-m2k54"] Nov 29 07:19:23 crc kubenswrapper[4731]: E1129 07:19:23.442874 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e28aeb94-691b-4374-8a64-c8ea4831a139" containerName="extract" Nov 29 07:19:23 crc kubenswrapper[4731]: I1129 07:19:23.442897 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="e28aeb94-691b-4374-8a64-c8ea4831a139" containerName="extract" Nov 29 07:19:23 crc kubenswrapper[4731]: E1129 07:19:23.442924 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e28aeb94-691b-4374-8a64-c8ea4831a139" containerName="util" Nov 29 07:19:23 crc kubenswrapper[4731]: I1129 07:19:23.442933 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="e28aeb94-691b-4374-8a64-c8ea4831a139" containerName="util" Nov 
29 07:19:23 crc kubenswrapper[4731]: E1129 07:19:23.442956 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e28aeb94-691b-4374-8a64-c8ea4831a139" containerName="pull"
Nov 29 07:19:23 crc kubenswrapper[4731]: I1129 07:19:23.442964 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="e28aeb94-691b-4374-8a64-c8ea4831a139" containerName="pull"
Nov 29 07:19:23 crc kubenswrapper[4731]: I1129 07:19:23.443072 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="e28aeb94-691b-4374-8a64-c8ea4831a139" containerName="extract"
Nov 29 07:19:23 crc kubenswrapper[4731]: I1129 07:19:23.443647 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-m2k54"
Nov 29 07:19:23 crc kubenswrapper[4731]: I1129 07:19:23.447413 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt"
Nov 29 07:19:23 crc kubenswrapper[4731]: I1129 07:19:23.447812 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-sdg54"
Nov 29 07:19:23 crc kubenswrapper[4731]: I1129 07:19:23.448007 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt"
Nov 29 07:19:23 crc kubenswrapper[4731]: I1129 07:19:23.481460 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-m2k54"]
Nov 29 07:19:23 crc kubenswrapper[4731]: I1129 07:19:23.538763 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67kbj\" (UniqueName: \"kubernetes.io/projected/b2960a18-bc73-4625-8853-b433d22cc0ee-kube-api-access-67kbj\") pod \"nmstate-operator-5b5b58f5c8-m2k54\" (UID: \"b2960a18-bc73-4625-8853-b433d22cc0ee\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-m2k54"
Nov 29 07:19:23 crc kubenswrapper[4731]: I1129 07:19:23.640263 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67kbj\" (UniqueName: \"kubernetes.io/projected/b2960a18-bc73-4625-8853-b433d22cc0ee-kube-api-access-67kbj\") pod \"nmstate-operator-5b5b58f5c8-m2k54\" (UID: \"b2960a18-bc73-4625-8853-b433d22cc0ee\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-m2k54"
Nov 29 07:19:23 crc kubenswrapper[4731]: I1129 07:19:23.664534 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67kbj\" (UniqueName: \"kubernetes.io/projected/b2960a18-bc73-4625-8853-b433d22cc0ee-kube-api-access-67kbj\") pod \"nmstate-operator-5b5b58f5c8-m2k54\" (UID: \"b2960a18-bc73-4625-8853-b433d22cc0ee\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-m2k54"
Nov 29 07:19:23 crc kubenswrapper[4731]: I1129 07:19:23.761777 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-m2k54"
Nov 29 07:19:23 crc kubenswrapper[4731]: I1129 07:19:23.977703 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-m2k54"]
Nov 29 07:19:23 crc kubenswrapper[4731]: W1129 07:19:23.984656 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb2960a18_bc73_4625_8853_b433d22cc0ee.slice/crio-528ef6438b5f9d080caf84eb03bd0df23bfb2170c2bcc06670e0ab2feac1654e WatchSource:0}: Error finding container 528ef6438b5f9d080caf84eb03bd0df23bfb2170c2bcc06670e0ab2feac1654e: Status 404 returned error can't find the container with id 528ef6438b5f9d080caf84eb03bd0df23bfb2170c2bcc06670e0ab2feac1654e
Nov 29 07:19:24 crc kubenswrapper[4731]: I1129 07:19:24.167150 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-m2k54" event={"ID":"b2960a18-bc73-4625-8853-b433d22cc0ee","Type":"ContainerStarted","Data":"528ef6438b5f9d080caf84eb03bd0df23bfb2170c2bcc06670e0ab2feac1654e"}
Nov 29 07:19:25 crc kubenswrapper[4731]: I1129 07:19:25.814759 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-kj7h7"
Nov 29 07:19:25 crc kubenswrapper[4731]: I1129 07:19:25.815175 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-kj7h7"
Nov 29 07:19:25 crc kubenswrapper[4731]: I1129 07:19:25.854960 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-kj7h7"
Nov 29 07:19:26 crc kubenswrapper[4731]: I1129 07:19:26.219861 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-kj7h7"
Nov 29 07:19:27 crc kubenswrapper[4731]: I1129 07:19:27.187759 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-m2k54" event={"ID":"b2960a18-bc73-4625-8853-b433d22cc0ee","Type":"ContainerStarted","Data":"06f59cd5a18e284893aad2654b82e8d1ba681e9b615c039cacc3935378bcfe41"}
Nov 29 07:19:27 crc kubenswrapper[4731]: I1129 07:19:27.210509 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-m2k54" podStartSLOduration=1.276746578 podStartE2EDuration="4.210486527s" podCreationTimestamp="2025-11-29 07:19:23 +0000 UTC" firstStartedPulling="2025-11-29 07:19:23.986714072 +0000 UTC m=+802.877075175" lastFinishedPulling="2025-11-29 07:19:26.920454021 +0000 UTC m=+805.810815124" observedRunningTime="2025-11-29 07:19:27.206392637 +0000 UTC m=+806.096753750" watchObservedRunningTime="2025-11-29 07:19:27.210486527 +0000 UTC m=+806.100847630"
Nov 29 07:19:28 crc kubenswrapper[4731]: I1129 07:19:28.460242 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kj7h7"]
Nov 29 07:19:28 crc kubenswrapper[4731]: I1129 07:19:28.460646 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-kj7h7" podUID="6bdf950b-9375-4f11-97e5-949a1aa230e1" containerName="registry-server" containerID="cri-o://a7d70f1103192fc01d2944b71fedeaa1e67e454af3e3b061611f97e18d06fcd9" gracePeriod=2
Nov 29 07:19:30 crc kubenswrapper[4731]: I1129 07:19:30.215556 4731 generic.go:334] "Generic (PLEG): container finished" podID="6bdf950b-9375-4f11-97e5-949a1aa230e1" containerID="a7d70f1103192fc01d2944b71fedeaa1e67e454af3e3b061611f97e18d06fcd9" exitCode=0
Nov 29 07:19:30 crc kubenswrapper[4731]: I1129 07:19:30.215629 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kj7h7" event={"ID":"6bdf950b-9375-4f11-97e5-949a1aa230e1","Type":"ContainerDied","Data":"a7d70f1103192fc01d2944b71fedeaa1e67e454af3e3b061611f97e18d06fcd9"}
Nov 29 07:19:30 crc kubenswrapper[4731]: I1129 07:19:30.986975 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kj7h7"
Nov 29 07:19:31 crc kubenswrapper[4731]: I1129 07:19:31.145908 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6bdf950b-9375-4f11-97e5-949a1aa230e1-catalog-content\") pod \"6bdf950b-9375-4f11-97e5-949a1aa230e1\" (UID: \"6bdf950b-9375-4f11-97e5-949a1aa230e1\") "
Nov 29 07:19:31 crc kubenswrapper[4731]: I1129 07:19:31.146024 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tklhp\" (UniqueName: \"kubernetes.io/projected/6bdf950b-9375-4f11-97e5-949a1aa230e1-kube-api-access-tklhp\") pod \"6bdf950b-9375-4f11-97e5-949a1aa230e1\" (UID: \"6bdf950b-9375-4f11-97e5-949a1aa230e1\") "
Nov 29 07:19:31 crc kubenswrapper[4731]: I1129 07:19:31.146066 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6bdf950b-9375-4f11-97e5-949a1aa230e1-utilities\") pod \"6bdf950b-9375-4f11-97e5-949a1aa230e1\" (UID: \"6bdf950b-9375-4f11-97e5-949a1aa230e1\") "
Nov 29 07:19:31 crc kubenswrapper[4731]: I1129 07:19:31.147206 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6bdf950b-9375-4f11-97e5-949a1aa230e1-utilities" (OuterVolumeSpecName: "utilities") pod "6bdf950b-9375-4f11-97e5-949a1aa230e1" (UID: "6bdf950b-9375-4f11-97e5-949a1aa230e1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 29 07:19:31 crc kubenswrapper[4731]: I1129 07:19:31.152244 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bdf950b-9375-4f11-97e5-949a1aa230e1-kube-api-access-tklhp" (OuterVolumeSpecName: "kube-api-access-tklhp") pod "6bdf950b-9375-4f11-97e5-949a1aa230e1" (UID: "6bdf950b-9375-4f11-97e5-949a1aa230e1"). InnerVolumeSpecName "kube-api-access-tklhp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 29 07:19:31 crc kubenswrapper[4731]: I1129 07:19:31.227541 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kj7h7" event={"ID":"6bdf950b-9375-4f11-97e5-949a1aa230e1","Type":"ContainerDied","Data":"8bd01f007fb25c5bc5b95741f7768a3ede3ebcd6b4e257aec5d89adff227d352"}
Nov 29 07:19:31 crc kubenswrapper[4731]: I1129 07:19:31.227647 4731 scope.go:117] "RemoveContainer" containerID="a7d70f1103192fc01d2944b71fedeaa1e67e454af3e3b061611f97e18d06fcd9"
Nov 29 07:19:31 crc kubenswrapper[4731]: I1129 07:19:31.227713 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kj7h7"
Nov 29 07:19:31 crc kubenswrapper[4731]: I1129 07:19:31.248201 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tklhp\" (UniqueName: \"kubernetes.io/projected/6bdf950b-9375-4f11-97e5-949a1aa230e1-kube-api-access-tklhp\") on node \"crc\" DevicePath \"\""
Nov 29 07:19:31 crc kubenswrapper[4731]: I1129 07:19:31.248253 4731 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6bdf950b-9375-4f11-97e5-949a1aa230e1-utilities\") on node \"crc\" DevicePath \"\""
Nov 29 07:19:31 crc kubenswrapper[4731]: I1129 07:19:31.250412 4731 scope.go:117] "RemoveContainer" containerID="23fbbac89783fe95c525f3a9166f70443f6e187397c6122bb19fa461e9411e8c"
Nov 29 07:19:31 crc kubenswrapper[4731]: I1129 07:19:31.251782 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6bdf950b-9375-4f11-97e5-949a1aa230e1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6bdf950b-9375-4f11-97e5-949a1aa230e1" (UID: "6bdf950b-9375-4f11-97e5-949a1aa230e1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 29 07:19:31 crc kubenswrapper[4731]: I1129 07:19:31.269174 4731 scope.go:117] "RemoveContainer" containerID="4a7718a565c7251a53f53e7cab697f567128e0b1988cda8bea8422f1a01f494f"
Nov 29 07:19:31 crc kubenswrapper[4731]: I1129 07:19:31.349987 4731 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6bdf950b-9375-4f11-97e5-949a1aa230e1-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 29 07:19:31 crc kubenswrapper[4731]: I1129 07:19:31.566548 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kj7h7"]
Nov 29 07:19:31 crc kubenswrapper[4731]: I1129 07:19:31.569429 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-kj7h7"]
Nov 29 07:19:31 crc kubenswrapper[4731]: I1129 07:19:31.815520 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6bdf950b-9375-4f11-97e5-949a1aa230e1" path="/var/lib/kubelet/pods/6bdf950b-9375-4f11-97e5-949a1aa230e1/volumes"
Nov 29 07:19:33 crc kubenswrapper[4731]: I1129 07:19:33.418684 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-7rj8z"]
Nov 29 07:19:33 crc kubenswrapper[4731]: E1129 07:19:33.420449 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bdf950b-9375-4f11-97e5-949a1aa230e1" containerName="extract-utilities"
Nov 29 07:19:33 crc kubenswrapper[4731]: I1129 07:19:33.420593 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bdf950b-9375-4f11-97e5-949a1aa230e1" containerName="extract-utilities"
Nov 29 07:19:33 crc kubenswrapper[4731]: E1129 07:19:33.420727 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bdf950b-9375-4f11-97e5-949a1aa230e1" containerName="registry-server"
Nov 29 07:19:33 crc kubenswrapper[4731]: I1129 07:19:33.420846 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bdf950b-9375-4f11-97e5-949a1aa230e1" containerName="registry-server"
Nov 29 07:19:33 crc kubenswrapper[4731]: E1129 07:19:33.420947 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bdf950b-9375-4f11-97e5-949a1aa230e1" containerName="extract-content"
Nov 29 07:19:33 crc kubenswrapper[4731]: I1129 07:19:33.421023 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bdf950b-9375-4f11-97e5-949a1aa230e1" containerName="extract-content"
Nov 29 07:19:33 crc kubenswrapper[4731]: I1129 07:19:33.421229 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bdf950b-9375-4f11-97e5-949a1aa230e1" containerName="registry-server"
Nov 29 07:19:33 crc kubenswrapper[4731]: I1129 07:19:33.422277 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-7rj8z"
Nov 29 07:19:33 crc kubenswrapper[4731]: I1129 07:19:33.424490 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-nlx7g"]
Nov 29 07:19:33 crc kubenswrapper[4731]: I1129 07:19:33.425111 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-kq2f8"
Nov 29 07:19:33 crc kubenswrapper[4731]: I1129 07:19:33.425498 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-nlx7g"
Nov 29 07:19:33 crc kubenswrapper[4731]: I1129 07:19:33.426556 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook"
Nov 29 07:19:33 crc kubenswrapper[4731]: I1129 07:19:33.430809 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-7rj8z"]
Nov 29 07:19:33 crc kubenswrapper[4731]: I1129 07:19:33.445732 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-nlx7g"]
Nov 29 07:19:33 crc kubenswrapper[4731]: I1129 07:19:33.467470 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-qc6n5"]
Nov 29 07:19:33 crc kubenswrapper[4731]: I1129 07:19:33.468558 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-qc6n5"
Nov 29 07:19:33 crc kubenswrapper[4731]: I1129 07:19:33.484276 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/b3978082-731c-497f-b541-8895cafd521b-ovs-socket\") pod \"nmstate-handler-qc6n5\" (UID: \"b3978082-731c-497f-b541-8895cafd521b\") " pod="openshift-nmstate/nmstate-handler-qc6n5"
Nov 29 07:19:33 crc kubenswrapper[4731]: I1129 07:19:33.484361 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mct57\" (UniqueName: \"kubernetes.io/projected/131ea2bb-55cd-4f14-aa33-7600dc569c3f-kube-api-access-mct57\") pod \"nmstate-metrics-7f946cbc9-7rj8z\" (UID: \"131ea2bb-55cd-4f14-aa33-7600dc569c3f\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-7rj8z"
Nov 29 07:19:33 crc kubenswrapper[4731]: I1129 07:19:33.484397 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/b3978082-731c-497f-b541-8895cafd521b-nmstate-lock\") pod \"nmstate-handler-qc6n5\" (UID: \"b3978082-731c-497f-b541-8895cafd521b\") " pod="openshift-nmstate/nmstate-handler-qc6n5"
Nov 29 07:19:33 crc kubenswrapper[4731]: I1129 07:19:33.484439 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/5f1f2d59-f67c-47aa-b66a-84b647b9f52a-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-nlx7g\" (UID: \"5f1f2d59-f67c-47aa-b66a-84b647b9f52a\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-nlx7g"
Nov 29 07:19:33 crc kubenswrapper[4731]: I1129 07:19:33.484546 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5trb\" (UniqueName: \"kubernetes.io/projected/5f1f2d59-f67c-47aa-b66a-84b647b9f52a-kube-api-access-x5trb\") pod \"nmstate-webhook-5f6d4c5ccb-nlx7g\" (UID: \"5f1f2d59-f67c-47aa-b66a-84b647b9f52a\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-nlx7g"
Nov 29 07:19:33 crc kubenswrapper[4731]: I1129 07:19:33.484670 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5sv72\" (UniqueName: \"kubernetes.io/projected/b3978082-731c-497f-b541-8895cafd521b-kube-api-access-5sv72\") pod \"nmstate-handler-qc6n5\" (UID: \"b3978082-731c-497f-b541-8895cafd521b\") " pod="openshift-nmstate/nmstate-handler-qc6n5"
Nov 29 07:19:33 crc kubenswrapper[4731]: I1129 07:19:33.484787 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/b3978082-731c-497f-b541-8895cafd521b-dbus-socket\") pod \"nmstate-handler-qc6n5\" (UID: \"b3978082-731c-497f-b541-8895cafd521b\") " pod="openshift-nmstate/nmstate-handler-qc6n5"
Nov 29 07:19:33 crc kubenswrapper[4731]: I1129 07:19:33.586078 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5trb\" (UniqueName: \"kubernetes.io/projected/5f1f2d59-f67c-47aa-b66a-84b647b9f52a-kube-api-access-x5trb\") pod \"nmstate-webhook-5f6d4c5ccb-nlx7g\" (UID: \"5f1f2d59-f67c-47aa-b66a-84b647b9f52a\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-nlx7g"
Nov 29 07:19:33 crc kubenswrapper[4731]: I1129 07:19:33.586357 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5sv72\" (UniqueName: \"kubernetes.io/projected/b3978082-731c-497f-b541-8895cafd521b-kube-api-access-5sv72\") pod \"nmstate-handler-qc6n5\" (UID: \"b3978082-731c-497f-b541-8895cafd521b\") " pod="openshift-nmstate/nmstate-handler-qc6n5"
Nov 29 07:19:33 crc kubenswrapper[4731]: I1129 07:19:33.586434 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/b3978082-731c-497f-b541-8895cafd521b-dbus-socket\") pod \"nmstate-handler-qc6n5\" (UID: \"b3978082-731c-497f-b541-8895cafd521b\") " pod="openshift-nmstate/nmstate-handler-qc6n5"
Nov 29 07:19:33 crc kubenswrapper[4731]: I1129 07:19:33.586517 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/b3978082-731c-497f-b541-8895cafd521b-ovs-socket\") pod \"nmstate-handler-qc6n5\" (UID: \"b3978082-731c-497f-b541-8895cafd521b\") " pod="openshift-nmstate/nmstate-handler-qc6n5"
Nov 29 07:19:33 crc kubenswrapper[4731]: I1129 07:19:33.586621 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mct57\" (UniqueName: \"kubernetes.io/projected/131ea2bb-55cd-4f14-aa33-7600dc569c3f-kube-api-access-mct57\") pod \"nmstate-metrics-7f946cbc9-7rj8z\" (UID: \"131ea2bb-55cd-4f14-aa33-7600dc569c3f\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-7rj8z"
Nov 29 07:19:33 crc kubenswrapper[4731]: I1129 07:19:33.586720 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/b3978082-731c-497f-b541-8895cafd521b-nmstate-lock\") pod \"nmstate-handler-qc6n5\" (UID: \"b3978082-731c-497f-b541-8895cafd521b\") " pod="openshift-nmstate/nmstate-handler-qc6n5"
Nov 29 07:19:33 crc kubenswrapper[4731]: I1129 07:19:33.586799 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/5f1f2d59-f67c-47aa-b66a-84b647b9f52a-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-nlx7g\" (UID: \"5f1f2d59-f67c-47aa-b66a-84b647b9f52a\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-nlx7g"
Nov 29 07:19:33 crc kubenswrapper[4731]: E1129 07:19:33.587012 4731 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found
Nov 29 07:19:33 crc kubenswrapper[4731]: E1129 07:19:33.587144 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5f1f2d59-f67c-47aa-b66a-84b647b9f52a-tls-key-pair podName:5f1f2d59-f67c-47aa-b66a-84b647b9f52a nodeName:}" failed. No retries permitted until 2025-11-29 07:19:34.087117665 +0000 UTC m=+812.977478758 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/5f1f2d59-f67c-47aa-b66a-84b647b9f52a-tls-key-pair") pod "nmstate-webhook-5f6d4c5ccb-nlx7g" (UID: "5f1f2d59-f67c-47aa-b66a-84b647b9f52a") : secret "openshift-nmstate-webhook" not found
Nov 29 07:19:33 crc kubenswrapper[4731]: I1129 07:19:33.587993 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/b3978082-731c-497f-b541-8895cafd521b-dbus-socket\") pod \"nmstate-handler-qc6n5\" (UID: \"b3978082-731c-497f-b541-8895cafd521b\") " pod="openshift-nmstate/nmstate-handler-qc6n5"
Nov 29 07:19:33 crc kubenswrapper[4731]: I1129 07:19:33.588114 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/b3978082-731c-497f-b541-8895cafd521b-ovs-socket\") pod \"nmstate-handler-qc6n5\" (UID: \"b3978082-731c-497f-b541-8895cafd521b\") " pod="openshift-nmstate/nmstate-handler-qc6n5"
Nov 29 07:19:33 crc kubenswrapper[4731]: I1129 07:19:33.588365 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/b3978082-731c-497f-b541-8895cafd521b-nmstate-lock\") pod \"nmstate-handler-qc6n5\" (UID: \"b3978082-731c-497f-b541-8895cafd521b\") " pod="openshift-nmstate/nmstate-handler-qc6n5"
Nov 29 07:19:33 crc kubenswrapper[4731]: I1129 07:19:33.598243 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-zr4mp"]
Nov 29 07:19:33 crc kubenswrapper[4731]: I1129 07:19:33.599296 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-zr4mp"
Nov 29 07:19:33 crc kubenswrapper[4731]: I1129 07:19:33.603514 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-5gqb5"
Nov 29 07:19:33 crc kubenswrapper[4731]: I1129 07:19:33.603784 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert"
Nov 29 07:19:33 crc kubenswrapper[4731]: I1129 07:19:33.610468 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-zr4mp"]
Nov 29 07:19:33 crc kubenswrapper[4731]: I1129 07:19:33.610632 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf"
Nov 29 07:19:33 crc kubenswrapper[4731]: I1129 07:19:33.616115 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mct57\" (UniqueName: \"kubernetes.io/projected/131ea2bb-55cd-4f14-aa33-7600dc569c3f-kube-api-access-mct57\") pod \"nmstate-metrics-7f946cbc9-7rj8z\" (UID: \"131ea2bb-55cd-4f14-aa33-7600dc569c3f\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-7rj8z"
Nov 29 07:19:33 crc kubenswrapper[4731]: I1129 07:19:33.637413 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5sv72\" (UniqueName: \"kubernetes.io/projected/b3978082-731c-497f-b541-8895cafd521b-kube-api-access-5sv72\") pod \"nmstate-handler-qc6n5\" (UID: \"b3978082-731c-497f-b541-8895cafd521b\") " pod="openshift-nmstate/nmstate-handler-qc6n5"
Nov 29 07:19:33 crc kubenswrapper[4731]: I1129 07:19:33.668199 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5trb\" (UniqueName: \"kubernetes.io/projected/5f1f2d59-f67c-47aa-b66a-84b647b9f52a-kube-api-access-x5trb\") pod \"nmstate-webhook-5f6d4c5ccb-nlx7g\" (UID: \"5f1f2d59-f67c-47aa-b66a-84b647b9f52a\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-nlx7g"
Nov 29 07:19:33 crc kubenswrapper[4731]: I1129 07:19:33.688107 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/d634c867-9935-4736-84e5-7abcad360e79-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-zr4mp\" (UID: \"d634c867-9935-4736-84e5-7abcad360e79\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-zr4mp"
Nov 29 07:19:33 crc kubenswrapper[4731]: I1129 07:19:33.688210 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/d634c867-9935-4736-84e5-7abcad360e79-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-zr4mp\" (UID: \"d634c867-9935-4736-84e5-7abcad360e79\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-zr4mp"
Nov 29 07:19:33 crc kubenswrapper[4731]: I1129 07:19:33.688241 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4hwm\" (UniqueName: \"kubernetes.io/projected/d634c867-9935-4736-84e5-7abcad360e79-kube-api-access-f4hwm\") pod \"nmstate-console-plugin-7fbb5f6569-zr4mp\" (UID: \"d634c867-9935-4736-84e5-7abcad360e79\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-zr4mp"
Nov 29 07:19:33 crc kubenswrapper[4731]: I1129 07:19:33.750651 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-7rj8z"
Nov 29 07:19:33 crc kubenswrapper[4731]: I1129 07:19:33.788870 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/d634c867-9935-4736-84e5-7abcad360e79-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-zr4mp\" (UID: \"d634c867-9935-4736-84e5-7abcad360e79\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-zr4mp"
Nov 29 07:19:33 crc kubenswrapper[4731]: I1129 07:19:33.788919 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4hwm\" (UniqueName: \"kubernetes.io/projected/d634c867-9935-4736-84e5-7abcad360e79-kube-api-access-f4hwm\") pod \"nmstate-console-plugin-7fbb5f6569-zr4mp\" (UID: \"d634c867-9935-4736-84e5-7abcad360e79\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-zr4mp"
Nov 29 07:19:33 crc kubenswrapper[4731]: I1129 07:19:33.789018 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/d634c867-9935-4736-84e5-7abcad360e79-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-zr4mp\" (UID: \"d634c867-9935-4736-84e5-7abcad360e79\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-zr4mp"
Nov 29 07:19:33 crc kubenswrapper[4731]: I1129 07:19:33.789405 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-qc6n5"
Nov 29 07:19:33 crc kubenswrapper[4731]: I1129 07:19:33.790034 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/d634c867-9935-4736-84e5-7abcad360e79-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-zr4mp\" (UID: \"d634c867-9935-4736-84e5-7abcad360e79\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-zr4mp"
Nov 29 07:19:33 crc kubenswrapper[4731]: I1129 07:19:33.795701 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/d634c867-9935-4736-84e5-7abcad360e79-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-zr4mp\" (UID: \"d634c867-9935-4736-84e5-7abcad360e79\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-zr4mp"
Nov 29 07:19:33 crc kubenswrapper[4731]: I1129 07:19:33.811128 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4hwm\" (UniqueName: \"kubernetes.io/projected/d634c867-9935-4736-84e5-7abcad360e79-kube-api-access-f4hwm\") pod \"nmstate-console-plugin-7fbb5f6569-zr4mp\" (UID: \"d634c867-9935-4736-84e5-7abcad360e79\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-zr4mp"
Nov 29 07:19:33 crc kubenswrapper[4731]: I1129 07:19:33.849892 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-6c66cf454b-bbcxp"]
Nov 29 07:19:33 crc kubenswrapper[4731]: I1129 07:19:33.850807 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6c66cf454b-bbcxp"
Nov 29 07:19:33 crc kubenswrapper[4731]: I1129 07:19:33.853630 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6c66cf454b-bbcxp"]
Nov 29 07:19:33 crc kubenswrapper[4731]: I1129 07:19:33.993313 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-zr4mp"
Nov 29 07:19:33 crc kubenswrapper[4731]: I1129 07:19:33.994468 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bbcb1eb6-3242-41d1-9194-7c09728a97a7-console-oauth-config\") pod \"console-6c66cf454b-bbcxp\" (UID: \"bbcb1eb6-3242-41d1-9194-7c09728a97a7\") " pod="openshift-console/console-6c66cf454b-bbcxp"
Nov 29 07:19:33 crc kubenswrapper[4731]: I1129 07:19:33.994857 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bbcb1eb6-3242-41d1-9194-7c09728a97a7-trusted-ca-bundle\") pod \"console-6c66cf454b-bbcxp\" (UID: \"bbcb1eb6-3242-41d1-9194-7c09728a97a7\") " pod="openshift-console/console-6c66cf454b-bbcxp"
Nov 29 07:19:33 crc kubenswrapper[4731]: I1129 07:19:33.994881 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bbcb1eb6-3242-41d1-9194-7c09728a97a7-console-config\") pod \"console-6c66cf454b-bbcxp\" (UID: \"bbcb1eb6-3242-41d1-9194-7c09728a97a7\") " pod="openshift-console/console-6c66cf454b-bbcxp"
Nov 29 07:19:33 crc kubenswrapper[4731]: I1129 07:19:33.994931 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bbcb1eb6-3242-41d1-9194-7c09728a97a7-service-ca\") pod \"console-6c66cf454b-bbcxp\" (UID: \"bbcb1eb6-3242-41d1-9194-7c09728a97a7\") " pod="openshift-console/console-6c66cf454b-bbcxp"
Nov 29 07:19:33 crc kubenswrapper[4731]: I1129 07:19:33.994960 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bbcb1eb6-3242-41d1-9194-7c09728a97a7-oauth-serving-cert\") pod \"console-6c66cf454b-bbcxp\" (UID: \"bbcb1eb6-3242-41d1-9194-7c09728a97a7\") " pod="openshift-console/console-6c66cf454b-bbcxp"
Nov 29 07:19:33 crc kubenswrapper[4731]: I1129 07:19:33.994983 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlrdc\" (UniqueName: \"kubernetes.io/projected/bbcb1eb6-3242-41d1-9194-7c09728a97a7-kube-api-access-tlrdc\") pod \"console-6c66cf454b-bbcxp\" (UID: \"bbcb1eb6-3242-41d1-9194-7c09728a97a7\") " pod="openshift-console/console-6c66cf454b-bbcxp"
Nov 29 07:19:33 crc kubenswrapper[4731]: I1129 07:19:33.995032 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bbcb1eb6-3242-41d1-9194-7c09728a97a7-console-serving-cert\") pod \"console-6c66cf454b-bbcxp\" (UID: \"bbcb1eb6-3242-41d1-9194-7c09728a97a7\") " pod="openshift-console/console-6c66cf454b-bbcxp"
Nov 29 07:19:34 crc kubenswrapper[4731]: I1129 07:19:34.038376 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-7rj8z"]
Nov 29 07:19:34 crc kubenswrapper[4731]: W1129 07:19:34.042137 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod131ea2bb_55cd_4f14_aa33_7600dc569c3f.slice/crio-7df8c460655200dae442deadc1e3fb622148983e2b8487abf7c9a733f036db72 WatchSource:0}: Error finding container 7df8c460655200dae442deadc1e3fb622148983e2b8487abf7c9a733f036db72: Status 404 returned error can't find the container with id 7df8c460655200dae442deadc1e3fb622148983e2b8487abf7c9a733f036db72
Nov 29 07:19:34 crc kubenswrapper[4731]: I1129 07:19:34.095787 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bbcb1eb6-3242-41d1-9194-7c09728a97a7-service-ca\") pod \"console-6c66cf454b-bbcxp\" (UID: \"bbcb1eb6-3242-41d1-9194-7c09728a97a7\") " pod="openshift-console/console-6c66cf454b-bbcxp"
Nov 29 07:19:34 crc kubenswrapper[4731]: I1129 07:19:34.095846 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bbcb1eb6-3242-41d1-9194-7c09728a97a7-oauth-serving-cert\") pod \"console-6c66cf454b-bbcxp\" (UID: \"bbcb1eb6-3242-41d1-9194-7c09728a97a7\") " pod="openshift-console/console-6c66cf454b-bbcxp"
Nov 29 07:19:34 crc kubenswrapper[4731]: I1129 07:19:34.095875 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlrdc\" (UniqueName: \"kubernetes.io/projected/bbcb1eb6-3242-41d1-9194-7c09728a97a7-kube-api-access-tlrdc\") pod \"console-6c66cf454b-bbcxp\" (UID: \"bbcb1eb6-3242-41d1-9194-7c09728a97a7\") " pod="openshift-console/console-6c66cf454b-bbcxp"
Nov 29 07:19:34 crc kubenswrapper[4731]: I1129 07:19:34.095902 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/5f1f2d59-f67c-47aa-b66a-84b647b9f52a-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-nlx7g\" (UID: \"5f1f2d59-f67c-47aa-b66a-84b647b9f52a\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-nlx7g"
Nov 29 07:19:34 crc kubenswrapper[4731]: I1129 07:19:34.095928 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bbcb1eb6-3242-41d1-9194-7c09728a97a7-console-serving-cert\") pod \"console-6c66cf454b-bbcxp\" (UID: \"bbcb1eb6-3242-41d1-9194-7c09728a97a7\") " pod="openshift-console/console-6c66cf454b-bbcxp"
Nov 29 07:19:34 crc kubenswrapper[4731]: I1129 07:19:34.095969 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bbcb1eb6-3242-41d1-9194-7c09728a97a7-console-oauth-config\") pod \"console-6c66cf454b-bbcxp\" (UID: \"bbcb1eb6-3242-41d1-9194-7c09728a97a7\") " pod="openshift-console/console-6c66cf454b-bbcxp"
Nov 29 07:19:34 crc kubenswrapper[4731]: I1129 07:19:34.095997 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bbcb1eb6-3242-41d1-9194-7c09728a97a7-trusted-ca-bundle\") pod \"console-6c66cf454b-bbcxp\" (UID: \"bbcb1eb6-3242-41d1-9194-7c09728a97a7\") " pod="openshift-console/console-6c66cf454b-bbcxp"
Nov 29 07:19:34 crc kubenswrapper[4731]: I1129 07:19:34.096015 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bbcb1eb6-3242-41d1-9194-7c09728a97a7-console-config\") pod \"console-6c66cf454b-bbcxp\" (UID: \"bbcb1eb6-3242-41d1-9194-7c09728a97a7\") " pod="openshift-console/console-6c66cf454b-bbcxp"
Nov 29 07:19:34 crc kubenswrapper[4731]: I1129 07:19:34.097069 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bbcb1eb6-3242-41d1-9194-7c09728a97a7-service-ca\") pod \"console-6c66cf454b-bbcxp\" (UID: \"bbcb1eb6-3242-41d1-9194-7c09728a97a7\") " pod="openshift-console/console-6c66cf454b-bbcxp"
Nov 29 07:19:34 crc kubenswrapper[4731]: I1129 07:19:34.097187 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bbcb1eb6-3242-41d1-9194-7c09728a97a7-oauth-serving-cert\") pod \"console-6c66cf454b-bbcxp\" (UID: \"bbcb1eb6-3242-41d1-9194-7c09728a97a7\") " pod="openshift-console/console-6c66cf454b-bbcxp"
Nov 29 07:19:34 crc kubenswrapper[4731]: I1129 07:19:34.098060 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bbcb1eb6-3242-41d1-9194-7c09728a97a7-console-config\") pod \"console-6c66cf454b-bbcxp\" (UID: \"bbcb1eb6-3242-41d1-9194-7c09728a97a7\") " pod="openshift-console/console-6c66cf454b-bbcxp"
Nov 29 07:19:34 crc kubenswrapper[4731]: I1129 07:19:34.098097 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bbcb1eb6-3242-41d1-9194-7c09728a97a7-trusted-ca-bundle\") pod \"console-6c66cf454b-bbcxp\" (UID: \"bbcb1eb6-3242-41d1-9194-7c09728a97a7\") " pod="openshift-console/console-6c66cf454b-bbcxp"
Nov 29 07:19:34 crc kubenswrapper[4731]: I1129 07:19:34.104370 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/5f1f2d59-f67c-47aa-b66a-84b647b9f52a-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-nlx7g\" (UID: \"5f1f2d59-f67c-47aa-b66a-84b647b9f52a\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-nlx7g"
Nov 29 07:19:34 crc kubenswrapper[4731]: I1129 07:19:34.105463 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bbcb1eb6-3242-41d1-9194-7c09728a97a7-console-serving-cert\") pod \"console-6c66cf454b-bbcxp\" (UID: \"bbcb1eb6-3242-41d1-9194-7c09728a97a7\") " pod="openshift-console/console-6c66cf454b-bbcxp"
Nov 29 07:19:34 crc kubenswrapper[4731]: I1129 07:19:34.106218 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bbcb1eb6-3242-41d1-9194-7c09728a97a7-console-oauth-config\") pod \"console-6c66cf454b-bbcxp\" (UID: \"bbcb1eb6-3242-41d1-9194-7c09728a97a7\") " pod="openshift-console/console-6c66cf454b-bbcxp"
Nov 29 07:19:34 crc kubenswrapper[4731]: I1129 07:19:34.116231 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tlrdc\" (UniqueName: \"kubernetes.io/projected/bbcb1eb6-3242-41d1-9194-7c09728a97a7-kube-api-access-tlrdc\") pod \"console-6c66cf454b-bbcxp\" (UID: \"bbcb1eb6-3242-41d1-9194-7c09728a97a7\") " pod="openshift-console/console-6c66cf454b-bbcxp"
Nov 29 07:19:34 crc kubenswrapper[4731]: I1129 07:19:34.173555 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6c66cf454b-bbcxp"
Nov 29 07:19:34 crc kubenswrapper[4731]: I1129 07:19:34.254219 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-qc6n5" event={"ID":"b3978082-731c-497f-b541-8895cafd521b","Type":"ContainerStarted","Data":"9a70375dd70d7974163ce762cc777c8f3c9b902ae5c32143380a7edc0613a348"}
Nov 29 07:19:34 crc kubenswrapper[4731]: I1129 07:19:34.256390 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-7rj8z" event={"ID":"131ea2bb-55cd-4f14-aa33-7600dc569c3f","Type":"ContainerStarted","Data":"7df8c460655200dae442deadc1e3fb622148983e2b8487abf7c9a733f036db72"}
Nov 29 07:19:34 crc kubenswrapper[4731]: I1129 07:19:34.370299 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-nlx7g" Nov 29 07:19:34 crc kubenswrapper[4731]: I1129 07:19:34.375433 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6c66cf454b-bbcxp"] Nov 29 07:19:34 crc kubenswrapper[4731]: W1129 07:19:34.382467 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbbcb1eb6_3242_41d1_9194_7c09728a97a7.slice/crio-fc835b6379d6302524b13370ea455552afe88945b8afa4868496d350fdd7eaf3 WatchSource:0}: Error finding container fc835b6379d6302524b13370ea455552afe88945b8afa4868496d350fdd7eaf3: Status 404 returned error can't find the container with id fc835b6379d6302524b13370ea455552afe88945b8afa4868496d350fdd7eaf3 Nov 29 07:19:34 crc kubenswrapper[4731]: I1129 07:19:34.406803 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-zr4mp"] Nov 29 07:19:34 crc kubenswrapper[4731]: W1129 07:19:34.413832 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd634c867_9935_4736_84e5_7abcad360e79.slice/crio-b3dc26a1401c1a3daefe19edf38ccc43ca36d1ec02748e20fd64493137eda88f WatchSource:0}: Error finding container b3dc26a1401c1a3daefe19edf38ccc43ca36d1ec02748e20fd64493137eda88f: Status 404 returned error can't find the container with id b3dc26a1401c1a3daefe19edf38ccc43ca36d1ec02748e20fd64493137eda88f Nov 29 07:19:34 crc kubenswrapper[4731]: I1129 07:19:34.578361 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-nlx7g"] Nov 29 07:19:34 crc kubenswrapper[4731]: W1129 07:19:34.585688 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5f1f2d59_f67c_47aa_b66a_84b647b9f52a.slice/crio-a8625027a9315496646ff822c078b22491b69f24aed49ff963e1f80f6a9807c9 
WatchSource:0}: Error finding container a8625027a9315496646ff822c078b22491b69f24aed49ff963e1f80f6a9807c9: Status 404 returned error can't find the container with id a8625027a9315496646ff822c078b22491b69f24aed49ff963e1f80f6a9807c9 Nov 29 07:19:35 crc kubenswrapper[4731]: I1129 07:19:35.267423 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6c66cf454b-bbcxp" event={"ID":"bbcb1eb6-3242-41d1-9194-7c09728a97a7","Type":"ContainerStarted","Data":"f5ea1a201ce59ac3b86ec224f5c0c7f2a3dfdddeb2a767fa098e906ce781d3cf"} Nov 29 07:19:35 crc kubenswrapper[4731]: I1129 07:19:35.268050 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6c66cf454b-bbcxp" event={"ID":"bbcb1eb6-3242-41d1-9194-7c09728a97a7","Type":"ContainerStarted","Data":"fc835b6379d6302524b13370ea455552afe88945b8afa4868496d350fdd7eaf3"} Nov 29 07:19:35 crc kubenswrapper[4731]: I1129 07:19:35.268981 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-nlx7g" event={"ID":"5f1f2d59-f67c-47aa-b66a-84b647b9f52a","Type":"ContainerStarted","Data":"a8625027a9315496646ff822c078b22491b69f24aed49ff963e1f80f6a9807c9"} Nov 29 07:19:35 crc kubenswrapper[4731]: I1129 07:19:35.269851 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-zr4mp" event={"ID":"d634c867-9935-4736-84e5-7abcad360e79","Type":"ContainerStarted","Data":"b3dc26a1401c1a3daefe19edf38ccc43ca36d1ec02748e20fd64493137eda88f"} Nov 29 07:19:35 crc kubenswrapper[4731]: I1129 07:19:35.292388 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-6c66cf454b-bbcxp" podStartSLOduration=2.292360397 podStartE2EDuration="2.292360397s" podCreationTimestamp="2025-11-29 07:19:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:19:35.291116101 +0000 UTC 
m=+814.181477204" watchObservedRunningTime="2025-11-29 07:19:35.292360397 +0000 UTC m=+814.182721500" Nov 29 07:19:37 crc kubenswrapper[4731]: I1129 07:19:37.303436 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-nlx7g" event={"ID":"5f1f2d59-f67c-47aa-b66a-84b647b9f52a","Type":"ContainerStarted","Data":"540c105658fc769b6ac253a36b4f43cd2d1d09ddd3c50fcb9098a28d2f74eeb0"} Nov 29 07:19:37 crc kubenswrapper[4731]: I1129 07:19:37.304947 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-nlx7g" Nov 29 07:19:37 crc kubenswrapper[4731]: I1129 07:19:37.309228 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-zr4mp" event={"ID":"d634c867-9935-4736-84e5-7abcad360e79","Type":"ContainerStarted","Data":"9e17d5cfd9a45033746213dbc174350ac164f89dd21498d9695cf48521e7c346"} Nov 29 07:19:37 crc kubenswrapper[4731]: I1129 07:19:37.311021 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-qc6n5" event={"ID":"b3978082-731c-497f-b541-8895cafd521b","Type":"ContainerStarted","Data":"3e0571b2a9b4aa3587af0bee45a5d6be57621ff922ec90c1811cc8170884382b"} Nov 29 07:19:37 crc kubenswrapper[4731]: I1129 07:19:37.311072 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-qc6n5" Nov 29 07:19:37 crc kubenswrapper[4731]: I1129 07:19:37.314158 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-7rj8z" event={"ID":"131ea2bb-55cd-4f14-aa33-7600dc569c3f","Type":"ContainerStarted","Data":"47e180333e52a9a1dc8e0d519cc7fab165d6d0101b176594bb394b40c2af15d1"} Nov 29 07:19:37 crc kubenswrapper[4731]: I1129 07:19:37.351765 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-nlx7g" podStartSLOduration=1.955300869 
podStartE2EDuration="4.351735086s" podCreationTimestamp="2025-11-29 07:19:33 +0000 UTC" firstStartedPulling="2025-11-29 07:19:34.599583676 +0000 UTC m=+813.489944779" lastFinishedPulling="2025-11-29 07:19:36.996017893 +0000 UTC m=+815.886378996" observedRunningTime="2025-11-29 07:19:37.329754664 +0000 UTC m=+816.220115797" watchObservedRunningTime="2025-11-29 07:19:37.351735086 +0000 UTC m=+816.242096209" Nov 29 07:19:37 crc kubenswrapper[4731]: I1129 07:19:37.353427 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-qc6n5" podStartSLOduration=1.864602112 podStartE2EDuration="4.353416195s" podCreationTimestamp="2025-11-29 07:19:33 +0000 UTC" firstStartedPulling="2025-11-29 07:19:33.85966042 +0000 UTC m=+812.750021523" lastFinishedPulling="2025-11-29 07:19:36.348474503 +0000 UTC m=+815.238835606" observedRunningTime="2025-11-29 07:19:37.35323458 +0000 UTC m=+816.243595693" watchObservedRunningTime="2025-11-29 07:19:37.353416195 +0000 UTC m=+816.243777298" Nov 29 07:19:37 crc kubenswrapper[4731]: I1129 07:19:37.387296 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-zr4mp" podStartSLOduration=1.9454238510000001 podStartE2EDuration="4.387257983s" podCreationTimestamp="2025-11-29 07:19:33 +0000 UTC" firstStartedPulling="2025-11-29 07:19:34.417021978 +0000 UTC m=+813.307383081" lastFinishedPulling="2025-11-29 07:19:36.85885611 +0000 UTC m=+815.749217213" observedRunningTime="2025-11-29 07:19:37.378083045 +0000 UTC m=+816.268444168" watchObservedRunningTime="2025-11-29 07:19:37.387257983 +0000 UTC m=+816.277619096" Nov 29 07:19:39 crc kubenswrapper[4731]: I1129 07:19:39.331830 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-7rj8z" 
event={"ID":"131ea2bb-55cd-4f14-aa33-7600dc569c3f","Type":"ContainerStarted","Data":"faf9cfe6a74aa7b607c4624279a5736cf8e106ffe45d5429e7f22fe7bbba5e12"} Nov 29 07:19:39 crc kubenswrapper[4731]: I1129 07:19:39.354716 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-7rj8z" podStartSLOduration=1.36017934 podStartE2EDuration="6.354680037s" podCreationTimestamp="2025-11-29 07:19:33 +0000 UTC" firstStartedPulling="2025-11-29 07:19:34.045271108 +0000 UTC m=+812.935632211" lastFinishedPulling="2025-11-29 07:19:39.039771805 +0000 UTC m=+817.930132908" observedRunningTime="2025-11-29 07:19:39.349778864 +0000 UTC m=+818.240139997" watchObservedRunningTime="2025-11-29 07:19:39.354680037 +0000 UTC m=+818.245041140" Nov 29 07:19:43 crc kubenswrapper[4731]: I1129 07:19:43.820182 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-qc6n5" Nov 29 07:19:44 crc kubenswrapper[4731]: I1129 07:19:44.174772 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-6c66cf454b-bbcxp" Nov 29 07:19:44 crc kubenswrapper[4731]: I1129 07:19:44.174825 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-6c66cf454b-bbcxp" Nov 29 07:19:44 crc kubenswrapper[4731]: I1129 07:19:44.179481 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-6c66cf454b-bbcxp" Nov 29 07:19:44 crc kubenswrapper[4731]: I1129 07:19:44.367517 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-6c66cf454b-bbcxp" Nov 29 07:19:44 crc kubenswrapper[4731]: I1129 07:19:44.424453 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-htrhs"] Nov 29 07:19:54 crc kubenswrapper[4731]: I1129 07:19:54.377322 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-nlx7g" Nov 29 07:20:09 crc kubenswrapper[4731]: I1129 07:20:09.116602 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wlvnk"] Nov 29 07:20:09 crc kubenswrapper[4731]: I1129 07:20:09.118936 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wlvnk" Nov 29 07:20:09 crc kubenswrapper[4731]: I1129 07:20:09.121915 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 29 07:20:09 crc kubenswrapper[4731]: I1129 07:20:09.123770 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wlvnk"] Nov 29 07:20:09 crc kubenswrapper[4731]: I1129 07:20:09.199154 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b430eace-40ed-4d3d-ae05-481052d89eb8-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wlvnk\" (UID: \"b430eace-40ed-4d3d-ae05-481052d89eb8\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wlvnk" Nov 29 07:20:09 crc kubenswrapper[4731]: I1129 07:20:09.199226 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b430eace-40ed-4d3d-ae05-481052d89eb8-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wlvnk\" (UID: \"b430eace-40ed-4d3d-ae05-481052d89eb8\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wlvnk" Nov 29 07:20:09 crc kubenswrapper[4731]: I1129 07:20:09.199498 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-g99tr\" (UniqueName: \"kubernetes.io/projected/b430eace-40ed-4d3d-ae05-481052d89eb8-kube-api-access-g99tr\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wlvnk\" (UID: \"b430eace-40ed-4d3d-ae05-481052d89eb8\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wlvnk" Nov 29 07:20:09 crc kubenswrapper[4731]: I1129 07:20:09.300851 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b430eace-40ed-4d3d-ae05-481052d89eb8-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wlvnk\" (UID: \"b430eace-40ed-4d3d-ae05-481052d89eb8\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wlvnk" Nov 29 07:20:09 crc kubenswrapper[4731]: I1129 07:20:09.300923 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b430eace-40ed-4d3d-ae05-481052d89eb8-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wlvnk\" (UID: \"b430eace-40ed-4d3d-ae05-481052d89eb8\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wlvnk" Nov 29 07:20:09 crc kubenswrapper[4731]: I1129 07:20:09.300989 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g99tr\" (UniqueName: \"kubernetes.io/projected/b430eace-40ed-4d3d-ae05-481052d89eb8-kube-api-access-g99tr\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wlvnk\" (UID: \"b430eace-40ed-4d3d-ae05-481052d89eb8\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wlvnk" Nov 29 07:20:09 crc kubenswrapper[4731]: I1129 07:20:09.301656 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b430eace-40ed-4d3d-ae05-481052d89eb8-util\") pod 
\"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wlvnk\" (UID: \"b430eace-40ed-4d3d-ae05-481052d89eb8\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wlvnk" Nov 29 07:20:09 crc kubenswrapper[4731]: I1129 07:20:09.301658 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b430eace-40ed-4d3d-ae05-481052d89eb8-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wlvnk\" (UID: \"b430eace-40ed-4d3d-ae05-481052d89eb8\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wlvnk" Nov 29 07:20:09 crc kubenswrapper[4731]: I1129 07:20:09.323465 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g99tr\" (UniqueName: \"kubernetes.io/projected/b430eace-40ed-4d3d-ae05-481052d89eb8-kube-api-access-g99tr\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wlvnk\" (UID: \"b430eace-40ed-4d3d-ae05-481052d89eb8\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wlvnk" Nov 29 07:20:09 crc kubenswrapper[4731]: I1129 07:20:09.441150 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wlvnk" Nov 29 07:20:09 crc kubenswrapper[4731]: I1129 07:20:09.464299 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-htrhs" podUID="55949699-24bb-4705-8bf0-db1dd651d387" containerName="console" containerID="cri-o://abdd64ce7fc79e33848fd59c44c41d124dfce45bd7876efa6e2dc1db8861dd54" gracePeriod=15 Nov 29 07:20:09 crc kubenswrapper[4731]: I1129 07:20:09.662205 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wlvnk"] Nov 29 07:20:10 crc kubenswrapper[4731]: I1129 07:20:10.537198 4731 generic.go:334] "Generic (PLEG): container finished" podID="b430eace-40ed-4d3d-ae05-481052d89eb8" containerID="20e3b86efd3f1cc44a815d3da017783af09e8638af4ee2bcebbea53849f8098a" exitCode=0 Nov 29 07:20:10 crc kubenswrapper[4731]: I1129 07:20:10.537305 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wlvnk" event={"ID":"b430eace-40ed-4d3d-ae05-481052d89eb8","Type":"ContainerDied","Data":"20e3b86efd3f1cc44a815d3da017783af09e8638af4ee2bcebbea53849f8098a"} Nov 29 07:20:10 crc kubenswrapper[4731]: I1129 07:20:10.537592 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wlvnk" event={"ID":"b430eace-40ed-4d3d-ae05-481052d89eb8","Type":"ContainerStarted","Data":"93f9530d8b27d81db936f76f6bbf9e6a709007549a7bf1aeaec75fc08a168c6c"} Nov 29 07:20:10 crc kubenswrapper[4731]: I1129 07:20:10.540700 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-htrhs_55949699-24bb-4705-8bf0-db1dd651d387/console/0.log" Nov 29 07:20:10 crc kubenswrapper[4731]: I1129 07:20:10.540732 4731 generic.go:334] "Generic (PLEG): container 
finished" podID="55949699-24bb-4705-8bf0-db1dd651d387" containerID="abdd64ce7fc79e33848fd59c44c41d124dfce45bd7876efa6e2dc1db8861dd54" exitCode=2 Nov 29 07:20:10 crc kubenswrapper[4731]: I1129 07:20:10.540754 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-htrhs" event={"ID":"55949699-24bb-4705-8bf0-db1dd651d387","Type":"ContainerDied","Data":"abdd64ce7fc79e33848fd59c44c41d124dfce45bd7876efa6e2dc1db8861dd54"} Nov 29 07:20:10 crc kubenswrapper[4731]: I1129 07:20:10.919008 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-htrhs_55949699-24bb-4705-8bf0-db1dd651d387/console/0.log" Nov 29 07:20:10 crc kubenswrapper[4731]: I1129 07:20:10.919097 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-htrhs" Nov 29 07:20:10 crc kubenswrapper[4731]: I1129 07:20:10.925904 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/55949699-24bb-4705-8bf0-db1dd651d387-console-config\") pod \"55949699-24bb-4705-8bf0-db1dd651d387\" (UID: \"55949699-24bb-4705-8bf0-db1dd651d387\") " Nov 29 07:20:10 crc kubenswrapper[4731]: I1129 07:20:10.925995 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/55949699-24bb-4705-8bf0-db1dd651d387-oauth-serving-cert\") pod \"55949699-24bb-4705-8bf0-db1dd651d387\" (UID: \"55949699-24bb-4705-8bf0-db1dd651d387\") " Nov 29 07:20:10 crc kubenswrapper[4731]: I1129 07:20:10.926036 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/55949699-24bb-4705-8bf0-db1dd651d387-console-serving-cert\") pod \"55949699-24bb-4705-8bf0-db1dd651d387\" (UID: \"55949699-24bb-4705-8bf0-db1dd651d387\") " Nov 29 07:20:10 crc kubenswrapper[4731]: I1129 
07:20:10.926101 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fwfkt\" (UniqueName: \"kubernetes.io/projected/55949699-24bb-4705-8bf0-db1dd651d387-kube-api-access-fwfkt\") pod \"55949699-24bb-4705-8bf0-db1dd651d387\" (UID: \"55949699-24bb-4705-8bf0-db1dd651d387\") " Nov 29 07:20:10 crc kubenswrapper[4731]: I1129 07:20:10.926152 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/55949699-24bb-4705-8bf0-db1dd651d387-console-oauth-config\") pod \"55949699-24bb-4705-8bf0-db1dd651d387\" (UID: \"55949699-24bb-4705-8bf0-db1dd651d387\") " Nov 29 07:20:10 crc kubenswrapper[4731]: I1129 07:20:10.926170 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/55949699-24bb-4705-8bf0-db1dd651d387-trusted-ca-bundle\") pod \"55949699-24bb-4705-8bf0-db1dd651d387\" (UID: \"55949699-24bb-4705-8bf0-db1dd651d387\") " Nov 29 07:20:10 crc kubenswrapper[4731]: I1129 07:20:10.926190 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/55949699-24bb-4705-8bf0-db1dd651d387-service-ca\") pod \"55949699-24bb-4705-8bf0-db1dd651d387\" (UID: \"55949699-24bb-4705-8bf0-db1dd651d387\") " Nov 29 07:20:10 crc kubenswrapper[4731]: I1129 07:20:10.927140 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55949699-24bb-4705-8bf0-db1dd651d387-console-config" (OuterVolumeSpecName: "console-config") pod "55949699-24bb-4705-8bf0-db1dd651d387" (UID: "55949699-24bb-4705-8bf0-db1dd651d387"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:20:10 crc kubenswrapper[4731]: I1129 07:20:10.927524 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55949699-24bb-4705-8bf0-db1dd651d387-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "55949699-24bb-4705-8bf0-db1dd651d387" (UID: "55949699-24bb-4705-8bf0-db1dd651d387"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:20:10 crc kubenswrapper[4731]: I1129 07:20:10.927633 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55949699-24bb-4705-8bf0-db1dd651d387-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "55949699-24bb-4705-8bf0-db1dd651d387" (UID: "55949699-24bb-4705-8bf0-db1dd651d387"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:20:10 crc kubenswrapper[4731]: I1129 07:20:10.928001 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55949699-24bb-4705-8bf0-db1dd651d387-service-ca" (OuterVolumeSpecName: "service-ca") pod "55949699-24bb-4705-8bf0-db1dd651d387" (UID: "55949699-24bb-4705-8bf0-db1dd651d387"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:20:10 crc kubenswrapper[4731]: I1129 07:20:10.935579 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55949699-24bb-4705-8bf0-db1dd651d387-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "55949699-24bb-4705-8bf0-db1dd651d387" (UID: "55949699-24bb-4705-8bf0-db1dd651d387"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:20:10 crc kubenswrapper[4731]: I1129 07:20:10.935629 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55949699-24bb-4705-8bf0-db1dd651d387-kube-api-access-fwfkt" (OuterVolumeSpecName: "kube-api-access-fwfkt") pod "55949699-24bb-4705-8bf0-db1dd651d387" (UID: "55949699-24bb-4705-8bf0-db1dd651d387"). InnerVolumeSpecName "kube-api-access-fwfkt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:20:10 crc kubenswrapper[4731]: I1129 07:20:10.936197 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55949699-24bb-4705-8bf0-db1dd651d387-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "55949699-24bb-4705-8bf0-db1dd651d387" (UID: "55949699-24bb-4705-8bf0-db1dd651d387"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:20:11 crc kubenswrapper[4731]: I1129 07:20:11.027804 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fwfkt\" (UniqueName: \"kubernetes.io/projected/55949699-24bb-4705-8bf0-db1dd651d387-kube-api-access-fwfkt\") on node \"crc\" DevicePath \"\"" Nov 29 07:20:11 crc kubenswrapper[4731]: I1129 07:20:11.027853 4731 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/55949699-24bb-4705-8bf0-db1dd651d387-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:20:11 crc kubenswrapper[4731]: I1129 07:20:11.027869 4731 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/55949699-24bb-4705-8bf0-db1dd651d387-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:20:11 crc kubenswrapper[4731]: I1129 07:20:11.027885 4731 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/55949699-24bb-4705-8bf0-db1dd651d387-service-ca\") on node \"crc\" DevicePath \"\"" Nov 29 07:20:11 crc kubenswrapper[4731]: I1129 07:20:11.027898 4731 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/55949699-24bb-4705-8bf0-db1dd651d387-console-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:20:11 crc kubenswrapper[4731]: I1129 07:20:11.027906 4731 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/55949699-24bb-4705-8bf0-db1dd651d387-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:20:11 crc kubenswrapper[4731]: I1129 07:20:11.027916 4731 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/55949699-24bb-4705-8bf0-db1dd651d387-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:20:11 crc kubenswrapper[4731]: I1129 07:20:11.549357 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-htrhs_55949699-24bb-4705-8bf0-db1dd651d387/console/0.log" Nov 29 07:20:11 crc kubenswrapper[4731]: I1129 07:20:11.549715 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-htrhs" event={"ID":"55949699-24bb-4705-8bf0-db1dd651d387","Type":"ContainerDied","Data":"69af07861251c1200e6c7545db480802cadd7c2bf98b48e16e4d149de52f526b"} Nov 29 07:20:11 crc kubenswrapper[4731]: I1129 07:20:11.549766 4731 scope.go:117] "RemoveContainer" containerID="abdd64ce7fc79e33848fd59c44c41d124dfce45bd7876efa6e2dc1db8861dd54" Nov 29 07:20:11 crc kubenswrapper[4731]: I1129 07:20:11.549781 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-htrhs" Nov 29 07:20:11 crc kubenswrapper[4731]: I1129 07:20:11.580253 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-htrhs"] Nov 29 07:20:11 crc kubenswrapper[4731]: I1129 07:20:11.587177 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-htrhs"] Nov 29 07:20:11 crc kubenswrapper[4731]: I1129 07:20:11.815083 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55949699-24bb-4705-8bf0-db1dd651d387" path="/var/lib/kubelet/pods/55949699-24bb-4705-8bf0-db1dd651d387/volumes" Nov 29 07:20:12 crc kubenswrapper[4731]: I1129 07:20:12.560106 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wlvnk" event={"ID":"b430eace-40ed-4d3d-ae05-481052d89eb8","Type":"ContainerStarted","Data":"5d320c62b0d759436deb522c58b6b0b08bc937213f8d86c2f778c5f481339d7d"} Nov 29 07:20:13 crc kubenswrapper[4731]: I1129 07:20:13.572342 4731 generic.go:334] "Generic (PLEG): container finished" podID="b430eace-40ed-4d3d-ae05-481052d89eb8" containerID="5d320c62b0d759436deb522c58b6b0b08bc937213f8d86c2f778c5f481339d7d" exitCode=0 Nov 29 07:20:13 crc kubenswrapper[4731]: I1129 07:20:13.572451 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wlvnk" event={"ID":"b430eace-40ed-4d3d-ae05-481052d89eb8","Type":"ContainerDied","Data":"5d320c62b0d759436deb522c58b6b0b08bc937213f8d86c2f778c5f481339d7d"} Nov 29 07:20:14 crc kubenswrapper[4731]: I1129 07:20:14.598751 4731 generic.go:334] "Generic (PLEG): container finished" podID="b430eace-40ed-4d3d-ae05-481052d89eb8" containerID="e6017df4093c869e823076a41387a3013dcce48f89ff4d59619f107bb8175aba" exitCode=0 Nov 29 07:20:14 crc kubenswrapper[4731]: I1129 07:20:14.598850 4731 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wlvnk" event={"ID":"b430eace-40ed-4d3d-ae05-481052d89eb8","Type":"ContainerDied","Data":"e6017df4093c869e823076a41387a3013dcce48f89ff4d59619f107bb8175aba"} Nov 29 07:20:15 crc kubenswrapper[4731]: I1129 07:20:15.873256 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wlvnk" Nov 29 07:20:15 crc kubenswrapper[4731]: I1129 07:20:15.914667 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g99tr\" (UniqueName: \"kubernetes.io/projected/b430eace-40ed-4d3d-ae05-481052d89eb8-kube-api-access-g99tr\") pod \"b430eace-40ed-4d3d-ae05-481052d89eb8\" (UID: \"b430eace-40ed-4d3d-ae05-481052d89eb8\") " Nov 29 07:20:15 crc kubenswrapper[4731]: I1129 07:20:15.914745 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b430eace-40ed-4d3d-ae05-481052d89eb8-bundle\") pod \"b430eace-40ed-4d3d-ae05-481052d89eb8\" (UID: \"b430eace-40ed-4d3d-ae05-481052d89eb8\") " Nov 29 07:20:15 crc kubenswrapper[4731]: I1129 07:20:15.914813 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b430eace-40ed-4d3d-ae05-481052d89eb8-util\") pod \"b430eace-40ed-4d3d-ae05-481052d89eb8\" (UID: \"b430eace-40ed-4d3d-ae05-481052d89eb8\") " Nov 29 07:20:15 crc kubenswrapper[4731]: I1129 07:20:15.916371 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b430eace-40ed-4d3d-ae05-481052d89eb8-bundle" (OuterVolumeSpecName: "bundle") pod "b430eace-40ed-4d3d-ae05-481052d89eb8" (UID: "b430eace-40ed-4d3d-ae05-481052d89eb8"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:20:15 crc kubenswrapper[4731]: I1129 07:20:15.922340 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b430eace-40ed-4d3d-ae05-481052d89eb8-kube-api-access-g99tr" (OuterVolumeSpecName: "kube-api-access-g99tr") pod "b430eace-40ed-4d3d-ae05-481052d89eb8" (UID: "b430eace-40ed-4d3d-ae05-481052d89eb8"). InnerVolumeSpecName "kube-api-access-g99tr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:20:15 crc kubenswrapper[4731]: I1129 07:20:15.932227 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b430eace-40ed-4d3d-ae05-481052d89eb8-util" (OuterVolumeSpecName: "util") pod "b430eace-40ed-4d3d-ae05-481052d89eb8" (UID: "b430eace-40ed-4d3d-ae05-481052d89eb8"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:20:16 crc kubenswrapper[4731]: I1129 07:20:16.016902 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g99tr\" (UniqueName: \"kubernetes.io/projected/b430eace-40ed-4d3d-ae05-481052d89eb8-kube-api-access-g99tr\") on node \"crc\" DevicePath \"\"" Nov 29 07:20:16 crc kubenswrapper[4731]: I1129 07:20:16.016931 4731 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b430eace-40ed-4d3d-ae05-481052d89eb8-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:20:16 crc kubenswrapper[4731]: I1129 07:20:16.016942 4731 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b430eace-40ed-4d3d-ae05-481052d89eb8-util\") on node \"crc\" DevicePath \"\"" Nov 29 07:20:16 crc kubenswrapper[4731]: I1129 07:20:16.617307 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wlvnk" 
event={"ID":"b430eace-40ed-4d3d-ae05-481052d89eb8","Type":"ContainerDied","Data":"93f9530d8b27d81db936f76f6bbf9e6a709007549a7bf1aeaec75fc08a168c6c"} Nov 29 07:20:16 crc kubenswrapper[4731]: I1129 07:20:16.617781 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="93f9530d8b27d81db936f76f6bbf9e6a709007549a7bf1aeaec75fc08a168c6c" Nov 29 07:20:16 crc kubenswrapper[4731]: I1129 07:20:16.617380 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wlvnk" Nov 29 07:20:27 crc kubenswrapper[4731]: I1129 07:20:27.277934 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-5cfcff49c6-4hw8h"] Nov 29 07:20:27 crc kubenswrapper[4731]: E1129 07:20:27.278848 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b430eace-40ed-4d3d-ae05-481052d89eb8" containerName="util" Nov 29 07:20:27 crc kubenswrapper[4731]: I1129 07:20:27.278868 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="b430eace-40ed-4d3d-ae05-481052d89eb8" containerName="util" Nov 29 07:20:27 crc kubenswrapper[4731]: E1129 07:20:27.278884 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b430eace-40ed-4d3d-ae05-481052d89eb8" containerName="extract" Nov 29 07:20:27 crc kubenswrapper[4731]: I1129 07:20:27.278893 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="b430eace-40ed-4d3d-ae05-481052d89eb8" containerName="extract" Nov 29 07:20:27 crc kubenswrapper[4731]: E1129 07:20:27.278905 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55949699-24bb-4705-8bf0-db1dd651d387" containerName="console" Nov 29 07:20:27 crc kubenswrapper[4731]: I1129 07:20:27.278912 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="55949699-24bb-4705-8bf0-db1dd651d387" containerName="console" Nov 29 07:20:27 crc kubenswrapper[4731]: E1129 07:20:27.278940 4731 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b430eace-40ed-4d3d-ae05-481052d89eb8" containerName="pull" Nov 29 07:20:27 crc kubenswrapper[4731]: I1129 07:20:27.278947 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="b430eace-40ed-4d3d-ae05-481052d89eb8" containerName="pull" Nov 29 07:20:27 crc kubenswrapper[4731]: I1129 07:20:27.279083 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="b430eace-40ed-4d3d-ae05-481052d89eb8" containerName="extract" Nov 29 07:20:27 crc kubenswrapper[4731]: I1129 07:20:27.279096 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="55949699-24bb-4705-8bf0-db1dd651d387" containerName="console" Nov 29 07:20:27 crc kubenswrapper[4731]: I1129 07:20:27.279620 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5cfcff49c6-4hw8h" Nov 29 07:20:27 crc kubenswrapper[4731]: I1129 07:20:27.281969 4731 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-dqbnj" Nov 29 07:20:27 crc kubenswrapper[4731]: I1129 07:20:27.283676 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Nov 29 07:20:27 crc kubenswrapper[4731]: I1129 07:20:27.283792 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Nov 29 07:20:27 crc kubenswrapper[4731]: I1129 07:20:27.287487 4731 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Nov 29 07:20:27 crc kubenswrapper[4731]: I1129 07:20:27.287641 4731 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Nov 29 07:20:27 crc kubenswrapper[4731]: I1129 07:20:27.302669 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["metallb-system/metallb-operator-controller-manager-5cfcff49c6-4hw8h"] Nov 29 07:20:27 crc kubenswrapper[4731]: I1129 07:20:27.388130 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/52af2912-39d1-447f-b652-bf5afab67ce5-webhook-cert\") pod \"metallb-operator-controller-manager-5cfcff49c6-4hw8h\" (UID: \"52af2912-39d1-447f-b652-bf5afab67ce5\") " pod="metallb-system/metallb-operator-controller-manager-5cfcff49c6-4hw8h" Nov 29 07:20:27 crc kubenswrapper[4731]: I1129 07:20:27.388193 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxnql\" (UniqueName: \"kubernetes.io/projected/52af2912-39d1-447f-b652-bf5afab67ce5-kube-api-access-sxnql\") pod \"metallb-operator-controller-manager-5cfcff49c6-4hw8h\" (UID: \"52af2912-39d1-447f-b652-bf5afab67ce5\") " pod="metallb-system/metallb-operator-controller-manager-5cfcff49c6-4hw8h" Nov 29 07:20:27 crc kubenswrapper[4731]: I1129 07:20:27.388309 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/52af2912-39d1-447f-b652-bf5afab67ce5-apiservice-cert\") pod \"metallb-operator-controller-manager-5cfcff49c6-4hw8h\" (UID: \"52af2912-39d1-447f-b652-bf5afab67ce5\") " pod="metallb-system/metallb-operator-controller-manager-5cfcff49c6-4hw8h" Nov 29 07:20:27 crc kubenswrapper[4731]: I1129 07:20:27.489056 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/52af2912-39d1-447f-b652-bf5afab67ce5-apiservice-cert\") pod \"metallb-operator-controller-manager-5cfcff49c6-4hw8h\" (UID: \"52af2912-39d1-447f-b652-bf5afab67ce5\") " pod="metallb-system/metallb-operator-controller-manager-5cfcff49c6-4hw8h" Nov 29 07:20:27 crc kubenswrapper[4731]: I1129 07:20:27.489149 4731 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/52af2912-39d1-447f-b652-bf5afab67ce5-webhook-cert\") pod \"metallb-operator-controller-manager-5cfcff49c6-4hw8h\" (UID: \"52af2912-39d1-447f-b652-bf5afab67ce5\") " pod="metallb-system/metallb-operator-controller-manager-5cfcff49c6-4hw8h" Nov 29 07:20:27 crc kubenswrapper[4731]: I1129 07:20:27.489189 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxnql\" (UniqueName: \"kubernetes.io/projected/52af2912-39d1-447f-b652-bf5afab67ce5-kube-api-access-sxnql\") pod \"metallb-operator-controller-manager-5cfcff49c6-4hw8h\" (UID: \"52af2912-39d1-447f-b652-bf5afab67ce5\") " pod="metallb-system/metallb-operator-controller-manager-5cfcff49c6-4hw8h" Nov 29 07:20:27 crc kubenswrapper[4731]: I1129 07:20:27.497374 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/52af2912-39d1-447f-b652-bf5afab67ce5-webhook-cert\") pod \"metallb-operator-controller-manager-5cfcff49c6-4hw8h\" (UID: \"52af2912-39d1-447f-b652-bf5afab67ce5\") " pod="metallb-system/metallb-operator-controller-manager-5cfcff49c6-4hw8h" Nov 29 07:20:27 crc kubenswrapper[4731]: I1129 07:20:27.500008 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/52af2912-39d1-447f-b652-bf5afab67ce5-apiservice-cert\") pod \"metallb-operator-controller-manager-5cfcff49c6-4hw8h\" (UID: \"52af2912-39d1-447f-b652-bf5afab67ce5\") " pod="metallb-system/metallb-operator-controller-manager-5cfcff49c6-4hw8h" Nov 29 07:20:27 crc kubenswrapper[4731]: I1129 07:20:27.508936 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxnql\" (UniqueName: \"kubernetes.io/projected/52af2912-39d1-447f-b652-bf5afab67ce5-kube-api-access-sxnql\") pod 
\"metallb-operator-controller-manager-5cfcff49c6-4hw8h\" (UID: \"52af2912-39d1-447f-b652-bf5afab67ce5\") " pod="metallb-system/metallb-operator-controller-manager-5cfcff49c6-4hw8h" Nov 29 07:20:27 crc kubenswrapper[4731]: I1129 07:20:27.599840 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5cfcff49c6-4hw8h" Nov 29 07:20:27 crc kubenswrapper[4731]: I1129 07:20:27.731145 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-76559b7b9c-66rgq"] Nov 29 07:20:27 crc kubenswrapper[4731]: I1129 07:20:27.732271 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-76559b7b9c-66rgq" Nov 29 07:20:27 crc kubenswrapper[4731]: I1129 07:20:27.738790 4731 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-6pt89" Nov 29 07:20:27 crc kubenswrapper[4731]: I1129 07:20:27.738934 4731 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Nov 29 07:20:27 crc kubenswrapper[4731]: I1129 07:20:27.739039 4731 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Nov 29 07:20:27 crc kubenswrapper[4731]: I1129 07:20:27.765866 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-76559b7b9c-66rgq"] Nov 29 07:20:27 crc kubenswrapper[4731]: I1129 07:20:27.793319 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/71540c78-2cb5-4df0-b9be-a0224d7211f1-apiservice-cert\") pod \"metallb-operator-webhook-server-76559b7b9c-66rgq\" (UID: \"71540c78-2cb5-4df0-b9be-a0224d7211f1\") " pod="metallb-system/metallb-operator-webhook-server-76559b7b9c-66rgq" Nov 29 07:20:27 crc 
kubenswrapper[4731]: I1129 07:20:27.793452 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnvvg\" (UniqueName: \"kubernetes.io/projected/71540c78-2cb5-4df0-b9be-a0224d7211f1-kube-api-access-bnvvg\") pod \"metallb-operator-webhook-server-76559b7b9c-66rgq\" (UID: \"71540c78-2cb5-4df0-b9be-a0224d7211f1\") " pod="metallb-system/metallb-operator-webhook-server-76559b7b9c-66rgq" Nov 29 07:20:27 crc kubenswrapper[4731]: I1129 07:20:27.793513 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/71540c78-2cb5-4df0-b9be-a0224d7211f1-webhook-cert\") pod \"metallb-operator-webhook-server-76559b7b9c-66rgq\" (UID: \"71540c78-2cb5-4df0-b9be-a0224d7211f1\") " pod="metallb-system/metallb-operator-webhook-server-76559b7b9c-66rgq" Nov 29 07:20:27 crc kubenswrapper[4731]: I1129 07:20:27.903048 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/71540c78-2cb5-4df0-b9be-a0224d7211f1-apiservice-cert\") pod \"metallb-operator-webhook-server-76559b7b9c-66rgq\" (UID: \"71540c78-2cb5-4df0-b9be-a0224d7211f1\") " pod="metallb-system/metallb-operator-webhook-server-76559b7b9c-66rgq" Nov 29 07:20:27 crc kubenswrapper[4731]: I1129 07:20:27.903177 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnvvg\" (UniqueName: \"kubernetes.io/projected/71540c78-2cb5-4df0-b9be-a0224d7211f1-kube-api-access-bnvvg\") pod \"metallb-operator-webhook-server-76559b7b9c-66rgq\" (UID: \"71540c78-2cb5-4df0-b9be-a0224d7211f1\") " pod="metallb-system/metallb-operator-webhook-server-76559b7b9c-66rgq" Nov 29 07:20:27 crc kubenswrapper[4731]: I1129 07:20:27.903221 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/71540c78-2cb5-4df0-b9be-a0224d7211f1-webhook-cert\") pod \"metallb-operator-webhook-server-76559b7b9c-66rgq\" (UID: \"71540c78-2cb5-4df0-b9be-a0224d7211f1\") " pod="metallb-system/metallb-operator-webhook-server-76559b7b9c-66rgq" Nov 29 07:20:27 crc kubenswrapper[4731]: I1129 07:20:27.951545 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/71540c78-2cb5-4df0-b9be-a0224d7211f1-webhook-cert\") pod \"metallb-operator-webhook-server-76559b7b9c-66rgq\" (UID: \"71540c78-2cb5-4df0-b9be-a0224d7211f1\") " pod="metallb-system/metallb-operator-webhook-server-76559b7b9c-66rgq" Nov 29 07:20:27 crc kubenswrapper[4731]: I1129 07:20:27.952042 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/71540c78-2cb5-4df0-b9be-a0224d7211f1-apiservice-cert\") pod \"metallb-operator-webhook-server-76559b7b9c-66rgq\" (UID: \"71540c78-2cb5-4df0-b9be-a0224d7211f1\") " pod="metallb-system/metallb-operator-webhook-server-76559b7b9c-66rgq" Nov 29 07:20:27 crc kubenswrapper[4731]: I1129 07:20:27.956363 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnvvg\" (UniqueName: \"kubernetes.io/projected/71540c78-2cb5-4df0-b9be-a0224d7211f1-kube-api-access-bnvvg\") pod \"metallb-operator-webhook-server-76559b7b9c-66rgq\" (UID: \"71540c78-2cb5-4df0-b9be-a0224d7211f1\") " pod="metallb-system/metallb-operator-webhook-server-76559b7b9c-66rgq" Nov 29 07:20:28 crc kubenswrapper[4731]: I1129 07:20:28.073198 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-76559b7b9c-66rgq" Nov 29 07:20:28 crc kubenswrapper[4731]: I1129 07:20:28.121335 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5cfcff49c6-4hw8h"] Nov 29 07:20:28 crc kubenswrapper[4731]: I1129 07:20:28.327568 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-76559b7b9c-66rgq"] Nov 29 07:20:28 crc kubenswrapper[4731]: W1129 07:20:28.331960 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod71540c78_2cb5_4df0_b9be_a0224d7211f1.slice/crio-51816f626b9b527101ed3a62576aa39a62b85b0bc270e03637b86c9504d88c95 WatchSource:0}: Error finding container 51816f626b9b527101ed3a62576aa39a62b85b0bc270e03637b86c9504d88c95: Status 404 returned error can't find the container with id 51816f626b9b527101ed3a62576aa39a62b85b0bc270e03637b86c9504d88c95 Nov 29 07:20:28 crc kubenswrapper[4731]: I1129 07:20:28.692745 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-76559b7b9c-66rgq" event={"ID":"71540c78-2cb5-4df0-b9be-a0224d7211f1","Type":"ContainerStarted","Data":"51816f626b9b527101ed3a62576aa39a62b85b0bc270e03637b86c9504d88c95"} Nov 29 07:20:28 crc kubenswrapper[4731]: I1129 07:20:28.694172 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5cfcff49c6-4hw8h" event={"ID":"52af2912-39d1-447f-b652-bf5afab67ce5","Type":"ContainerStarted","Data":"61e2abdee91e16242df3eee58f9ece50e74daedb992b4e5a4efed36654eea02a"} Nov 29 07:20:33 crc kubenswrapper[4731]: I1129 07:20:33.728703 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-76559b7b9c-66rgq" 
event={"ID":"71540c78-2cb5-4df0-b9be-a0224d7211f1","Type":"ContainerStarted","Data":"284a895ddf27847a0970ede27f97a6cd4cc69da803e644b79ffa10e11ebbcb68"} Nov 29 07:20:33 crc kubenswrapper[4731]: I1129 07:20:33.729368 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-76559b7b9c-66rgq" Nov 29 07:20:33 crc kubenswrapper[4731]: I1129 07:20:33.731193 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5cfcff49c6-4hw8h" event={"ID":"52af2912-39d1-447f-b652-bf5afab67ce5","Type":"ContainerStarted","Data":"1851384ee107e6722ccf7fce4ad51bb500fa43f6984fa3b82b441d6bdb24ee39"} Nov 29 07:20:33 crc kubenswrapper[4731]: I1129 07:20:33.731438 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-5cfcff49c6-4hw8h" Nov 29 07:20:33 crc kubenswrapper[4731]: I1129 07:20:33.763137 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-76559b7b9c-66rgq" podStartSLOduration=2.148306826 podStartE2EDuration="6.763118803s" podCreationTimestamp="2025-11-29 07:20:27 +0000 UTC" firstStartedPulling="2025-11-29 07:20:28.339791751 +0000 UTC m=+867.230152854" lastFinishedPulling="2025-11-29 07:20:32.954603728 +0000 UTC m=+871.844964831" observedRunningTime="2025-11-29 07:20:33.759774345 +0000 UTC m=+872.650135458" watchObservedRunningTime="2025-11-29 07:20:33.763118803 +0000 UTC m=+872.653479906" Nov 29 07:20:33 crc kubenswrapper[4731]: I1129 07:20:33.794627 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-5cfcff49c6-4hw8h" podStartSLOduration=1.9818299160000001 podStartE2EDuration="6.794598513s" podCreationTimestamp="2025-11-29 07:20:27 +0000 UTC" firstStartedPulling="2025-11-29 07:20:28.138642368 +0000 UTC m=+867.029003471" lastFinishedPulling="2025-11-29 
07:20:32.951410975 +0000 UTC m=+871.841772068" observedRunningTime="2025-11-29 07:20:33.793396958 +0000 UTC m=+872.683758061" watchObservedRunningTime="2025-11-29 07:20:33.794598513 +0000 UTC m=+872.684959616" Nov 29 07:20:48 crc kubenswrapper[4731]: I1129 07:20:48.080762 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-76559b7b9c-66rgq" Nov 29 07:21:03 crc kubenswrapper[4731]: I1129 07:21:03.002708 4731 patch_prober.go:28] interesting pod/machine-config-daemon-rscr8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:21:03 crc kubenswrapper[4731]: I1129 07:21:03.003128 4731 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:21:07 crc kubenswrapper[4731]: I1129 07:21:07.603233 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-5cfcff49c6-4hw8h" Nov 29 07:21:08 crc kubenswrapper[4731]: I1129 07:21:08.506939 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-bspvf"] Nov 29 07:21:08 crc kubenswrapper[4731]: I1129 07:21:08.509301 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-bspvf" Nov 29 07:21:08 crc kubenswrapper[4731]: I1129 07:21:08.515585 4731 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Nov 29 07:21:08 crc kubenswrapper[4731]: I1129 07:21:08.515625 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Nov 29 07:21:08 crc kubenswrapper[4731]: I1129 07:21:08.515650 4731 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-nq869" Nov 29 07:21:08 crc kubenswrapper[4731]: I1129 07:21:08.522314 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-5hdrf"] Nov 29 07:21:08 crc kubenswrapper[4731]: I1129 07:21:08.523414 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-5hdrf" Nov 29 07:21:08 crc kubenswrapper[4731]: I1129 07:21:08.526251 4731 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Nov 29 07:21:08 crc kubenswrapper[4731]: I1129 07:21:08.537041 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-5hdrf"] Nov 29 07:21:08 crc kubenswrapper[4731]: I1129 07:21:08.626818 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-wnwx5"] Nov 29 07:21:08 crc kubenswrapper[4731]: I1129 07:21:08.629027 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-wnwx5" Nov 29 07:21:08 crc kubenswrapper[4731]: I1129 07:21:08.635352 4731 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Nov 29 07:21:08 crc kubenswrapper[4731]: I1129 07:21:08.636269 4731 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-djslj" Nov 29 07:21:08 crc kubenswrapper[4731]: I1129 07:21:08.636399 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Nov 29 07:21:08 crc kubenswrapper[4731]: I1129 07:21:08.636501 4731 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Nov 29 07:21:08 crc kubenswrapper[4731]: I1129 07:21:08.649357 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-f8648f98b-6cjn7"] Nov 29 07:21:08 crc kubenswrapper[4731]: I1129 07:21:08.661919 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-f8648f98b-6cjn7"] Nov 29 07:21:08 crc kubenswrapper[4731]: I1129 07:21:08.662253 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-f8648f98b-6cjn7" Nov 29 07:21:08 crc kubenswrapper[4731]: I1129 07:21:08.659457 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5rck\" (UniqueName: \"kubernetes.io/projected/c55c52fa-65fc-45fd-b266-10af88f3cead-kube-api-access-j5rck\") pod \"frr-k8s-bspvf\" (UID: \"c55c52fa-65fc-45fd-b266-10af88f3cead\") " pod="metallb-system/frr-k8s-bspvf" Nov 29 07:21:08 crc kubenswrapper[4731]: I1129 07:21:08.664120 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/c55c52fa-65fc-45fd-b266-10af88f3cead-frr-sockets\") pod \"frr-k8s-bspvf\" (UID: \"c55c52fa-65fc-45fd-b266-10af88f3cead\") " pod="metallb-system/frr-k8s-bspvf" Nov 29 07:21:08 crc kubenswrapper[4731]: I1129 07:21:08.664259 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c55c52fa-65fc-45fd-b266-10af88f3cead-metrics-certs\") pod \"frr-k8s-bspvf\" (UID: \"c55c52fa-65fc-45fd-b266-10af88f3cead\") " pod="metallb-system/frr-k8s-bspvf" Nov 29 07:21:08 crc kubenswrapper[4731]: I1129 07:21:08.664395 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4lkl\" (UniqueName: \"kubernetes.io/projected/487416b3-29bc-4302-b40d-faf6c56a568f-kube-api-access-c4lkl\") pod \"frr-k8s-webhook-server-7fcb986d4-5hdrf\" (UID: \"487416b3-29bc-4302-b40d-faf6c56a568f\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-5hdrf" Nov 29 07:21:08 crc kubenswrapper[4731]: I1129 07:21:08.664635 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/487416b3-29bc-4302-b40d-faf6c56a568f-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-5hdrf\" (UID: 
\"487416b3-29bc-4302-b40d-faf6c56a568f\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-5hdrf" Nov 29 07:21:08 crc kubenswrapper[4731]: I1129 07:21:08.664491 4731 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Nov 29 07:21:08 crc kubenswrapper[4731]: I1129 07:21:08.665284 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/c55c52fa-65fc-45fd-b266-10af88f3cead-reloader\") pod \"frr-k8s-bspvf\" (UID: \"c55c52fa-65fc-45fd-b266-10af88f3cead\") " pod="metallb-system/frr-k8s-bspvf" Nov 29 07:21:08 crc kubenswrapper[4731]: I1129 07:21:08.665478 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/c55c52fa-65fc-45fd-b266-10af88f3cead-metrics\") pod \"frr-k8s-bspvf\" (UID: \"c55c52fa-65fc-45fd-b266-10af88f3cead\") " pod="metallb-system/frr-k8s-bspvf" Nov 29 07:21:08 crc kubenswrapper[4731]: I1129 07:21:08.665663 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/c55c52fa-65fc-45fd-b266-10af88f3cead-frr-startup\") pod \"frr-k8s-bspvf\" (UID: \"c55c52fa-65fc-45fd-b266-10af88f3cead\") " pod="metallb-system/frr-k8s-bspvf" Nov 29 07:21:08 crc kubenswrapper[4731]: I1129 07:21:08.665935 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/c55c52fa-65fc-45fd-b266-10af88f3cead-frr-conf\") pod \"frr-k8s-bspvf\" (UID: \"c55c52fa-65fc-45fd-b266-10af88f3cead\") " pod="metallb-system/frr-k8s-bspvf" Nov 29 07:21:08 crc kubenswrapper[4731]: I1129 07:21:08.766988 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dtcn\" (UniqueName: 
\"kubernetes.io/projected/ef219519-fac4-4496-a040-519702060736-kube-api-access-4dtcn\") pod \"controller-f8648f98b-6cjn7\" (UID: \"ef219519-fac4-4496-a040-519702060736\") " pod="metallb-system/controller-f8648f98b-6cjn7" Nov 29 07:21:08 crc kubenswrapper[4731]: I1129 07:21:08.767311 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/c55c52fa-65fc-45fd-b266-10af88f3cead-frr-sockets\") pod \"frr-k8s-bspvf\" (UID: \"c55c52fa-65fc-45fd-b266-10af88f3cead\") " pod="metallb-system/frr-k8s-bspvf" Nov 29 07:21:08 crc kubenswrapper[4731]: I1129 07:21:08.767429 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c55c52fa-65fc-45fd-b266-10af88f3cead-metrics-certs\") pod \"frr-k8s-bspvf\" (UID: \"c55c52fa-65fc-45fd-b266-10af88f3cead\") " pod="metallb-system/frr-k8s-bspvf" Nov 29 07:21:08 crc kubenswrapper[4731]: I1129 07:21:08.767539 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4lkl\" (UniqueName: \"kubernetes.io/projected/487416b3-29bc-4302-b40d-faf6c56a568f-kube-api-access-c4lkl\") pod \"frr-k8s-webhook-server-7fcb986d4-5hdrf\" (UID: \"487416b3-29bc-4302-b40d-faf6c56a568f\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-5hdrf" Nov 29 07:21:08 crc kubenswrapper[4731]: I1129 07:21:08.767684 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ef219519-fac4-4496-a040-519702060736-cert\") pod \"controller-f8648f98b-6cjn7\" (UID: \"ef219519-fac4-4496-a040-519702060736\") " pod="metallb-system/controller-f8648f98b-6cjn7" Nov 29 07:21:08 crc kubenswrapper[4731]: I1129 07:21:08.767818 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/487416b3-29bc-4302-b40d-faf6c56a568f-cert\") pod 
\"frr-k8s-webhook-server-7fcb986d4-5hdrf\" (UID: \"487416b3-29bc-4302-b40d-faf6c56a568f\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-5hdrf" Nov 29 07:21:08 crc kubenswrapper[4731]: I1129 07:21:08.767950 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2p6bn\" (UniqueName: \"kubernetes.io/projected/e0c889d5-42f1-4ac2-9a90-91b8e2414937-kube-api-access-2p6bn\") pod \"speaker-wnwx5\" (UID: \"e0c889d5-42f1-4ac2-9a90-91b8e2414937\") " pod="metallb-system/speaker-wnwx5" Nov 29 07:21:08 crc kubenswrapper[4731]: I1129 07:21:08.768070 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e0c889d5-42f1-4ac2-9a90-91b8e2414937-metrics-certs\") pod \"speaker-wnwx5\" (UID: \"e0c889d5-42f1-4ac2-9a90-91b8e2414937\") " pod="metallb-system/speaker-wnwx5" Nov 29 07:21:08 crc kubenswrapper[4731]: I1129 07:21:08.768184 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/e0c889d5-42f1-4ac2-9a90-91b8e2414937-metallb-excludel2\") pod \"speaker-wnwx5\" (UID: \"e0c889d5-42f1-4ac2-9a90-91b8e2414937\") " pod="metallb-system/speaker-wnwx5" Nov 29 07:21:08 crc kubenswrapper[4731]: I1129 07:21:08.768287 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/c55c52fa-65fc-45fd-b266-10af88f3cead-reloader\") pod \"frr-k8s-bspvf\" (UID: \"c55c52fa-65fc-45fd-b266-10af88f3cead\") " pod="metallb-system/frr-k8s-bspvf" Nov 29 07:21:08 crc kubenswrapper[4731]: I1129 07:21:08.768391 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/c55c52fa-65fc-45fd-b266-10af88f3cead-metrics\") pod \"frr-k8s-bspvf\" (UID: \"c55c52fa-65fc-45fd-b266-10af88f3cead\") " 
pod="metallb-system/frr-k8s-bspvf" Nov 29 07:21:08 crc kubenswrapper[4731]: I1129 07:21:08.768482 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ef219519-fac4-4496-a040-519702060736-metrics-certs\") pod \"controller-f8648f98b-6cjn7\" (UID: \"ef219519-fac4-4496-a040-519702060736\") " pod="metallb-system/controller-f8648f98b-6cjn7" Nov 29 07:21:08 crc kubenswrapper[4731]: I1129 07:21:08.768589 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/c55c52fa-65fc-45fd-b266-10af88f3cead-frr-startup\") pod \"frr-k8s-bspvf\" (UID: \"c55c52fa-65fc-45fd-b266-10af88f3cead\") " pod="metallb-system/frr-k8s-bspvf" Nov 29 07:21:08 crc kubenswrapper[4731]: I1129 07:21:08.768679 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/e0c889d5-42f1-4ac2-9a90-91b8e2414937-memberlist\") pod \"speaker-wnwx5\" (UID: \"e0c889d5-42f1-4ac2-9a90-91b8e2414937\") " pod="metallb-system/speaker-wnwx5" Nov 29 07:21:08 crc kubenswrapper[4731]: I1129 07:21:08.768784 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/c55c52fa-65fc-45fd-b266-10af88f3cead-frr-conf\") pod \"frr-k8s-bspvf\" (UID: \"c55c52fa-65fc-45fd-b266-10af88f3cead\") " pod="metallb-system/frr-k8s-bspvf" Nov 29 07:21:08 crc kubenswrapper[4731]: I1129 07:21:08.768888 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5rck\" (UniqueName: \"kubernetes.io/projected/c55c52fa-65fc-45fd-b266-10af88f3cead-kube-api-access-j5rck\") pod \"frr-k8s-bspvf\" (UID: \"c55c52fa-65fc-45fd-b266-10af88f3cead\") " pod="metallb-system/frr-k8s-bspvf" Nov 29 07:21:08 crc kubenswrapper[4731]: I1129 07:21:08.770245 4731 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/c55c52fa-65fc-45fd-b266-10af88f3cead-frr-sockets\") pod \"frr-k8s-bspvf\" (UID: \"c55c52fa-65fc-45fd-b266-10af88f3cead\") " pod="metallb-system/frr-k8s-bspvf" Nov 29 07:21:08 crc kubenswrapper[4731]: I1129 07:21:08.771557 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/c55c52fa-65fc-45fd-b266-10af88f3cead-reloader\") pod \"frr-k8s-bspvf\" (UID: \"c55c52fa-65fc-45fd-b266-10af88f3cead\") " pod="metallb-system/frr-k8s-bspvf" Nov 29 07:21:08 crc kubenswrapper[4731]: E1129 07:21:08.771592 4731 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Nov 29 07:21:08 crc kubenswrapper[4731]: E1129 07:21:08.771814 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/487416b3-29bc-4302-b40d-faf6c56a568f-cert podName:487416b3-29bc-4302-b40d-faf6c56a568f nodeName:}" failed. No retries permitted until 2025-11-29 07:21:09.271784377 +0000 UTC m=+908.162145480 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/487416b3-29bc-4302-b40d-faf6c56a568f-cert") pod "frr-k8s-webhook-server-7fcb986d4-5hdrf" (UID: "487416b3-29bc-4302-b40d-faf6c56a568f") : secret "frr-k8s-webhook-server-cert" not found Nov 29 07:21:08 crc kubenswrapper[4731]: I1129 07:21:08.771856 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/c55c52fa-65fc-45fd-b266-10af88f3cead-frr-conf\") pod \"frr-k8s-bspvf\" (UID: \"c55c52fa-65fc-45fd-b266-10af88f3cead\") " pod="metallb-system/frr-k8s-bspvf" Nov 29 07:21:08 crc kubenswrapper[4731]: I1129 07:21:08.771890 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/c55c52fa-65fc-45fd-b266-10af88f3cead-metrics\") pod \"frr-k8s-bspvf\" (UID: \"c55c52fa-65fc-45fd-b266-10af88f3cead\") " pod="metallb-system/frr-k8s-bspvf" Nov 29 07:21:08 crc kubenswrapper[4731]: I1129 07:21:08.772703 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/c55c52fa-65fc-45fd-b266-10af88f3cead-frr-startup\") pod \"frr-k8s-bspvf\" (UID: \"c55c52fa-65fc-45fd-b266-10af88f3cead\") " pod="metallb-system/frr-k8s-bspvf" Nov 29 07:21:08 crc kubenswrapper[4731]: I1129 07:21:08.776289 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c55c52fa-65fc-45fd-b266-10af88f3cead-metrics-certs\") pod \"frr-k8s-bspvf\" (UID: \"c55c52fa-65fc-45fd-b266-10af88f3cead\") " pod="metallb-system/frr-k8s-bspvf" Nov 29 07:21:08 crc kubenswrapper[4731]: I1129 07:21:08.797710 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4lkl\" (UniqueName: \"kubernetes.io/projected/487416b3-29bc-4302-b40d-faf6c56a568f-kube-api-access-c4lkl\") pod \"frr-k8s-webhook-server-7fcb986d4-5hdrf\" (UID: 
\"487416b3-29bc-4302-b40d-faf6c56a568f\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-5hdrf" Nov 29 07:21:08 crc kubenswrapper[4731]: I1129 07:21:08.798176 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5rck\" (UniqueName: \"kubernetes.io/projected/c55c52fa-65fc-45fd-b266-10af88f3cead-kube-api-access-j5rck\") pod \"frr-k8s-bspvf\" (UID: \"c55c52fa-65fc-45fd-b266-10af88f3cead\") " pod="metallb-system/frr-k8s-bspvf" Nov 29 07:21:08 crc kubenswrapper[4731]: I1129 07:21:08.825742 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-bspvf" Nov 29 07:21:08 crc kubenswrapper[4731]: I1129 07:21:08.869848 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dtcn\" (UniqueName: \"kubernetes.io/projected/ef219519-fac4-4496-a040-519702060736-kube-api-access-4dtcn\") pod \"controller-f8648f98b-6cjn7\" (UID: \"ef219519-fac4-4496-a040-519702060736\") " pod="metallb-system/controller-f8648f98b-6cjn7" Nov 29 07:21:08 crc kubenswrapper[4731]: I1129 07:21:08.870175 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ef219519-fac4-4496-a040-519702060736-cert\") pod \"controller-f8648f98b-6cjn7\" (UID: \"ef219519-fac4-4496-a040-519702060736\") " pod="metallb-system/controller-f8648f98b-6cjn7" Nov 29 07:21:08 crc kubenswrapper[4731]: I1129 07:21:08.870315 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2p6bn\" (UniqueName: \"kubernetes.io/projected/e0c889d5-42f1-4ac2-9a90-91b8e2414937-kube-api-access-2p6bn\") pod \"speaker-wnwx5\" (UID: \"e0c889d5-42f1-4ac2-9a90-91b8e2414937\") " pod="metallb-system/speaker-wnwx5" Nov 29 07:21:08 crc kubenswrapper[4731]: I1129 07:21:08.870408 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/e0c889d5-42f1-4ac2-9a90-91b8e2414937-metrics-certs\") pod \"speaker-wnwx5\" (UID: \"e0c889d5-42f1-4ac2-9a90-91b8e2414937\") " pod="metallb-system/speaker-wnwx5" Nov 29 07:21:08 crc kubenswrapper[4731]: I1129 07:21:08.870483 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/e0c889d5-42f1-4ac2-9a90-91b8e2414937-metallb-excludel2\") pod \"speaker-wnwx5\" (UID: \"e0c889d5-42f1-4ac2-9a90-91b8e2414937\") " pod="metallb-system/speaker-wnwx5" Nov 29 07:21:08 crc kubenswrapper[4731]: I1129 07:21:08.870600 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ef219519-fac4-4496-a040-519702060736-metrics-certs\") pod \"controller-f8648f98b-6cjn7\" (UID: \"ef219519-fac4-4496-a040-519702060736\") " pod="metallb-system/controller-f8648f98b-6cjn7" Nov 29 07:21:08 crc kubenswrapper[4731]: I1129 07:21:08.870690 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/e0c889d5-42f1-4ac2-9a90-91b8e2414937-memberlist\") pod \"speaker-wnwx5\" (UID: \"e0c889d5-42f1-4ac2-9a90-91b8e2414937\") " pod="metallb-system/speaker-wnwx5" Nov 29 07:21:08 crc kubenswrapper[4731]: E1129 07:21:08.870823 4731 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Nov 29 07:21:08 crc kubenswrapper[4731]: E1129 07:21:08.870940 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e0c889d5-42f1-4ac2-9a90-91b8e2414937-memberlist podName:e0c889d5-42f1-4ac2-9a90-91b8e2414937 nodeName:}" failed. No retries permitted until 2025-11-29 07:21:09.370919987 +0000 UTC m=+908.261281080 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/e0c889d5-42f1-4ac2-9a90-91b8e2414937-memberlist") pod "speaker-wnwx5" (UID: "e0c889d5-42f1-4ac2-9a90-91b8e2414937") : secret "metallb-memberlist" not found Nov 29 07:21:08 crc kubenswrapper[4731]: I1129 07:21:08.871879 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/e0c889d5-42f1-4ac2-9a90-91b8e2414937-metallb-excludel2\") pod \"speaker-wnwx5\" (UID: \"e0c889d5-42f1-4ac2-9a90-91b8e2414937\") " pod="metallb-system/speaker-wnwx5" Nov 29 07:21:08 crc kubenswrapper[4731]: I1129 07:21:08.872981 4731 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Nov 29 07:21:08 crc kubenswrapper[4731]: I1129 07:21:08.876034 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ef219519-fac4-4496-a040-519702060736-metrics-certs\") pod \"controller-f8648f98b-6cjn7\" (UID: \"ef219519-fac4-4496-a040-519702060736\") " pod="metallb-system/controller-f8648f98b-6cjn7" Nov 29 07:21:08 crc kubenswrapper[4731]: I1129 07:21:08.886171 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e0c889d5-42f1-4ac2-9a90-91b8e2414937-metrics-certs\") pod \"speaker-wnwx5\" (UID: \"e0c889d5-42f1-4ac2-9a90-91b8e2414937\") " pod="metallb-system/speaker-wnwx5" Nov 29 07:21:08 crc kubenswrapper[4731]: I1129 07:21:08.888345 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dtcn\" (UniqueName: \"kubernetes.io/projected/ef219519-fac4-4496-a040-519702060736-kube-api-access-4dtcn\") pod \"controller-f8648f98b-6cjn7\" (UID: \"ef219519-fac4-4496-a040-519702060736\") " pod="metallb-system/controller-f8648f98b-6cjn7" Nov 29 07:21:08 crc kubenswrapper[4731]: I1129 07:21:08.892102 4731 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ef219519-fac4-4496-a040-519702060736-cert\") pod \"controller-f8648f98b-6cjn7\" (UID: \"ef219519-fac4-4496-a040-519702060736\") " pod="metallb-system/controller-f8648f98b-6cjn7" Nov 29 07:21:08 crc kubenswrapper[4731]: I1129 07:21:08.893488 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2p6bn\" (UniqueName: \"kubernetes.io/projected/e0c889d5-42f1-4ac2-9a90-91b8e2414937-kube-api-access-2p6bn\") pod \"speaker-wnwx5\" (UID: \"e0c889d5-42f1-4ac2-9a90-91b8e2414937\") " pod="metallb-system/speaker-wnwx5" Nov 29 07:21:08 crc kubenswrapper[4731]: I1129 07:21:08.981187 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-f8648f98b-6cjn7" Nov 29 07:21:09 crc kubenswrapper[4731]: I1129 07:21:09.184410 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-f8648f98b-6cjn7"] Nov 29 07:21:09 crc kubenswrapper[4731]: W1129 07:21:09.190239 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef219519_fac4_4496_a040_519702060736.slice/crio-0ce60ccebe636819ddd40f385b84bc8201ac78a2d9d952a5653f82bc3a71b41d WatchSource:0}: Error finding container 0ce60ccebe636819ddd40f385b84bc8201ac78a2d9d952a5653f82bc3a71b41d: Status 404 returned error can't find the container with id 0ce60ccebe636819ddd40f385b84bc8201ac78a2d9d952a5653f82bc3a71b41d Nov 29 07:21:09 crc kubenswrapper[4731]: I1129 07:21:09.278833 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/487416b3-29bc-4302-b40d-faf6c56a568f-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-5hdrf\" (UID: \"487416b3-29bc-4302-b40d-faf6c56a568f\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-5hdrf" Nov 29 07:21:09 crc kubenswrapper[4731]: I1129 07:21:09.284491 4731 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/487416b3-29bc-4302-b40d-faf6c56a568f-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-5hdrf\" (UID: \"487416b3-29bc-4302-b40d-faf6c56a568f\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-5hdrf" Nov 29 07:21:09 crc kubenswrapper[4731]: I1129 07:21:09.380967 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/e0c889d5-42f1-4ac2-9a90-91b8e2414937-memberlist\") pod \"speaker-wnwx5\" (UID: \"e0c889d5-42f1-4ac2-9a90-91b8e2414937\") " pod="metallb-system/speaker-wnwx5" Nov 29 07:21:09 crc kubenswrapper[4731]: I1129 07:21:09.385375 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/e0c889d5-42f1-4ac2-9a90-91b8e2414937-memberlist\") pod \"speaker-wnwx5\" (UID: \"e0c889d5-42f1-4ac2-9a90-91b8e2414937\") " pod="metallb-system/speaker-wnwx5" Nov 29 07:21:09 crc kubenswrapper[4731]: I1129 07:21:09.436765 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-5hdrf" Nov 29 07:21:09 crc kubenswrapper[4731]: I1129 07:21:09.547726 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-wnwx5" Nov 29 07:21:09 crc kubenswrapper[4731]: W1129 07:21:09.576459 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode0c889d5_42f1_4ac2_9a90_91b8e2414937.slice/crio-00da9312953997f2d0ccbca42ea7c34497639a4e30d454e1f1635851b3a11861 WatchSource:0}: Error finding container 00da9312953997f2d0ccbca42ea7c34497639a4e30d454e1f1635851b3a11861: Status 404 returned error can't find the container with id 00da9312953997f2d0ccbca42ea7c34497639a4e30d454e1f1635851b3a11861 Nov 29 07:21:09 crc kubenswrapper[4731]: I1129 07:21:09.722306 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-5hdrf"] Nov 29 07:21:09 crc kubenswrapper[4731]: I1129 07:21:09.986160 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-6cjn7" event={"ID":"ef219519-fac4-4496-a040-519702060736","Type":"ContainerStarted","Data":"c1769ebfa20dbd52e33c50325cd6f935581a97a541cc3e6589e49fb8c3839789"} Nov 29 07:21:09 crc kubenswrapper[4731]: I1129 07:21:09.986777 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-f8648f98b-6cjn7" Nov 29 07:21:09 crc kubenswrapper[4731]: I1129 07:21:09.986799 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-6cjn7" event={"ID":"ef219519-fac4-4496-a040-519702060736","Type":"ContainerStarted","Data":"8b365ef2e8aa4316579f6937848bdd96dd1383a98908ba89b66fd8b654098a97"} Nov 29 07:21:09 crc kubenswrapper[4731]: I1129 07:21:09.986814 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-6cjn7" event={"ID":"ef219519-fac4-4496-a040-519702060736","Type":"ContainerStarted","Data":"0ce60ccebe636819ddd40f385b84bc8201ac78a2d9d952a5653f82bc3a71b41d"} Nov 29 07:21:09 crc kubenswrapper[4731]: I1129 07:21:09.990044 4731 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-5hdrf" event={"ID":"487416b3-29bc-4302-b40d-faf6c56a568f","Type":"ContainerStarted","Data":"470af00cee0f3c611007444e3bdd3b24b6acd0326f0ccb275e56a36338657273"} Nov 29 07:21:09 crc kubenswrapper[4731]: I1129 07:21:09.999372 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bspvf" event={"ID":"c55c52fa-65fc-45fd-b266-10af88f3cead","Type":"ContainerStarted","Data":"4a88524872fc48bd339ba807222dacb3151d058f7520f28c00c35416c55bcefe"} Nov 29 07:21:10 crc kubenswrapper[4731]: I1129 07:21:10.001293 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-wnwx5" event={"ID":"e0c889d5-42f1-4ac2-9a90-91b8e2414937","Type":"ContainerStarted","Data":"84478827fd17099b4825cce4b5412126e51ded4230d51dc5954fca1ac61a4cf1"} Nov 29 07:21:10 crc kubenswrapper[4731]: I1129 07:21:10.001361 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-wnwx5" event={"ID":"e0c889d5-42f1-4ac2-9a90-91b8e2414937","Type":"ContainerStarted","Data":"00da9312953997f2d0ccbca42ea7c34497639a4e30d454e1f1635851b3a11861"} Nov 29 07:21:11 crc kubenswrapper[4731]: I1129 07:21:11.023216 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-wnwx5" event={"ID":"e0c889d5-42f1-4ac2-9a90-91b8e2414937","Type":"ContainerStarted","Data":"09e06da04978fb50102ecb7466894631953b8982cfba52f69e29cb72d0998023"} Nov 29 07:21:11 crc kubenswrapper[4731]: I1129 07:21:11.054117 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-f8648f98b-6cjn7" podStartSLOduration=3.054094752 podStartE2EDuration="3.054094752s" podCreationTimestamp="2025-11-29 07:21:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:21:10.018919389 +0000 UTC m=+908.909280492" watchObservedRunningTime="2025-11-29 
07:21:11.054094752 +0000 UTC m=+909.944455855" Nov 29 07:21:11 crc kubenswrapper[4731]: I1129 07:21:11.055945 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-wnwx5" podStartSLOduration=3.055935315 podStartE2EDuration="3.055935315s" podCreationTimestamp="2025-11-29 07:21:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:21:11.055905665 +0000 UTC m=+909.946266798" watchObservedRunningTime="2025-11-29 07:21:11.055935315 +0000 UTC m=+909.946296418" Nov 29 07:21:12 crc kubenswrapper[4731]: I1129 07:21:12.042678 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-wnwx5" Nov 29 07:21:17 crc kubenswrapper[4731]: I1129 07:21:17.083718 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-5hdrf" event={"ID":"487416b3-29bc-4302-b40d-faf6c56a568f","Type":"ContainerStarted","Data":"7a549a7d93262af5c96dfc4a1cc14762ac8106c379d4ef95de234ff0ca7644a5"} Nov 29 07:21:17 crc kubenswrapper[4731]: I1129 07:21:17.084504 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-5hdrf" Nov 29 07:21:17 crc kubenswrapper[4731]: I1129 07:21:17.086868 4731 generic.go:334] "Generic (PLEG): container finished" podID="c55c52fa-65fc-45fd-b266-10af88f3cead" containerID="aa9955c7ad025cd6e449336ef78c8300aef05b7749684e75c473e1f0fc29e39b" exitCode=0 Nov 29 07:21:17 crc kubenswrapper[4731]: I1129 07:21:17.086941 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bspvf" event={"ID":"c55c52fa-65fc-45fd-b266-10af88f3cead","Type":"ContainerDied","Data":"aa9955c7ad025cd6e449336ef78c8300aef05b7749684e75c473e1f0fc29e39b"} Nov 29 07:21:17 crc kubenswrapper[4731]: I1129 07:21:17.114389 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-5hdrf" podStartSLOduration=2.384527771 podStartE2EDuration="9.11435887s" podCreationTimestamp="2025-11-29 07:21:08 +0000 UTC" firstStartedPulling="2025-11-29 07:21:09.770662389 +0000 UTC m=+908.661023492" lastFinishedPulling="2025-11-29 07:21:16.500493488 +0000 UTC m=+915.390854591" observedRunningTime="2025-11-29 07:21:17.109081476 +0000 UTC m=+915.999442589" watchObservedRunningTime="2025-11-29 07:21:17.11435887 +0000 UTC m=+916.004720013" Nov 29 07:21:18 crc kubenswrapper[4731]: I1129 07:21:18.096189 4731 generic.go:334] "Generic (PLEG): container finished" podID="c55c52fa-65fc-45fd-b266-10af88f3cead" containerID="1f22ab81ba2a2bc38463f7ba63afc2198e913e4d2cd897165fd70462800550db" exitCode=0 Nov 29 07:21:18 crc kubenswrapper[4731]: I1129 07:21:18.096276 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bspvf" event={"ID":"c55c52fa-65fc-45fd-b266-10af88f3cead","Type":"ContainerDied","Data":"1f22ab81ba2a2bc38463f7ba63afc2198e913e4d2cd897165fd70462800550db"} Nov 29 07:21:19 crc kubenswrapper[4731]: I1129 07:21:19.107322 4731 generic.go:334] "Generic (PLEG): container finished" podID="c55c52fa-65fc-45fd-b266-10af88f3cead" containerID="5314ba2b2fa40746a35456e6fc97051c7cf02201fa4d400d0e723fa755570ff7" exitCode=0 Nov 29 07:21:19 crc kubenswrapper[4731]: I1129 07:21:19.107388 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bspvf" event={"ID":"c55c52fa-65fc-45fd-b266-10af88f3cead","Type":"ContainerDied","Data":"5314ba2b2fa40746a35456e6fc97051c7cf02201fa4d400d0e723fa755570ff7"} Nov 29 07:21:19 crc kubenswrapper[4731]: I1129 07:21:19.552090 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-wnwx5" Nov 29 07:21:20 crc kubenswrapper[4731]: I1129 07:21:20.147982 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bspvf" 
event={"ID":"c55c52fa-65fc-45fd-b266-10af88f3cead","Type":"ContainerStarted","Data":"c5906b364dfdd2538d5bf8bd0b7e4781b1261ed87af63fa3d3ca20c8fbd632d4"} Nov 29 07:21:20 crc kubenswrapper[4731]: I1129 07:21:20.148059 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bspvf" event={"ID":"c55c52fa-65fc-45fd-b266-10af88f3cead","Type":"ContainerStarted","Data":"623c7cf80bcd339f00737185676f69bef79ad25bfb18c8b7231af14371202116"} Nov 29 07:21:20 crc kubenswrapper[4731]: I1129 07:21:20.148076 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bspvf" event={"ID":"c55c52fa-65fc-45fd-b266-10af88f3cead","Type":"ContainerStarted","Data":"5b4fef4bc64c92b307c03e68fe9fd404b8b1a0b7b1ec0f3ea735abfd28e99bee"} Nov 29 07:21:20 crc kubenswrapper[4731]: I1129 07:21:20.148090 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bspvf" event={"ID":"c55c52fa-65fc-45fd-b266-10af88f3cead","Type":"ContainerStarted","Data":"58f5edf51332feba6a71d82ddd4bb70372181d5b40ba4546646120597b87cd94"} Nov 29 07:21:20 crc kubenswrapper[4731]: I1129 07:21:20.148104 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bspvf" event={"ID":"c55c52fa-65fc-45fd-b266-10af88f3cead","Type":"ContainerStarted","Data":"84c1fae25dd98ca48ae1227e66e95847bf335576ca4b6ffdbe7992756d8d0fcc"} Nov 29 07:21:21 crc kubenswrapper[4731]: I1129 07:21:21.162119 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bspvf" event={"ID":"c55c52fa-65fc-45fd-b266-10af88f3cead","Type":"ContainerStarted","Data":"e5f6cd909c6f3879734a2ff043539982f68523d4b495d26a042e34334ed77464"} Nov 29 07:21:21 crc kubenswrapper[4731]: I1129 07:21:21.162511 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-bspvf" Nov 29 07:21:21 crc kubenswrapper[4731]: I1129 07:21:21.185953 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="metallb-system/frr-k8s-bspvf" podStartSLOduration=5.666111598 podStartE2EDuration="13.18593s" podCreationTimestamp="2025-11-29 07:21:08 +0000 UTC" firstStartedPulling="2025-11-29 07:21:08.997296542 +0000 UTC m=+907.887657655" lastFinishedPulling="2025-11-29 07:21:16.517114954 +0000 UTC m=+915.407476057" observedRunningTime="2025-11-29 07:21:21.184427386 +0000 UTC m=+920.074788489" watchObservedRunningTime="2025-11-29 07:21:21.18593 +0000 UTC m=+920.076291103" Nov 29 07:21:23 crc kubenswrapper[4731]: I1129 07:21:23.642185 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-9gtns"] Nov 29 07:21:23 crc kubenswrapper[4731]: I1129 07:21:23.643278 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-9gtns" Nov 29 07:21:23 crc kubenswrapper[4731]: I1129 07:21:23.645640 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Nov 29 07:21:23 crc kubenswrapper[4731]: I1129 07:21:23.646054 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Nov 29 07:21:23 crc kubenswrapper[4731]: I1129 07:21:23.646999 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-kcqsz" Nov 29 07:21:23 crc kubenswrapper[4731]: I1129 07:21:23.662178 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-9gtns"] Nov 29 07:21:23 crc kubenswrapper[4731]: I1129 07:21:23.716374 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5km6d\" (UniqueName: \"kubernetes.io/projected/20ce728c-657e-4255-82c8-a3c4aa7414ab-kube-api-access-5km6d\") pod \"openstack-operator-index-9gtns\" (UID: \"20ce728c-657e-4255-82c8-a3c4aa7414ab\") " 
pod="openstack-operators/openstack-operator-index-9gtns" Nov 29 07:21:23 crc kubenswrapper[4731]: I1129 07:21:23.818718 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5km6d\" (UniqueName: \"kubernetes.io/projected/20ce728c-657e-4255-82c8-a3c4aa7414ab-kube-api-access-5km6d\") pod \"openstack-operator-index-9gtns\" (UID: \"20ce728c-657e-4255-82c8-a3c4aa7414ab\") " pod="openstack-operators/openstack-operator-index-9gtns" Nov 29 07:21:23 crc kubenswrapper[4731]: I1129 07:21:23.827016 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-bspvf" Nov 29 07:21:23 crc kubenswrapper[4731]: I1129 07:21:23.847262 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5km6d\" (UniqueName: \"kubernetes.io/projected/20ce728c-657e-4255-82c8-a3c4aa7414ab-kube-api-access-5km6d\") pod \"openstack-operator-index-9gtns\" (UID: \"20ce728c-657e-4255-82c8-a3c4aa7414ab\") " pod="openstack-operators/openstack-operator-index-9gtns" Nov 29 07:21:23 crc kubenswrapper[4731]: I1129 07:21:23.870447 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-bspvf" Nov 29 07:21:23 crc kubenswrapper[4731]: I1129 07:21:23.962380 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-9gtns" Nov 29 07:21:24 crc kubenswrapper[4731]: I1129 07:21:24.188177 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-9gtns"] Nov 29 07:21:25 crc kubenswrapper[4731]: I1129 07:21:25.193134 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-9gtns" event={"ID":"20ce728c-657e-4255-82c8-a3c4aa7414ab","Type":"ContainerStarted","Data":"d23e0740dd3d5ff256d4d010c0b7bdcdb9c2483e29e3a06c66fcb54c4e5f0946"} Nov 29 07:21:27 crc kubenswrapper[4731]: I1129 07:21:27.206406 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-9gtns"] Nov 29 07:21:27 crc kubenswrapper[4731]: I1129 07:21:27.819701 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-djhck"] Nov 29 07:21:27 crc kubenswrapper[4731]: I1129 07:21:27.820891 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-djhck" Nov 29 07:21:27 crc kubenswrapper[4731]: I1129 07:21:27.832244 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-djhck"] Nov 29 07:21:27 crc kubenswrapper[4731]: I1129 07:21:27.878450 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbdlg\" (UniqueName: \"kubernetes.io/projected/48e66f83-bb57-46d9-89a6-cba0ad5e5fc4-kube-api-access-jbdlg\") pod \"openstack-operator-index-djhck\" (UID: \"48e66f83-bb57-46d9-89a6-cba0ad5e5fc4\") " pod="openstack-operators/openstack-operator-index-djhck" Nov 29 07:21:27 crc kubenswrapper[4731]: I1129 07:21:27.979920 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbdlg\" (UniqueName: \"kubernetes.io/projected/48e66f83-bb57-46d9-89a6-cba0ad5e5fc4-kube-api-access-jbdlg\") pod \"openstack-operator-index-djhck\" (UID: \"48e66f83-bb57-46d9-89a6-cba0ad5e5fc4\") " pod="openstack-operators/openstack-operator-index-djhck" Nov 29 07:21:28 crc kubenswrapper[4731]: I1129 07:21:28.006025 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbdlg\" (UniqueName: \"kubernetes.io/projected/48e66f83-bb57-46d9-89a6-cba0ad5e5fc4-kube-api-access-jbdlg\") pod \"openstack-operator-index-djhck\" (UID: \"48e66f83-bb57-46d9-89a6-cba0ad5e5fc4\") " pod="openstack-operators/openstack-operator-index-djhck" Nov 29 07:21:28 crc kubenswrapper[4731]: I1129 07:21:28.141419 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-djhck" Nov 29 07:21:28 crc kubenswrapper[4731]: I1129 07:21:28.218294 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-9gtns" event={"ID":"20ce728c-657e-4255-82c8-a3c4aa7414ab","Type":"ContainerStarted","Data":"ed74224ee728c80fa34fac14b29f32d29f91555601efa958e09e2f9e6b6ef2ac"} Nov 29 07:21:28 crc kubenswrapper[4731]: I1129 07:21:28.218460 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-9gtns" podUID="20ce728c-657e-4255-82c8-a3c4aa7414ab" containerName="registry-server" containerID="cri-o://ed74224ee728c80fa34fac14b29f32d29f91555601efa958e09e2f9e6b6ef2ac" gracePeriod=2 Nov 29 07:21:28 crc kubenswrapper[4731]: I1129 07:21:28.244695 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-9gtns" podStartSLOduration=2.113341795 podStartE2EDuration="5.244663909s" podCreationTimestamp="2025-11-29 07:21:23 +0000 UTC" firstStartedPulling="2025-11-29 07:21:24.203843298 +0000 UTC m=+923.094204401" lastFinishedPulling="2025-11-29 07:21:27.335165402 +0000 UTC m=+926.225526515" observedRunningTime="2025-11-29 07:21:28.24436179 +0000 UTC m=+927.134722893" watchObservedRunningTime="2025-11-29 07:21:28.244663909 +0000 UTC m=+927.135025012" Nov 29 07:21:28 crc kubenswrapper[4731]: I1129 07:21:28.440698 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-djhck"] Nov 29 07:21:28 crc kubenswrapper[4731]: W1129 07:21:28.442439 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48e66f83_bb57_46d9_89a6_cba0ad5e5fc4.slice/crio-0a5d0b92fc7f3847d530db348088639fbbf7933ca595d5dde194c79dd14e9fea WatchSource:0}: Error finding container 
0a5d0b92fc7f3847d530db348088639fbbf7933ca595d5dde194c79dd14e9fea: Status 404 returned error can't find the container with id 0a5d0b92fc7f3847d530db348088639fbbf7933ca595d5dde194c79dd14e9fea Nov 29 07:21:28 crc kubenswrapper[4731]: I1129 07:21:28.987479 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-f8648f98b-6cjn7" Nov 29 07:21:29 crc kubenswrapper[4731]: I1129 07:21:29.094776 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-9gtns" Nov 29 07:21:29 crc kubenswrapper[4731]: I1129 07:21:29.104619 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5km6d\" (UniqueName: \"kubernetes.io/projected/20ce728c-657e-4255-82c8-a3c4aa7414ab-kube-api-access-5km6d\") pod \"20ce728c-657e-4255-82c8-a3c4aa7414ab\" (UID: \"20ce728c-657e-4255-82c8-a3c4aa7414ab\") " Nov 29 07:21:29 crc kubenswrapper[4731]: I1129 07:21:29.111451 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ce728c-657e-4255-82c8-a3c4aa7414ab-kube-api-access-5km6d" (OuterVolumeSpecName: "kube-api-access-5km6d") pod "20ce728c-657e-4255-82c8-a3c4aa7414ab" (UID: "20ce728c-657e-4255-82c8-a3c4aa7414ab"). InnerVolumeSpecName "kube-api-access-5km6d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:21:29 crc kubenswrapper[4731]: I1129 07:21:29.207130 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5km6d\" (UniqueName: \"kubernetes.io/projected/20ce728c-657e-4255-82c8-a3c4aa7414ab-kube-api-access-5km6d\") on node \"crc\" DevicePath \"\"" Nov 29 07:21:29 crc kubenswrapper[4731]: I1129 07:21:29.231764 4731 generic.go:334] "Generic (PLEG): container finished" podID="20ce728c-657e-4255-82c8-a3c4aa7414ab" containerID="ed74224ee728c80fa34fac14b29f32d29f91555601efa958e09e2f9e6b6ef2ac" exitCode=0 Nov 29 07:21:29 crc kubenswrapper[4731]: I1129 07:21:29.231843 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-9gtns" Nov 29 07:21:29 crc kubenswrapper[4731]: I1129 07:21:29.231878 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-9gtns" event={"ID":"20ce728c-657e-4255-82c8-a3c4aa7414ab","Type":"ContainerDied","Data":"ed74224ee728c80fa34fac14b29f32d29f91555601efa958e09e2f9e6b6ef2ac"} Nov 29 07:21:29 crc kubenswrapper[4731]: I1129 07:21:29.231940 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-9gtns" event={"ID":"20ce728c-657e-4255-82c8-a3c4aa7414ab","Type":"ContainerDied","Data":"d23e0740dd3d5ff256d4d010c0b7bdcdb9c2483e29e3a06c66fcb54c4e5f0946"} Nov 29 07:21:29 crc kubenswrapper[4731]: I1129 07:21:29.231976 4731 scope.go:117] "RemoveContainer" containerID="ed74224ee728c80fa34fac14b29f32d29f91555601efa958e09e2f9e6b6ef2ac" Nov 29 07:21:29 crc kubenswrapper[4731]: I1129 07:21:29.234015 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-djhck" event={"ID":"48e66f83-bb57-46d9-89a6-cba0ad5e5fc4","Type":"ContainerStarted","Data":"5fb9d12e6763172a30638fcf41b72e927a9918e0caa492f04bc6fc756e872701"} Nov 29 07:21:29 crc kubenswrapper[4731]: I1129 
07:21:29.234127 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-djhck" event={"ID":"48e66f83-bb57-46d9-89a6-cba0ad5e5fc4","Type":"ContainerStarted","Data":"0a5d0b92fc7f3847d530db348088639fbbf7933ca595d5dde194c79dd14e9fea"} Nov 29 07:21:29 crc kubenswrapper[4731]: I1129 07:21:29.253908 4731 scope.go:117] "RemoveContainer" containerID="ed74224ee728c80fa34fac14b29f32d29f91555601efa958e09e2f9e6b6ef2ac" Nov 29 07:21:29 crc kubenswrapper[4731]: E1129 07:21:29.256445 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed74224ee728c80fa34fac14b29f32d29f91555601efa958e09e2f9e6b6ef2ac\": container with ID starting with ed74224ee728c80fa34fac14b29f32d29f91555601efa958e09e2f9e6b6ef2ac not found: ID does not exist" containerID="ed74224ee728c80fa34fac14b29f32d29f91555601efa958e09e2f9e6b6ef2ac" Nov 29 07:21:29 crc kubenswrapper[4731]: I1129 07:21:29.256520 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed74224ee728c80fa34fac14b29f32d29f91555601efa958e09e2f9e6b6ef2ac"} err="failed to get container status \"ed74224ee728c80fa34fac14b29f32d29f91555601efa958e09e2f9e6b6ef2ac\": rpc error: code = NotFound desc = could not find container \"ed74224ee728c80fa34fac14b29f32d29f91555601efa958e09e2f9e6b6ef2ac\": container with ID starting with ed74224ee728c80fa34fac14b29f32d29f91555601efa958e09e2f9e6b6ef2ac not found: ID does not exist" Nov 29 07:21:29 crc kubenswrapper[4731]: I1129 07:21:29.260909 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-djhck" podStartSLOduration=1.9392953529999999 podStartE2EDuration="2.260876877s" podCreationTimestamp="2025-11-29 07:21:27 +0000 UTC" firstStartedPulling="2025-11-29 07:21:28.448337096 +0000 UTC m=+927.338698199" lastFinishedPulling="2025-11-29 07:21:28.76991862 +0000 UTC m=+927.660279723" 
observedRunningTime="2025-11-29 07:21:29.253419809 +0000 UTC m=+928.143780922" watchObservedRunningTime="2025-11-29 07:21:29.260876877 +0000 UTC m=+928.151237980" Nov 29 07:21:29 crc kubenswrapper[4731]: I1129 07:21:29.272226 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-9gtns"] Nov 29 07:21:29 crc kubenswrapper[4731]: I1129 07:21:29.277108 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-9gtns"] Nov 29 07:21:29 crc kubenswrapper[4731]: I1129 07:21:29.444194 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-5hdrf" Nov 29 07:21:29 crc kubenswrapper[4731]: I1129 07:21:29.821084 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20ce728c-657e-4255-82c8-a3c4aa7414ab" path="/var/lib/kubelet/pods/20ce728c-657e-4255-82c8-a3c4aa7414ab/volumes" Nov 29 07:21:33 crc kubenswrapper[4731]: I1129 07:21:33.002438 4731 patch_prober.go:28] interesting pod/machine-config-daemon-rscr8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:21:33 crc kubenswrapper[4731]: I1129 07:21:33.003006 4731 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:21:34 crc kubenswrapper[4731]: I1129 07:21:34.018275 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-z9p4l"] Nov 29 07:21:34 crc kubenswrapper[4731]: E1129 07:21:34.018621 4731 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="20ce728c-657e-4255-82c8-a3c4aa7414ab" containerName="registry-server" Nov 29 07:21:34 crc kubenswrapper[4731]: I1129 07:21:34.018635 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="20ce728c-657e-4255-82c8-a3c4aa7414ab" containerName="registry-server" Nov 29 07:21:34 crc kubenswrapper[4731]: I1129 07:21:34.018772 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="20ce728c-657e-4255-82c8-a3c4aa7414ab" containerName="registry-server" Nov 29 07:21:34 crc kubenswrapper[4731]: I1129 07:21:34.019738 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z9p4l" Nov 29 07:21:34 crc kubenswrapper[4731]: I1129 07:21:34.028341 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-z9p4l"] Nov 29 07:21:34 crc kubenswrapper[4731]: I1129 07:21:34.082454 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95bhg\" (UniqueName: \"kubernetes.io/projected/c61efc90-710c-4f2e-8bcb-dd5a9d9e9b42-kube-api-access-95bhg\") pod \"redhat-marketplace-z9p4l\" (UID: \"c61efc90-710c-4f2e-8bcb-dd5a9d9e9b42\") " pod="openshift-marketplace/redhat-marketplace-z9p4l" Nov 29 07:21:34 crc kubenswrapper[4731]: I1129 07:21:34.082675 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c61efc90-710c-4f2e-8bcb-dd5a9d9e9b42-catalog-content\") pod \"redhat-marketplace-z9p4l\" (UID: \"c61efc90-710c-4f2e-8bcb-dd5a9d9e9b42\") " pod="openshift-marketplace/redhat-marketplace-z9p4l" Nov 29 07:21:34 crc kubenswrapper[4731]: I1129 07:21:34.082723 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c61efc90-710c-4f2e-8bcb-dd5a9d9e9b42-utilities\") pod \"redhat-marketplace-z9p4l\" (UID: 
\"c61efc90-710c-4f2e-8bcb-dd5a9d9e9b42\") " pod="openshift-marketplace/redhat-marketplace-z9p4l" Nov 29 07:21:34 crc kubenswrapper[4731]: I1129 07:21:34.183833 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c61efc90-710c-4f2e-8bcb-dd5a9d9e9b42-catalog-content\") pod \"redhat-marketplace-z9p4l\" (UID: \"c61efc90-710c-4f2e-8bcb-dd5a9d9e9b42\") " pod="openshift-marketplace/redhat-marketplace-z9p4l" Nov 29 07:21:34 crc kubenswrapper[4731]: I1129 07:21:34.183896 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c61efc90-710c-4f2e-8bcb-dd5a9d9e9b42-utilities\") pod \"redhat-marketplace-z9p4l\" (UID: \"c61efc90-710c-4f2e-8bcb-dd5a9d9e9b42\") " pod="openshift-marketplace/redhat-marketplace-z9p4l" Nov 29 07:21:34 crc kubenswrapper[4731]: I1129 07:21:34.183946 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95bhg\" (UniqueName: \"kubernetes.io/projected/c61efc90-710c-4f2e-8bcb-dd5a9d9e9b42-kube-api-access-95bhg\") pod \"redhat-marketplace-z9p4l\" (UID: \"c61efc90-710c-4f2e-8bcb-dd5a9d9e9b42\") " pod="openshift-marketplace/redhat-marketplace-z9p4l" Nov 29 07:21:34 crc kubenswrapper[4731]: I1129 07:21:34.184763 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c61efc90-710c-4f2e-8bcb-dd5a9d9e9b42-utilities\") pod \"redhat-marketplace-z9p4l\" (UID: \"c61efc90-710c-4f2e-8bcb-dd5a9d9e9b42\") " pod="openshift-marketplace/redhat-marketplace-z9p4l" Nov 29 07:21:34 crc kubenswrapper[4731]: I1129 07:21:34.184803 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c61efc90-710c-4f2e-8bcb-dd5a9d9e9b42-catalog-content\") pod \"redhat-marketplace-z9p4l\" (UID: \"c61efc90-710c-4f2e-8bcb-dd5a9d9e9b42\") " 
pod="openshift-marketplace/redhat-marketplace-z9p4l" Nov 29 07:21:34 crc kubenswrapper[4731]: I1129 07:21:34.212738 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95bhg\" (UniqueName: \"kubernetes.io/projected/c61efc90-710c-4f2e-8bcb-dd5a9d9e9b42-kube-api-access-95bhg\") pod \"redhat-marketplace-z9p4l\" (UID: \"c61efc90-710c-4f2e-8bcb-dd5a9d9e9b42\") " pod="openshift-marketplace/redhat-marketplace-z9p4l" Nov 29 07:21:34 crc kubenswrapper[4731]: I1129 07:21:34.338998 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z9p4l" Nov 29 07:21:34 crc kubenswrapper[4731]: I1129 07:21:34.592434 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-z9p4l"] Nov 29 07:21:35 crc kubenswrapper[4731]: I1129 07:21:35.278880 4731 generic.go:334] "Generic (PLEG): container finished" podID="c61efc90-710c-4f2e-8bcb-dd5a9d9e9b42" containerID="d2a1e33e4e5a6bde8bc902c0e9b7aa0b8d57759559f04e76d91706ae8d02b04d" exitCode=0 Nov 29 07:21:35 crc kubenswrapper[4731]: I1129 07:21:35.279003 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z9p4l" event={"ID":"c61efc90-710c-4f2e-8bcb-dd5a9d9e9b42","Type":"ContainerDied","Data":"d2a1e33e4e5a6bde8bc902c0e9b7aa0b8d57759559f04e76d91706ae8d02b04d"} Nov 29 07:21:35 crc kubenswrapper[4731]: I1129 07:21:35.279387 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z9p4l" event={"ID":"c61efc90-710c-4f2e-8bcb-dd5a9d9e9b42","Type":"ContainerStarted","Data":"a751ab4678e0433c4762e99212ac30a0d08094436a9d4fefbaaa8166d75e9bf1"} Nov 29 07:21:36 crc kubenswrapper[4731]: I1129 07:21:36.287752 4731 generic.go:334] "Generic (PLEG): container finished" podID="c61efc90-710c-4f2e-8bcb-dd5a9d9e9b42" containerID="63e69d16e0fec200a2ecf3aeced1144164c0bb6186f9ae30c316ba6f5c033d56" exitCode=0 Nov 29 07:21:36 crc 
kubenswrapper[4731]: I1129 07:21:36.287799 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z9p4l" event={"ID":"c61efc90-710c-4f2e-8bcb-dd5a9d9e9b42","Type":"ContainerDied","Data":"63e69d16e0fec200a2ecf3aeced1144164c0bb6186f9ae30c316ba6f5c033d56"} Nov 29 07:21:37 crc kubenswrapper[4731]: I1129 07:21:37.299936 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z9p4l" event={"ID":"c61efc90-710c-4f2e-8bcb-dd5a9d9e9b42","Type":"ContainerStarted","Data":"636369babbb27ad97c29d87d6f7fd8d561989b0f89aa4b14f9529ca335f49e30"} Nov 29 07:21:37 crc kubenswrapper[4731]: I1129 07:21:37.328176 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-z9p4l" podStartSLOduration=1.903121805 podStartE2EDuration="3.328152589s" podCreationTimestamp="2025-11-29 07:21:34 +0000 UTC" firstStartedPulling="2025-11-29 07:21:35.281757454 +0000 UTC m=+934.172118587" lastFinishedPulling="2025-11-29 07:21:36.706788258 +0000 UTC m=+935.597149371" observedRunningTime="2025-11-29 07:21:37.326770839 +0000 UTC m=+936.217131952" watchObservedRunningTime="2025-11-29 07:21:37.328152589 +0000 UTC m=+936.218513692" Nov 29 07:21:38 crc kubenswrapper[4731]: I1129 07:21:38.142314 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-djhck" Nov 29 07:21:38 crc kubenswrapper[4731]: I1129 07:21:38.142392 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-djhck" Nov 29 07:21:38 crc kubenswrapper[4731]: I1129 07:21:38.180030 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-djhck" Nov 29 07:21:38 crc kubenswrapper[4731]: I1129 07:21:38.332964 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/openstack-operator-index-djhck" Nov 29 07:21:38 crc kubenswrapper[4731]: I1129 07:21:38.833935 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-bspvf" Nov 29 07:21:40 crc kubenswrapper[4731]: I1129 07:21:40.468289 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/bf526de097f3fdef33663a774437efdeb5911beea874985264a060f916httzp"] Nov 29 07:21:40 crc kubenswrapper[4731]: I1129 07:21:40.469769 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/bf526de097f3fdef33663a774437efdeb5911beea874985264a060f916httzp" Nov 29 07:21:40 crc kubenswrapper[4731]: I1129 07:21:40.471664 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-9g7sd" Nov 29 07:21:40 crc kubenswrapper[4731]: I1129 07:21:40.480166 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/bf526de097f3fdef33663a774437efdeb5911beea874985264a060f916httzp"] Nov 29 07:21:40 crc kubenswrapper[4731]: I1129 07:21:40.494540 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/24b6e41d-3fa1-413b-b3f6-8897188e619c-bundle\") pod \"bf526de097f3fdef33663a774437efdeb5911beea874985264a060f916httzp\" (UID: \"24b6e41d-3fa1-413b-b3f6-8897188e619c\") " pod="openstack-operators/bf526de097f3fdef33663a774437efdeb5911beea874985264a060f916httzp" Nov 29 07:21:40 crc kubenswrapper[4731]: I1129 07:21:40.494690 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/24b6e41d-3fa1-413b-b3f6-8897188e619c-util\") pod \"bf526de097f3fdef33663a774437efdeb5911beea874985264a060f916httzp\" (UID: \"24b6e41d-3fa1-413b-b3f6-8897188e619c\") " pod="openstack-operators/bf526de097f3fdef33663a774437efdeb5911beea874985264a060f916httzp" Nov 29 
07:21:40 crc kubenswrapper[4731]: I1129 07:21:40.494728 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvjbm\" (UniqueName: \"kubernetes.io/projected/24b6e41d-3fa1-413b-b3f6-8897188e619c-kube-api-access-xvjbm\") pod \"bf526de097f3fdef33663a774437efdeb5911beea874985264a060f916httzp\" (UID: \"24b6e41d-3fa1-413b-b3f6-8897188e619c\") " pod="openstack-operators/bf526de097f3fdef33663a774437efdeb5911beea874985264a060f916httzp" Nov 29 07:21:40 crc kubenswrapper[4731]: I1129 07:21:40.595927 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/24b6e41d-3fa1-413b-b3f6-8897188e619c-util\") pod \"bf526de097f3fdef33663a774437efdeb5911beea874985264a060f916httzp\" (UID: \"24b6e41d-3fa1-413b-b3f6-8897188e619c\") " pod="openstack-operators/bf526de097f3fdef33663a774437efdeb5911beea874985264a060f916httzp" Nov 29 07:21:40 crc kubenswrapper[4731]: I1129 07:21:40.595993 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvjbm\" (UniqueName: \"kubernetes.io/projected/24b6e41d-3fa1-413b-b3f6-8897188e619c-kube-api-access-xvjbm\") pod \"bf526de097f3fdef33663a774437efdeb5911beea874985264a060f916httzp\" (UID: \"24b6e41d-3fa1-413b-b3f6-8897188e619c\") " pod="openstack-operators/bf526de097f3fdef33663a774437efdeb5911beea874985264a060f916httzp" Nov 29 07:21:40 crc kubenswrapper[4731]: I1129 07:21:40.596070 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/24b6e41d-3fa1-413b-b3f6-8897188e619c-bundle\") pod \"bf526de097f3fdef33663a774437efdeb5911beea874985264a060f916httzp\" (UID: \"24b6e41d-3fa1-413b-b3f6-8897188e619c\") " pod="openstack-operators/bf526de097f3fdef33663a774437efdeb5911beea874985264a060f916httzp" Nov 29 07:21:40 crc kubenswrapper[4731]: I1129 07:21:40.596910 4731 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/24b6e41d-3fa1-413b-b3f6-8897188e619c-util\") pod \"bf526de097f3fdef33663a774437efdeb5911beea874985264a060f916httzp\" (UID: \"24b6e41d-3fa1-413b-b3f6-8897188e619c\") " pod="openstack-operators/bf526de097f3fdef33663a774437efdeb5911beea874985264a060f916httzp" Nov 29 07:21:40 crc kubenswrapper[4731]: I1129 07:21:40.597337 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/24b6e41d-3fa1-413b-b3f6-8897188e619c-bundle\") pod \"bf526de097f3fdef33663a774437efdeb5911beea874985264a060f916httzp\" (UID: \"24b6e41d-3fa1-413b-b3f6-8897188e619c\") " pod="openstack-operators/bf526de097f3fdef33663a774437efdeb5911beea874985264a060f916httzp" Nov 29 07:21:40 crc kubenswrapper[4731]: I1129 07:21:40.631554 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvjbm\" (UniqueName: \"kubernetes.io/projected/24b6e41d-3fa1-413b-b3f6-8897188e619c-kube-api-access-xvjbm\") pod \"bf526de097f3fdef33663a774437efdeb5911beea874985264a060f916httzp\" (UID: \"24b6e41d-3fa1-413b-b3f6-8897188e619c\") " pod="openstack-operators/bf526de097f3fdef33663a774437efdeb5911beea874985264a060f916httzp" Nov 29 07:21:40 crc kubenswrapper[4731]: I1129 07:21:40.788988 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/bf526de097f3fdef33663a774437efdeb5911beea874985264a060f916httzp" Nov 29 07:21:41 crc kubenswrapper[4731]: I1129 07:21:41.220411 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/bf526de097f3fdef33663a774437efdeb5911beea874985264a060f916httzp"] Nov 29 07:21:41 crc kubenswrapper[4731]: W1129 07:21:41.226933 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24b6e41d_3fa1_413b_b3f6_8897188e619c.slice/crio-939b9fa0262d34cd9fa37b4701611d1c3c29e640b2eeff624b736c84d866e256 WatchSource:0}: Error finding container 939b9fa0262d34cd9fa37b4701611d1c3c29e640b2eeff624b736c84d866e256: Status 404 returned error can't find the container with id 939b9fa0262d34cd9fa37b4701611d1c3c29e640b2eeff624b736c84d866e256 Nov 29 07:21:41 crc kubenswrapper[4731]: I1129 07:21:41.331127 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/bf526de097f3fdef33663a774437efdeb5911beea874985264a060f916httzp" event={"ID":"24b6e41d-3fa1-413b-b3f6-8897188e619c","Type":"ContainerStarted","Data":"939b9fa0262d34cd9fa37b4701611d1c3c29e640b2eeff624b736c84d866e256"} Nov 29 07:21:42 crc kubenswrapper[4731]: I1129 07:21:42.341391 4731 generic.go:334] "Generic (PLEG): container finished" podID="24b6e41d-3fa1-413b-b3f6-8897188e619c" containerID="2e300d1cc8fb6c054489062c2e2b7461dfafeac9d2576342ba6e38519a916f72" exitCode=0 Nov 29 07:21:42 crc kubenswrapper[4731]: I1129 07:21:42.341458 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/bf526de097f3fdef33663a774437efdeb5911beea874985264a060f916httzp" event={"ID":"24b6e41d-3fa1-413b-b3f6-8897188e619c","Type":"ContainerDied","Data":"2e300d1cc8fb6c054489062c2e2b7461dfafeac9d2576342ba6e38519a916f72"} Nov 29 07:21:43 crc kubenswrapper[4731]: I1129 07:21:43.351233 4731 generic.go:334] "Generic (PLEG): container finished" 
podID="24b6e41d-3fa1-413b-b3f6-8897188e619c" containerID="48e94ee1e4306cf41d0bc89b203e8588d9cd959e92c9fc4d1ffa3b974227cfec" exitCode=0 Nov 29 07:21:43 crc kubenswrapper[4731]: I1129 07:21:43.351295 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/bf526de097f3fdef33663a774437efdeb5911beea874985264a060f916httzp" event={"ID":"24b6e41d-3fa1-413b-b3f6-8897188e619c","Type":"ContainerDied","Data":"48e94ee1e4306cf41d0bc89b203e8588d9cd959e92c9fc4d1ffa3b974227cfec"} Nov 29 07:21:44 crc kubenswrapper[4731]: I1129 07:21:44.339492 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-z9p4l" Nov 29 07:21:44 crc kubenswrapper[4731]: I1129 07:21:44.340001 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-z9p4l" Nov 29 07:21:44 crc kubenswrapper[4731]: I1129 07:21:44.360256 4731 generic.go:334] "Generic (PLEG): container finished" podID="24b6e41d-3fa1-413b-b3f6-8897188e619c" containerID="c781950ea8c6ddd01b6530000f520a277908701cf8a640ba1a97278432fa6ad2" exitCode=0 Nov 29 07:21:44 crc kubenswrapper[4731]: I1129 07:21:44.360322 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/bf526de097f3fdef33663a774437efdeb5911beea874985264a060f916httzp" event={"ID":"24b6e41d-3fa1-413b-b3f6-8897188e619c","Type":"ContainerDied","Data":"c781950ea8c6ddd01b6530000f520a277908701cf8a640ba1a97278432fa6ad2"} Nov 29 07:21:44 crc kubenswrapper[4731]: I1129 07:21:44.385541 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-z9p4l" Nov 29 07:21:44 crc kubenswrapper[4731]: I1129 07:21:44.434428 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-z9p4l" Nov 29 07:21:45 crc kubenswrapper[4731]: I1129 07:21:45.208852 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-z9p4l"] Nov 29 07:21:45 crc kubenswrapper[4731]: I1129 07:21:45.621778 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/bf526de097f3fdef33663a774437efdeb5911beea874985264a060f916httzp" Nov 29 07:21:45 crc kubenswrapper[4731]: I1129 07:21:45.793188 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/24b6e41d-3fa1-413b-b3f6-8897188e619c-bundle\") pod \"24b6e41d-3fa1-413b-b3f6-8897188e619c\" (UID: \"24b6e41d-3fa1-413b-b3f6-8897188e619c\") " Nov 29 07:21:45 crc kubenswrapper[4731]: I1129 07:21:45.793405 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/24b6e41d-3fa1-413b-b3f6-8897188e619c-util\") pod \"24b6e41d-3fa1-413b-b3f6-8897188e619c\" (UID: \"24b6e41d-3fa1-413b-b3f6-8897188e619c\") " Nov 29 07:21:45 crc kubenswrapper[4731]: I1129 07:21:45.793457 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xvjbm\" (UniqueName: \"kubernetes.io/projected/24b6e41d-3fa1-413b-b3f6-8897188e619c-kube-api-access-xvjbm\") pod \"24b6e41d-3fa1-413b-b3f6-8897188e619c\" (UID: \"24b6e41d-3fa1-413b-b3f6-8897188e619c\") " Nov 29 07:21:45 crc kubenswrapper[4731]: I1129 07:21:45.795543 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24b6e41d-3fa1-413b-b3f6-8897188e619c-bundle" (OuterVolumeSpecName: "bundle") pod "24b6e41d-3fa1-413b-b3f6-8897188e619c" (UID: "24b6e41d-3fa1-413b-b3f6-8897188e619c"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:21:45 crc kubenswrapper[4731]: I1129 07:21:45.801741 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24b6e41d-3fa1-413b-b3f6-8897188e619c-kube-api-access-xvjbm" (OuterVolumeSpecName: "kube-api-access-xvjbm") pod "24b6e41d-3fa1-413b-b3f6-8897188e619c" (UID: "24b6e41d-3fa1-413b-b3f6-8897188e619c"). InnerVolumeSpecName "kube-api-access-xvjbm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:21:45 crc kubenswrapper[4731]: I1129 07:21:45.808920 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24b6e41d-3fa1-413b-b3f6-8897188e619c-util" (OuterVolumeSpecName: "util") pod "24b6e41d-3fa1-413b-b3f6-8897188e619c" (UID: "24b6e41d-3fa1-413b-b3f6-8897188e619c"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:21:45 crc kubenswrapper[4731]: I1129 07:21:45.896358 4731 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/24b6e41d-3fa1-413b-b3f6-8897188e619c-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:21:45 crc kubenswrapper[4731]: I1129 07:21:45.896430 4731 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/24b6e41d-3fa1-413b-b3f6-8897188e619c-util\") on node \"crc\" DevicePath \"\"" Nov 29 07:21:45 crc kubenswrapper[4731]: I1129 07:21:45.896451 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xvjbm\" (UniqueName: \"kubernetes.io/projected/24b6e41d-3fa1-413b-b3f6-8897188e619c-kube-api-access-xvjbm\") on node \"crc\" DevicePath \"\"" Nov 29 07:21:46 crc kubenswrapper[4731]: I1129 07:21:46.377120 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/bf526de097f3fdef33663a774437efdeb5911beea874985264a060f916httzp" 
event={"ID":"24b6e41d-3fa1-413b-b3f6-8897188e619c","Type":"ContainerDied","Data":"939b9fa0262d34cd9fa37b4701611d1c3c29e640b2eeff624b736c84d866e256"} Nov 29 07:21:46 crc kubenswrapper[4731]: I1129 07:21:46.377187 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="939b9fa0262d34cd9fa37b4701611d1c3c29e640b2eeff624b736c84d866e256" Nov 29 07:21:46 crc kubenswrapper[4731]: I1129 07:21:46.377148 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/bf526de097f3fdef33663a774437efdeb5911beea874985264a060f916httzp" Nov 29 07:21:46 crc kubenswrapper[4731]: I1129 07:21:46.377287 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-z9p4l" podUID="c61efc90-710c-4f2e-8bcb-dd5a9d9e9b42" containerName="registry-server" containerID="cri-o://636369babbb27ad97c29d87d6f7fd8d561989b0f89aa4b14f9529ca335f49e30" gracePeriod=2 Nov 29 07:21:46 crc kubenswrapper[4731]: I1129 07:21:46.775268 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z9p4l" Nov 29 07:21:46 crc kubenswrapper[4731]: I1129 07:21:46.910090 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c61efc90-710c-4f2e-8bcb-dd5a9d9e9b42-catalog-content\") pod \"c61efc90-710c-4f2e-8bcb-dd5a9d9e9b42\" (UID: \"c61efc90-710c-4f2e-8bcb-dd5a9d9e9b42\") " Nov 29 07:21:46 crc kubenswrapper[4731]: I1129 07:21:46.910233 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c61efc90-710c-4f2e-8bcb-dd5a9d9e9b42-utilities\") pod \"c61efc90-710c-4f2e-8bcb-dd5a9d9e9b42\" (UID: \"c61efc90-710c-4f2e-8bcb-dd5a9d9e9b42\") " Nov 29 07:21:46 crc kubenswrapper[4731]: I1129 07:21:46.910290 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-95bhg\" (UniqueName: \"kubernetes.io/projected/c61efc90-710c-4f2e-8bcb-dd5a9d9e9b42-kube-api-access-95bhg\") pod \"c61efc90-710c-4f2e-8bcb-dd5a9d9e9b42\" (UID: \"c61efc90-710c-4f2e-8bcb-dd5a9d9e9b42\") " Nov 29 07:21:46 crc kubenswrapper[4731]: I1129 07:21:46.911340 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c61efc90-710c-4f2e-8bcb-dd5a9d9e9b42-utilities" (OuterVolumeSpecName: "utilities") pod "c61efc90-710c-4f2e-8bcb-dd5a9d9e9b42" (UID: "c61efc90-710c-4f2e-8bcb-dd5a9d9e9b42"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:21:46 crc kubenswrapper[4731]: I1129 07:21:46.922895 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c61efc90-710c-4f2e-8bcb-dd5a9d9e9b42-kube-api-access-95bhg" (OuterVolumeSpecName: "kube-api-access-95bhg") pod "c61efc90-710c-4f2e-8bcb-dd5a9d9e9b42" (UID: "c61efc90-710c-4f2e-8bcb-dd5a9d9e9b42"). InnerVolumeSpecName "kube-api-access-95bhg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:21:46 crc kubenswrapper[4731]: I1129 07:21:46.930822 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c61efc90-710c-4f2e-8bcb-dd5a9d9e9b42-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c61efc90-710c-4f2e-8bcb-dd5a9d9e9b42" (UID: "c61efc90-710c-4f2e-8bcb-dd5a9d9e9b42"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:21:47 crc kubenswrapper[4731]: I1129 07:21:47.011593 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-95bhg\" (UniqueName: \"kubernetes.io/projected/c61efc90-710c-4f2e-8bcb-dd5a9d9e9b42-kube-api-access-95bhg\") on node \"crc\" DevicePath \"\"" Nov 29 07:21:47 crc kubenswrapper[4731]: I1129 07:21:47.011646 4731 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c61efc90-710c-4f2e-8bcb-dd5a9d9e9b42-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 07:21:47 crc kubenswrapper[4731]: I1129 07:21:47.011659 4731 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c61efc90-710c-4f2e-8bcb-dd5a9d9e9b42-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 07:21:47 crc kubenswrapper[4731]: I1129 07:21:47.390171 4731 generic.go:334] "Generic (PLEG): container finished" podID="c61efc90-710c-4f2e-8bcb-dd5a9d9e9b42" containerID="636369babbb27ad97c29d87d6f7fd8d561989b0f89aa4b14f9529ca335f49e30" exitCode=0 Nov 29 07:21:47 crc kubenswrapper[4731]: I1129 07:21:47.390339 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z9p4l" event={"ID":"c61efc90-710c-4f2e-8bcb-dd5a9d9e9b42","Type":"ContainerDied","Data":"636369babbb27ad97c29d87d6f7fd8d561989b0f89aa4b14f9529ca335f49e30"} Nov 29 07:21:47 crc kubenswrapper[4731]: I1129 07:21:47.390481 4731 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z9p4l" Nov 29 07:21:47 crc kubenswrapper[4731]: I1129 07:21:47.390703 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z9p4l" event={"ID":"c61efc90-710c-4f2e-8bcb-dd5a9d9e9b42","Type":"ContainerDied","Data":"a751ab4678e0433c4762e99212ac30a0d08094436a9d4fefbaaa8166d75e9bf1"} Nov 29 07:21:47 crc kubenswrapper[4731]: I1129 07:21:47.390888 4731 scope.go:117] "RemoveContainer" containerID="636369babbb27ad97c29d87d6f7fd8d561989b0f89aa4b14f9529ca335f49e30" Nov 29 07:21:47 crc kubenswrapper[4731]: I1129 07:21:47.413745 4731 scope.go:117] "RemoveContainer" containerID="63e69d16e0fec200a2ecf3aeced1144164c0bb6186f9ae30c316ba6f5c033d56" Nov 29 07:21:47 crc kubenswrapper[4731]: I1129 07:21:47.426985 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-z9p4l"] Nov 29 07:21:47 crc kubenswrapper[4731]: I1129 07:21:47.433970 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-z9p4l"] Nov 29 07:21:47 crc kubenswrapper[4731]: I1129 07:21:47.456159 4731 scope.go:117] "RemoveContainer" containerID="d2a1e33e4e5a6bde8bc902c0e9b7aa0b8d57759559f04e76d91706ae8d02b04d" Nov 29 07:21:47 crc kubenswrapper[4731]: I1129 07:21:47.476253 4731 scope.go:117] "RemoveContainer" containerID="636369babbb27ad97c29d87d6f7fd8d561989b0f89aa4b14f9529ca335f49e30" Nov 29 07:21:47 crc kubenswrapper[4731]: E1129 07:21:47.477354 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"636369babbb27ad97c29d87d6f7fd8d561989b0f89aa4b14f9529ca335f49e30\": container with ID starting with 636369babbb27ad97c29d87d6f7fd8d561989b0f89aa4b14f9529ca335f49e30 not found: ID does not exist" containerID="636369babbb27ad97c29d87d6f7fd8d561989b0f89aa4b14f9529ca335f49e30" Nov 29 07:21:47 crc kubenswrapper[4731]: I1129 07:21:47.477445 4731 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"636369babbb27ad97c29d87d6f7fd8d561989b0f89aa4b14f9529ca335f49e30"} err="failed to get container status \"636369babbb27ad97c29d87d6f7fd8d561989b0f89aa4b14f9529ca335f49e30\": rpc error: code = NotFound desc = could not find container \"636369babbb27ad97c29d87d6f7fd8d561989b0f89aa4b14f9529ca335f49e30\": container with ID starting with 636369babbb27ad97c29d87d6f7fd8d561989b0f89aa4b14f9529ca335f49e30 not found: ID does not exist" Nov 29 07:21:47 crc kubenswrapper[4731]: I1129 07:21:47.477491 4731 scope.go:117] "RemoveContainer" containerID="63e69d16e0fec200a2ecf3aeced1144164c0bb6186f9ae30c316ba6f5c033d56" Nov 29 07:21:47 crc kubenswrapper[4731]: E1129 07:21:47.478266 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63e69d16e0fec200a2ecf3aeced1144164c0bb6186f9ae30c316ba6f5c033d56\": container with ID starting with 63e69d16e0fec200a2ecf3aeced1144164c0bb6186f9ae30c316ba6f5c033d56 not found: ID does not exist" containerID="63e69d16e0fec200a2ecf3aeced1144164c0bb6186f9ae30c316ba6f5c033d56" Nov 29 07:21:47 crc kubenswrapper[4731]: I1129 07:21:47.478318 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63e69d16e0fec200a2ecf3aeced1144164c0bb6186f9ae30c316ba6f5c033d56"} err="failed to get container status \"63e69d16e0fec200a2ecf3aeced1144164c0bb6186f9ae30c316ba6f5c033d56\": rpc error: code = NotFound desc = could not find container \"63e69d16e0fec200a2ecf3aeced1144164c0bb6186f9ae30c316ba6f5c033d56\": container with ID starting with 63e69d16e0fec200a2ecf3aeced1144164c0bb6186f9ae30c316ba6f5c033d56 not found: ID does not exist" Nov 29 07:21:47 crc kubenswrapper[4731]: I1129 07:21:47.478356 4731 scope.go:117] "RemoveContainer" containerID="d2a1e33e4e5a6bde8bc902c0e9b7aa0b8d57759559f04e76d91706ae8d02b04d" Nov 29 07:21:47 crc kubenswrapper[4731]: E1129 
07:21:47.480016 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d2a1e33e4e5a6bde8bc902c0e9b7aa0b8d57759559f04e76d91706ae8d02b04d\": container with ID starting with d2a1e33e4e5a6bde8bc902c0e9b7aa0b8d57759559f04e76d91706ae8d02b04d not found: ID does not exist" containerID="d2a1e33e4e5a6bde8bc902c0e9b7aa0b8d57759559f04e76d91706ae8d02b04d" Nov 29 07:21:47 crc kubenswrapper[4731]: I1129 07:21:47.480056 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2a1e33e4e5a6bde8bc902c0e9b7aa0b8d57759559f04e76d91706ae8d02b04d"} err="failed to get container status \"d2a1e33e4e5a6bde8bc902c0e9b7aa0b8d57759559f04e76d91706ae8d02b04d\": rpc error: code = NotFound desc = could not find container \"d2a1e33e4e5a6bde8bc902c0e9b7aa0b8d57759559f04e76d91706ae8d02b04d\": container with ID starting with d2a1e33e4e5a6bde8bc902c0e9b7aa0b8d57759559f04e76d91706ae8d02b04d not found: ID does not exist" Nov 29 07:21:47 crc kubenswrapper[4731]: I1129 07:21:47.817953 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c61efc90-710c-4f2e-8bcb-dd5a9d9e9b42" path="/var/lib/kubelet/pods/c61efc90-710c-4f2e-8bcb-dd5a9d9e9b42/volumes" Nov 29 07:21:49 crc kubenswrapper[4731]: I1129 07:21:49.405682 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-7d6594489c-kzcpd"] Nov 29 07:21:49 crc kubenswrapper[4731]: E1129 07:21:49.406366 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c61efc90-710c-4f2e-8bcb-dd5a9d9e9b42" containerName="extract-content" Nov 29 07:21:49 crc kubenswrapper[4731]: I1129 07:21:49.406385 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="c61efc90-710c-4f2e-8bcb-dd5a9d9e9b42" containerName="extract-content" Nov 29 07:21:49 crc kubenswrapper[4731]: E1129 07:21:49.406397 4731 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="24b6e41d-3fa1-413b-b3f6-8897188e619c" containerName="util" Nov 29 07:21:49 crc kubenswrapper[4731]: I1129 07:21:49.406404 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="24b6e41d-3fa1-413b-b3f6-8897188e619c" containerName="util" Nov 29 07:21:49 crc kubenswrapper[4731]: E1129 07:21:49.406419 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24b6e41d-3fa1-413b-b3f6-8897188e619c" containerName="pull" Nov 29 07:21:49 crc kubenswrapper[4731]: I1129 07:21:49.406428 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="24b6e41d-3fa1-413b-b3f6-8897188e619c" containerName="pull" Nov 29 07:21:49 crc kubenswrapper[4731]: E1129 07:21:49.406445 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c61efc90-710c-4f2e-8bcb-dd5a9d9e9b42" containerName="extract-utilities" Nov 29 07:21:49 crc kubenswrapper[4731]: I1129 07:21:49.406453 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="c61efc90-710c-4f2e-8bcb-dd5a9d9e9b42" containerName="extract-utilities" Nov 29 07:21:49 crc kubenswrapper[4731]: E1129 07:21:49.406463 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c61efc90-710c-4f2e-8bcb-dd5a9d9e9b42" containerName="registry-server" Nov 29 07:21:49 crc kubenswrapper[4731]: I1129 07:21:49.406469 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="c61efc90-710c-4f2e-8bcb-dd5a9d9e9b42" containerName="registry-server" Nov 29 07:21:49 crc kubenswrapper[4731]: E1129 07:21:49.406485 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24b6e41d-3fa1-413b-b3f6-8897188e619c" containerName="extract" Nov 29 07:21:49 crc kubenswrapper[4731]: I1129 07:21:49.406494 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="24b6e41d-3fa1-413b-b3f6-8897188e619c" containerName="extract" Nov 29 07:21:49 crc kubenswrapper[4731]: I1129 07:21:49.406655 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="c61efc90-710c-4f2e-8bcb-dd5a9d9e9b42" 
containerName="registry-server" Nov 29 07:21:49 crc kubenswrapper[4731]: I1129 07:21:49.406671 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="24b6e41d-3fa1-413b-b3f6-8897188e619c" containerName="extract" Nov 29 07:21:49 crc kubenswrapper[4731]: I1129 07:21:49.408979 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-7d6594489c-kzcpd" Nov 29 07:21:49 crc kubenswrapper[4731]: I1129 07:21:49.412636 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-9lwhc" Nov 29 07:21:49 crc kubenswrapper[4731]: I1129 07:21:49.428695 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-7d6594489c-kzcpd"] Nov 29 07:21:49 crc kubenswrapper[4731]: I1129 07:21:49.551424 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5w6h8\" (UniqueName: \"kubernetes.io/projected/ba39c5c8-559c-4ebb-a4bd-6dc55af61842-kube-api-access-5w6h8\") pod \"openstack-operator-controller-operator-7d6594489c-kzcpd\" (UID: \"ba39c5c8-559c-4ebb-a4bd-6dc55af61842\") " pod="openstack-operators/openstack-operator-controller-operator-7d6594489c-kzcpd" Nov 29 07:21:49 crc kubenswrapper[4731]: I1129 07:21:49.653422 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5w6h8\" (UniqueName: \"kubernetes.io/projected/ba39c5c8-559c-4ebb-a4bd-6dc55af61842-kube-api-access-5w6h8\") pod \"openstack-operator-controller-operator-7d6594489c-kzcpd\" (UID: \"ba39c5c8-559c-4ebb-a4bd-6dc55af61842\") " pod="openstack-operators/openstack-operator-controller-operator-7d6594489c-kzcpd" Nov 29 07:21:49 crc kubenswrapper[4731]: I1129 07:21:49.679227 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5w6h8\" (UniqueName: 
\"kubernetes.io/projected/ba39c5c8-559c-4ebb-a4bd-6dc55af61842-kube-api-access-5w6h8\") pod \"openstack-operator-controller-operator-7d6594489c-kzcpd\" (UID: \"ba39c5c8-559c-4ebb-a4bd-6dc55af61842\") " pod="openstack-operators/openstack-operator-controller-operator-7d6594489c-kzcpd" Nov 29 07:21:49 crc kubenswrapper[4731]: I1129 07:21:49.741039 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-7d6594489c-kzcpd" Nov 29 07:21:50 crc kubenswrapper[4731]: I1129 07:21:50.211823 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-7d6594489c-kzcpd"] Nov 29 07:21:50 crc kubenswrapper[4731]: I1129 07:21:50.440972 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-7d6594489c-kzcpd" event={"ID":"ba39c5c8-559c-4ebb-a4bd-6dc55af61842","Type":"ContainerStarted","Data":"7a4c79dfaa6d5429c8d2b4995e411d55bcae6f0ead7750a0c81a06483a230c16"} Nov 29 07:21:54 crc kubenswrapper[4731]: I1129 07:21:54.492243 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-7d6594489c-kzcpd" event={"ID":"ba39c5c8-559c-4ebb-a4bd-6dc55af61842","Type":"ContainerStarted","Data":"9dc3a0251d11a533ec5697c715364b400f947a7c9912438f0fc92aa3582981ea"} Nov 29 07:21:54 crc kubenswrapper[4731]: I1129 07:21:54.493222 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-7d6594489c-kzcpd" Nov 29 07:21:54 crc kubenswrapper[4731]: I1129 07:21:54.535652 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-7d6594489c-kzcpd" podStartSLOduration=1.468667005 podStartE2EDuration="5.535625209s" podCreationTimestamp="2025-11-29 07:21:49 +0000 UTC" firstStartedPulling="2025-11-29 
07:21:50.218354455 +0000 UTC m=+949.108715568" lastFinishedPulling="2025-11-29 07:21:54.285312649 +0000 UTC m=+953.175673772" observedRunningTime="2025-11-29 07:21:54.531486368 +0000 UTC m=+953.421847501" watchObservedRunningTime="2025-11-29 07:21:54.535625209 +0000 UTC m=+953.425986312" Nov 29 07:21:59 crc kubenswrapper[4731]: I1129 07:21:59.744833 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-7d6594489c-kzcpd" Nov 29 07:22:03 crc kubenswrapper[4731]: I1129 07:22:03.003077 4731 patch_prober.go:28] interesting pod/machine-config-daemon-rscr8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:22:03 crc kubenswrapper[4731]: I1129 07:22:03.003630 4731 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:22:03 crc kubenswrapper[4731]: I1129 07:22:03.003679 4731 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" Nov 29 07:22:03 crc kubenswrapper[4731]: I1129 07:22:03.004398 4731 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f623b0b449aeef3aba408365a10d9b3a882a155e1db4e4fae2a31dd92abc20ca"} pod="openshift-machine-config-operator/machine-config-daemon-rscr8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 29 07:22:03 crc kubenswrapper[4731]: I1129 07:22:03.004459 4731 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" containerName="machine-config-daemon" containerID="cri-o://f623b0b449aeef3aba408365a10d9b3a882a155e1db4e4fae2a31dd92abc20ca" gracePeriod=600 Nov 29 07:22:03 crc kubenswrapper[4731]: I1129 07:22:03.569528 4731 generic.go:334] "Generic (PLEG): container finished" podID="2302dbb7-38db-4752-a5d0-2d055da3aec3" containerID="f623b0b449aeef3aba408365a10d9b3a882a155e1db4e4fae2a31dd92abc20ca" exitCode=0 Nov 29 07:22:03 crc kubenswrapper[4731]: I1129 07:22:03.569629 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" event={"ID":"2302dbb7-38db-4752-a5d0-2d055da3aec3","Type":"ContainerDied","Data":"f623b0b449aeef3aba408365a10d9b3a882a155e1db4e4fae2a31dd92abc20ca"} Nov 29 07:22:03 crc kubenswrapper[4731]: I1129 07:22:03.570396 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" event={"ID":"2302dbb7-38db-4752-a5d0-2d055da3aec3","Type":"ContainerStarted","Data":"ffbb4b4de78b7f58bb4f619008eb50ea899385afddcd0542f0d2036acafe5584"} Nov 29 07:22:03 crc kubenswrapper[4731]: I1129 07:22:03.570461 4731 scope.go:117] "RemoveContainer" containerID="e832e039d354d93ddba7480e0f594057afe8bf56de6979a0d3b6a9d2c9d3121e" Nov 29 07:22:11 crc kubenswrapper[4731]: I1129 07:22:11.187276 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-gs57p"] Nov 29 07:22:11 crc kubenswrapper[4731]: I1129 07:22:11.189351 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gs57p" Nov 29 07:22:11 crc kubenswrapper[4731]: I1129 07:22:11.212366 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gs57p"] Nov 29 07:22:11 crc kubenswrapper[4731]: I1129 07:22:11.233481 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f2bde7d-7615-4f9d-ac5b-e65415c0d078-utilities\") pod \"certified-operators-gs57p\" (UID: \"7f2bde7d-7615-4f9d-ac5b-e65415c0d078\") " pod="openshift-marketplace/certified-operators-gs57p" Nov 29 07:22:11 crc kubenswrapper[4731]: I1129 07:22:11.233541 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f2bde7d-7615-4f9d-ac5b-e65415c0d078-catalog-content\") pod \"certified-operators-gs57p\" (UID: \"7f2bde7d-7615-4f9d-ac5b-e65415c0d078\") " pod="openshift-marketplace/certified-operators-gs57p" Nov 29 07:22:11 crc kubenswrapper[4731]: I1129 07:22:11.233683 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsq7p\" (UniqueName: \"kubernetes.io/projected/7f2bde7d-7615-4f9d-ac5b-e65415c0d078-kube-api-access-gsq7p\") pod \"certified-operators-gs57p\" (UID: \"7f2bde7d-7615-4f9d-ac5b-e65415c0d078\") " pod="openshift-marketplace/certified-operators-gs57p" Nov 29 07:22:11 crc kubenswrapper[4731]: I1129 07:22:11.335120 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gsq7p\" (UniqueName: \"kubernetes.io/projected/7f2bde7d-7615-4f9d-ac5b-e65415c0d078-kube-api-access-gsq7p\") pod \"certified-operators-gs57p\" (UID: \"7f2bde7d-7615-4f9d-ac5b-e65415c0d078\") " pod="openshift-marketplace/certified-operators-gs57p" Nov 29 07:22:11 crc kubenswrapper[4731]: I1129 07:22:11.335250 4731 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f2bde7d-7615-4f9d-ac5b-e65415c0d078-utilities\") pod \"certified-operators-gs57p\" (UID: \"7f2bde7d-7615-4f9d-ac5b-e65415c0d078\") " pod="openshift-marketplace/certified-operators-gs57p" Nov 29 07:22:11 crc kubenswrapper[4731]: I1129 07:22:11.335286 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f2bde7d-7615-4f9d-ac5b-e65415c0d078-catalog-content\") pod \"certified-operators-gs57p\" (UID: \"7f2bde7d-7615-4f9d-ac5b-e65415c0d078\") " pod="openshift-marketplace/certified-operators-gs57p" Nov 29 07:22:11 crc kubenswrapper[4731]: I1129 07:22:11.335905 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f2bde7d-7615-4f9d-ac5b-e65415c0d078-catalog-content\") pod \"certified-operators-gs57p\" (UID: \"7f2bde7d-7615-4f9d-ac5b-e65415c0d078\") " pod="openshift-marketplace/certified-operators-gs57p" Nov 29 07:22:11 crc kubenswrapper[4731]: I1129 07:22:11.336488 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f2bde7d-7615-4f9d-ac5b-e65415c0d078-utilities\") pod \"certified-operators-gs57p\" (UID: \"7f2bde7d-7615-4f9d-ac5b-e65415c0d078\") " pod="openshift-marketplace/certified-operators-gs57p" Nov 29 07:22:11 crc kubenswrapper[4731]: I1129 07:22:11.356757 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsq7p\" (UniqueName: \"kubernetes.io/projected/7f2bde7d-7615-4f9d-ac5b-e65415c0d078-kube-api-access-gsq7p\") pod \"certified-operators-gs57p\" (UID: \"7f2bde7d-7615-4f9d-ac5b-e65415c0d078\") " pod="openshift-marketplace/certified-operators-gs57p" Nov 29 07:22:11 crc kubenswrapper[4731]: I1129 07:22:11.505930 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gs57p" Nov 29 07:22:11 crc kubenswrapper[4731]: I1129 07:22:11.947039 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gs57p"] Nov 29 07:22:12 crc kubenswrapper[4731]: I1129 07:22:12.646525 4731 generic.go:334] "Generic (PLEG): container finished" podID="7f2bde7d-7615-4f9d-ac5b-e65415c0d078" containerID="1b55a4ee92926165c6e5a05ef9a19d16cb096400df440f084686abdca21eccbe" exitCode=0 Nov 29 07:22:12 crc kubenswrapper[4731]: I1129 07:22:12.646636 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gs57p" event={"ID":"7f2bde7d-7615-4f9d-ac5b-e65415c0d078","Type":"ContainerDied","Data":"1b55a4ee92926165c6e5a05ef9a19d16cb096400df440f084686abdca21eccbe"} Nov 29 07:22:12 crc kubenswrapper[4731]: I1129 07:22:12.646853 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gs57p" event={"ID":"7f2bde7d-7615-4f9d-ac5b-e65415c0d078","Type":"ContainerStarted","Data":"1b8d85b83ce2b09933fdd88a35621db7017ee6abd4e9d0c320801c04d2f8e55a"} Nov 29 07:22:13 crc kubenswrapper[4731]: I1129 07:22:13.656477 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gs57p" event={"ID":"7f2bde7d-7615-4f9d-ac5b-e65415c0d078","Type":"ContainerStarted","Data":"dd33a018284eda34a7e2f81f599c132e462f101b6b8240e95ad85b7308bf5284"} Nov 29 07:22:14 crc kubenswrapper[4731]: I1129 07:22:14.666656 4731 generic.go:334] "Generic (PLEG): container finished" podID="7f2bde7d-7615-4f9d-ac5b-e65415c0d078" containerID="dd33a018284eda34a7e2f81f599c132e462f101b6b8240e95ad85b7308bf5284" exitCode=0 Nov 29 07:22:14 crc kubenswrapper[4731]: I1129 07:22:14.666760 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gs57p" 
event={"ID":"7f2bde7d-7615-4f9d-ac5b-e65415c0d078","Type":"ContainerDied","Data":"dd33a018284eda34a7e2f81f599c132e462f101b6b8240e95ad85b7308bf5284"} Nov 29 07:22:15 crc kubenswrapper[4731]: I1129 07:22:15.688918 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gs57p" event={"ID":"7f2bde7d-7615-4f9d-ac5b-e65415c0d078","Type":"ContainerStarted","Data":"d38a796f10462cd5a9dff453131bec63dca7aaee814e77dea74627a72a04825f"} Nov 29 07:22:15 crc kubenswrapper[4731]: I1129 07:22:15.712143 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-gs57p" podStartSLOduration=2.12820383 podStartE2EDuration="4.712121273s" podCreationTimestamp="2025-11-29 07:22:11 +0000 UTC" firstStartedPulling="2025-11-29 07:22:12.648135859 +0000 UTC m=+971.538496962" lastFinishedPulling="2025-11-29 07:22:15.232053302 +0000 UTC m=+974.122414405" observedRunningTime="2025-11-29 07:22:15.710513626 +0000 UTC m=+974.600874729" watchObservedRunningTime="2025-11-29 07:22:15.712121273 +0000 UTC m=+974.602482376" Nov 29 07:22:19 crc kubenswrapper[4731]: I1129 07:22:19.195795 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-cdjtg"] Nov 29 07:22:19 crc kubenswrapper[4731]: I1129 07:22:19.197284 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-cdjtg" Nov 29 07:22:19 crc kubenswrapper[4731]: I1129 07:22:19.199960 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-78kjl" Nov 29 07:22:19 crc kubenswrapper[4731]: I1129 07:22:19.210371 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-859b6ccc6-97dng"] Nov 29 07:22:19 crc kubenswrapper[4731]: I1129 07:22:19.211946 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-97dng" Nov 29 07:22:19 crc kubenswrapper[4731]: I1129 07:22:19.214313 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-cqh72" Nov 29 07:22:19 crc kubenswrapper[4731]: I1129 07:22:19.225984 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-cdjtg"] Nov 29 07:22:19 crc kubenswrapper[4731]: I1129 07:22:19.235174 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-859b6ccc6-97dng"] Nov 29 07:22:19 crc kubenswrapper[4731]: I1129 07:22:19.263859 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-668d9c48b9-mc6kc"] Nov 29 07:22:19 crc kubenswrapper[4731]: I1129 07:22:19.267987 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-mc6kc" Nov 29 07:22:19 crc kubenswrapper[4731]: I1129 07:22:19.274109 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-nxtl8" Nov 29 07:22:19 crc kubenswrapper[4731]: I1129 07:22:19.277662 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-78b4bc895b-vftx4"] Nov 29 07:22:19 crc kubenswrapper[4731]: I1129 07:22:19.279101 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-vftx4" Nov 29 07:22:19 crc kubenswrapper[4731]: I1129 07:22:19.282910 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-49gm6" Nov 29 07:22:19 crc kubenswrapper[4731]: I1129 07:22:19.285002 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-668d9c48b9-mc6kc"] Nov 29 07:22:19 crc kubenswrapper[4731]: I1129 07:22:19.334855 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-78b4bc895b-vftx4"] Nov 29 07:22:19 crc kubenswrapper[4731]: I1129 07:22:19.360731 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2vm6\" (UniqueName: \"kubernetes.io/projected/3d80a1f9-6d6a-41e1-acee-640ffc57a440-kube-api-access-j2vm6\") pod \"glance-operator-controller-manager-668d9c48b9-mc6kc\" (UID: \"3d80a1f9-6d6a-41e1-acee-640ffc57a440\") " pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-mc6kc" Nov 29 07:22:19 crc kubenswrapper[4731]: I1129 07:22:19.360817 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7r92\" 
(UniqueName: \"kubernetes.io/projected/9292e72f-2b6c-4a88-9a75-e8f55cda383a-kube-api-access-h7r92\") pod \"cinder-operator-controller-manager-859b6ccc6-97dng\" (UID: \"9292e72f-2b6c-4a88-9a75-e8f55cda383a\") " pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-97dng" Nov 29 07:22:19 crc kubenswrapper[4731]: I1129 07:22:19.360888 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdxww\" (UniqueName: \"kubernetes.io/projected/9e9c951b-fd4b-408d-a01c-0288201c0227-kube-api-access-fdxww\") pod \"barbican-operator-controller-manager-7d9dfd778-cdjtg\" (UID: \"9e9c951b-fd4b-408d-a01c-0288201c0227\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-cdjtg" Nov 29 07:22:19 crc kubenswrapper[4731]: I1129 07:22:19.360906 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsp6c\" (UniqueName: \"kubernetes.io/projected/9fef0c9a-6dd7-4034-99c0-68409ad7d697-kube-api-access-lsp6c\") pod \"designate-operator-controller-manager-78b4bc895b-vftx4\" (UID: \"9fef0c9a-6dd7-4034-99c0-68409ad7d697\") " pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-vftx4" Nov 29 07:22:19 crc kubenswrapper[4731]: I1129 07:22:19.372260 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-49fl6"] Nov 29 07:22:19 crc kubenswrapper[4731]: I1129 07:22:19.373724 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-49fl6" Nov 29 07:22:19 crc kubenswrapper[4731]: I1129 07:22:19.378619 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-k7j4f" Nov 29 07:22:19 crc kubenswrapper[4731]: I1129 07:22:19.400515 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-kmj72"] Nov 29 07:22:19 crc kubenswrapper[4731]: I1129 07:22:19.402306 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-kmj72" Nov 29 07:22:19 crc kubenswrapper[4731]: I1129 07:22:19.405606 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-qkzbg" Nov 29 07:22:19 crc kubenswrapper[4731]: I1129 07:22:19.414091 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-49fl6"] Nov 29 07:22:19 crc kubenswrapper[4731]: I1129 07:22:19.429246 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-xs92w"] Nov 29 07:22:19 crc kubenswrapper[4731]: I1129 07:22:19.430548 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-57548d458d-xs92w" Nov 29 07:22:19 crc kubenswrapper[4731]: I1129 07:22:19.434844 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-49c7r" Nov 29 07:22:19 crc kubenswrapper[4731]: I1129 07:22:19.437118 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Nov 29 07:22:19 crc kubenswrapper[4731]: I1129 07:22:19.440974 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-kmj72"] Nov 29 07:22:19 crc kubenswrapper[4731]: I1129 07:22:19.463175 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdxww\" (UniqueName: \"kubernetes.io/projected/9e9c951b-fd4b-408d-a01c-0288201c0227-kube-api-access-fdxww\") pod \"barbican-operator-controller-manager-7d9dfd778-cdjtg\" (UID: \"9e9c951b-fd4b-408d-a01c-0288201c0227\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-cdjtg" Nov 29 07:22:19 crc kubenswrapper[4731]: I1129 07:22:19.463250 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lsp6c\" (UniqueName: \"kubernetes.io/projected/9fef0c9a-6dd7-4034-99c0-68409ad7d697-kube-api-access-lsp6c\") pod \"designate-operator-controller-manager-78b4bc895b-vftx4\" (UID: \"9fef0c9a-6dd7-4034-99c0-68409ad7d697\") " pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-vftx4" Nov 29 07:22:19 crc kubenswrapper[4731]: I1129 07:22:19.463292 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2vm6\" (UniqueName: \"kubernetes.io/projected/3d80a1f9-6d6a-41e1-acee-640ffc57a440-kube-api-access-j2vm6\") pod \"glance-operator-controller-manager-668d9c48b9-mc6kc\" (UID: \"3d80a1f9-6d6a-41e1-acee-640ffc57a440\") " 
pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-mc6kc" Nov 29 07:22:19 crc kubenswrapper[4731]: I1129 07:22:19.463367 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7r92\" (UniqueName: \"kubernetes.io/projected/9292e72f-2b6c-4a88-9a75-e8f55cda383a-kube-api-access-h7r92\") pod \"cinder-operator-controller-manager-859b6ccc6-97dng\" (UID: \"9292e72f-2b6c-4a88-9a75-e8f55cda383a\") " pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-97dng" Nov 29 07:22:19 crc kubenswrapper[4731]: I1129 07:22:19.471166 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-xs92w"] Nov 29 07:22:19 crc kubenswrapper[4731]: I1129 07:22:19.484731 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6c548fd776-dlncb"] Nov 29 07:22:19 crc kubenswrapper[4731]: I1129 07:22:19.486183 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-dlncb" Nov 29 07:22:19 crc kubenswrapper[4731]: I1129 07:22:19.500312 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6c548fd776-dlncb"] Nov 29 07:22:19 crc kubenswrapper[4731]: I1129 07:22:19.500496 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lsp6c\" (UniqueName: \"kubernetes.io/projected/9fef0c9a-6dd7-4034-99c0-68409ad7d697-kube-api-access-lsp6c\") pod \"designate-operator-controller-manager-78b4bc895b-vftx4\" (UID: \"9fef0c9a-6dd7-4034-99c0-68409ad7d697\") " pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-vftx4" Nov 29 07:22:19 crc kubenswrapper[4731]: I1129 07:22:19.501479 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2vm6\" (UniqueName: \"kubernetes.io/projected/3d80a1f9-6d6a-41e1-acee-640ffc57a440-kube-api-access-j2vm6\") pod \"glance-operator-controller-manager-668d9c48b9-mc6kc\" (UID: \"3d80a1f9-6d6a-41e1-acee-640ffc57a440\") " pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-mc6kc" Nov 29 07:22:19 crc kubenswrapper[4731]: I1129 07:22:19.505402 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-vfjrq" Nov 29 07:22:19 crc kubenswrapper[4731]: I1129 07:22:19.508620 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdxww\" (UniqueName: \"kubernetes.io/projected/9e9c951b-fd4b-408d-a01c-0288201c0227-kube-api-access-fdxww\") pod \"barbican-operator-controller-manager-7d9dfd778-cdjtg\" (UID: \"9e9c951b-fd4b-408d-a01c-0288201c0227\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-cdjtg" Nov 29 07:22:19 crc kubenswrapper[4731]: I1129 07:22:19.511379 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-h7r92\" (UniqueName: \"kubernetes.io/projected/9292e72f-2b6c-4a88-9a75-e8f55cda383a-kube-api-access-h7r92\") pod \"cinder-operator-controller-manager-859b6ccc6-97dng\" (UID: \"9292e72f-2b6c-4a88-9a75-e8f55cda383a\") " pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-97dng" Nov 29 07:22:19 crc kubenswrapper[4731]: I1129 07:22:19.511457 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-546d4bdf48-k89xw"] Nov 29 07:22:19 crc kubenswrapper[4731]: I1129 07:22:19.512906 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-k89xw" Nov 29 07:22:19 crc kubenswrapper[4731]: I1129 07:22:19.515593 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-d49qk" Nov 29 07:22:19 crc kubenswrapper[4731]: I1129 07:22:19.530299 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-cdjtg" Nov 29 07:22:19 crc kubenswrapper[4731]: I1129 07:22:19.531263 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-546d4bdf48-k89xw"] Nov 29 07:22:19 crc kubenswrapper[4731]: I1129 07:22:19.549117 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-6546668bfd-gc2xd"] Nov 29 07:22:19 crc kubenswrapper[4731]: I1129 07:22:19.550392 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-6546668bfd-gc2xd" Nov 29 07:22:19 crc kubenswrapper[4731]: I1129 07:22:19.552202 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-97dng" Nov 29 07:22:19 crc kubenswrapper[4731]: I1129 07:22:19.565813 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whfbs\" (UniqueName: \"kubernetes.io/projected/a6c4aff1-120b-4136-851e-469ebfc6a9ea-kube-api-access-whfbs\") pod \"horizon-operator-controller-manager-68c6d99b8f-kmj72\" (UID: \"a6c4aff1-120b-4136-851e-469ebfc6a9ea\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-kmj72" Nov 29 07:22:19 crc kubenswrapper[4731]: I1129 07:22:19.565917 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99fms\" (UniqueName: \"kubernetes.io/projected/77a02080-6b69-441d-a6a3-ac95c4c697fe-kube-api-access-99fms\") pod \"infra-operator-controller-manager-57548d458d-xs92w\" (UID: \"77a02080-6b69-441d-a6a3-ac95c4c697fe\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-xs92w" Nov 29 07:22:19 crc kubenswrapper[4731]: I1129 07:22:19.565979 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxhf5\" (UniqueName: \"kubernetes.io/projected/a9cc0c44-f184-47aa-9f26-78375628a187-kube-api-access-qxhf5\") pod \"heat-operator-controller-manager-5f64f6f8bb-49fl6\" (UID: \"a9cc0c44-f184-47aa-9f26-78375628a187\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-49fl6" Nov 29 07:22:19 crc kubenswrapper[4731]: I1129 07:22:19.566022 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/77a02080-6b69-441d-a6a3-ac95c4c697fe-cert\") pod \"infra-operator-controller-manager-57548d458d-xs92w\" (UID: \"77a02080-6b69-441d-a6a3-ac95c4c697fe\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-xs92w" Nov 29 07:22:19 crc 
kubenswrapper[4731]: I1129 07:22:19.566652 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-mqthn" Nov 29 07:22:19 crc kubenswrapper[4731]: I1129 07:22:19.573615 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-6546668bfd-gc2xd"] Nov 29 07:22:19 crc kubenswrapper[4731]: I1129 07:22:19.599910 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-mc6kc" Nov 29 07:22:19 crc kubenswrapper[4731]: I1129 07:22:19.631521 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-hdhqj"] Nov 29 07:22:19 crc kubenswrapper[4731]: I1129 07:22:19.632818 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-hdhqj" Nov 29 07:22:19 crc kubenswrapper[4731]: I1129 07:22:19.645997 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-vftx4" Nov 29 07:22:19 crc kubenswrapper[4731]: I1129 07:22:19.646372 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-ktlqp" Nov 29 07:22:19 crc kubenswrapper[4731]: I1129 07:22:19.654738 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-hdhqj"] Nov 29 07:22:19 crc kubenswrapper[4731]: I1129 07:22:19.672665 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-kc7tq"] Nov 29 07:22:19 crc kubenswrapper[4731]: I1129 07:22:19.673840 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7prp6\" (UniqueName: \"kubernetes.io/projected/a35b9e52-221d-4c25-82d9-46fdd8d6e5ea-kube-api-access-7prp6\") pod \"ironic-operator-controller-manager-6c548fd776-dlncb\" (UID: \"a35b9e52-221d-4c25-82d9-46fdd8d6e5ea\") " pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-dlncb" Nov 29 07:22:19 crc kubenswrapper[4731]: I1129 07:22:19.673893 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxhf5\" (UniqueName: \"kubernetes.io/projected/a9cc0c44-f184-47aa-9f26-78375628a187-kube-api-access-qxhf5\") pod \"heat-operator-controller-manager-5f64f6f8bb-49fl6\" (UID: \"a9cc0c44-f184-47aa-9f26-78375628a187\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-49fl6" Nov 29 07:22:19 crc kubenswrapper[4731]: I1129 07:22:19.673922 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvc2b\" (UniqueName: \"kubernetes.io/projected/7664fa66-a9e5-4617-88c4-d4bdeb5f2ea9-kube-api-access-pvc2b\") pod \"mariadb-operator-controller-manager-56bbcc9d85-hdhqj\" (UID: 
\"7664fa66-a9e5-4617-88c4-d4bdeb5f2ea9\") " pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-hdhqj" Nov 29 07:22:19 crc kubenswrapper[4731]: I1129 07:22:19.673950 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/77a02080-6b69-441d-a6a3-ac95c4c697fe-cert\") pod \"infra-operator-controller-manager-57548d458d-xs92w\" (UID: \"77a02080-6b69-441d-a6a3-ac95c4c697fe\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-xs92w" Nov 29 07:22:19 crc kubenswrapper[4731]: I1129 07:22:19.673980 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7q4f\" (UniqueName: \"kubernetes.io/projected/eff57485-877d-4e3e-95a2-ffc9c5ac4f0b-kube-api-access-g7q4f\") pod \"keystone-operator-controller-manager-546d4bdf48-k89xw\" (UID: \"eff57485-877d-4e3e-95a2-ffc9c5ac4f0b\") " pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-k89xw" Nov 29 07:22:19 crc kubenswrapper[4731]: I1129 07:22:19.674012 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whfbs\" (UniqueName: \"kubernetes.io/projected/a6c4aff1-120b-4136-851e-469ebfc6a9ea-kube-api-access-whfbs\") pod \"horizon-operator-controller-manager-68c6d99b8f-kmj72\" (UID: \"a6c4aff1-120b-4136-851e-469ebfc6a9ea\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-kmj72" Nov 29 07:22:19 crc kubenswrapper[4731]: I1129 07:22:19.674038 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cp5f\" (UniqueName: \"kubernetes.io/projected/8b165a75-263b-42e0-9521-85bf1a15dcbf-kube-api-access-2cp5f\") pod \"manila-operator-controller-manager-6546668bfd-gc2xd\" (UID: \"8b165a75-263b-42e0-9521-85bf1a15dcbf\") " pod="openstack-operators/manila-operator-controller-manager-6546668bfd-gc2xd" Nov 29 07:22:19 crc 
kubenswrapper[4731]: I1129 07:22:19.674081 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99fms\" (UniqueName: \"kubernetes.io/projected/77a02080-6b69-441d-a6a3-ac95c4c697fe-kube-api-access-99fms\") pod \"infra-operator-controller-manager-57548d458d-xs92w\" (UID: \"77a02080-6b69-441d-a6a3-ac95c4c697fe\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-xs92w" Nov 29 07:22:19 crc kubenswrapper[4731]: E1129 07:22:19.674225 4731 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Nov 29 07:22:19 crc kubenswrapper[4731]: I1129 07:22:19.674248 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-kc7tq" Nov 29 07:22:19 crc kubenswrapper[4731]: E1129 07:22:19.674289 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/77a02080-6b69-441d-a6a3-ac95c4c697fe-cert podName:77a02080-6b69-441d-a6a3-ac95c4c697fe nodeName:}" failed. No retries permitted until 2025-11-29 07:22:20.174268943 +0000 UTC m=+979.064630046 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/77a02080-6b69-441d-a6a3-ac95c4c697fe-cert") pod "infra-operator-controller-manager-57548d458d-xs92w" (UID: "77a02080-6b69-441d-a6a3-ac95c4c697fe") : secret "infra-operator-webhook-server-cert" not found Nov 29 07:22:19 crc kubenswrapper[4731]: I1129 07:22:19.707467 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-rq9v5" Nov 29 07:22:19 crc kubenswrapper[4731]: I1129 07:22:19.715309 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-kc7tq"] Nov 29 07:22:19 crc kubenswrapper[4731]: I1129 07:22:19.717721 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-xkgrl"] Nov 29 07:22:19 crc kubenswrapper[4731]: I1129 07:22:19.719361 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-xkgrl" Nov 29 07:22:19 crc kubenswrapper[4731]: I1129 07:22:19.732742 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-vbph5" Nov 29 07:22:19 crc kubenswrapper[4731]: I1129 07:22:19.737869 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxhf5\" (UniqueName: \"kubernetes.io/projected/a9cc0c44-f184-47aa-9f26-78375628a187-kube-api-access-qxhf5\") pod \"heat-operator-controller-manager-5f64f6f8bb-49fl6\" (UID: \"a9cc0c44-f184-47aa-9f26-78375628a187\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-49fl6" Nov 29 07:22:19 crc kubenswrapper[4731]: I1129 07:22:19.744848 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-xkgrl"] Nov 29 07:22:19 crc kubenswrapper[4731]: I1129 
07:22:19.757958 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-6mxrn"] Nov 29 07:22:19 crc kubenswrapper[4731]: I1129 07:22:19.759781 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-998648c74-6mxrn" Nov 29 07:22:19 crc kubenswrapper[4731]: I1129 07:22:19.780071 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7prp6\" (UniqueName: \"kubernetes.io/projected/a35b9e52-221d-4c25-82d9-46fdd8d6e5ea-kube-api-access-7prp6\") pod \"ironic-operator-controller-manager-6c548fd776-dlncb\" (UID: \"a35b9e52-221d-4c25-82d9-46fdd8d6e5ea\") " pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-dlncb" Nov 29 07:22:19 crc kubenswrapper[4731]: I1129 07:22:19.780138 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvc2b\" (UniqueName: \"kubernetes.io/projected/7664fa66-a9e5-4617-88c4-d4bdeb5f2ea9-kube-api-access-pvc2b\") pod \"mariadb-operator-controller-manager-56bbcc9d85-hdhqj\" (UID: \"7664fa66-a9e5-4617-88c4-d4bdeb5f2ea9\") " pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-hdhqj" Nov 29 07:22:19 crc kubenswrapper[4731]: I1129 07:22:19.780175 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-9svr9" Nov 29 07:22:19 crc kubenswrapper[4731]: I1129 07:22:19.780203 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7q4f\" (UniqueName: \"kubernetes.io/projected/eff57485-877d-4e3e-95a2-ffc9c5ac4f0b-kube-api-access-g7q4f\") pod \"keystone-operator-controller-manager-546d4bdf48-k89xw\" (UID: \"eff57485-877d-4e3e-95a2-ffc9c5ac4f0b\") " pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-k89xw" Nov 29 07:22:19 crc kubenswrapper[4731]: I1129 
07:22:19.780240 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cp5f\" (UniqueName: \"kubernetes.io/projected/8b165a75-263b-42e0-9521-85bf1a15dcbf-kube-api-access-2cp5f\") pod \"manila-operator-controller-manager-6546668bfd-gc2xd\" (UID: \"8b165a75-263b-42e0-9521-85bf1a15dcbf\") " pod="openstack-operators/manila-operator-controller-manager-6546668bfd-gc2xd" Nov 29 07:22:19 crc kubenswrapper[4731]: I1129 07:22:19.789239 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-6mxrn"] Nov 29 07:22:19 crc kubenswrapper[4731]: I1129 07:22:19.797797 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd48hnnl"] Nov 29 07:22:19 crc kubenswrapper[4731]: I1129 07:22:19.798806 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99fms\" (UniqueName: \"kubernetes.io/projected/77a02080-6b69-441d-a6a3-ac95c4c697fe-kube-api-access-99fms\") pod \"infra-operator-controller-manager-57548d458d-xs92w\" (UID: \"77a02080-6b69-441d-a6a3-ac95c4c697fe\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-xs92w" Nov 29 07:22:19 crc kubenswrapper[4731]: I1129 07:22:19.800084 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd48hnnl" Nov 29 07:22:19 crc kubenswrapper[4731]: I1129 07:22:19.822790 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-q2p92" Nov 29 07:22:19 crc kubenswrapper[4731]: I1129 07:22:19.822608 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Nov 29 07:22:19 crc kubenswrapper[4731]: I1129 07:22:19.837451 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7q4f\" (UniqueName: \"kubernetes.io/projected/eff57485-877d-4e3e-95a2-ffc9c5ac4f0b-kube-api-access-g7q4f\") pod \"keystone-operator-controller-manager-546d4bdf48-k89xw\" (UID: \"eff57485-877d-4e3e-95a2-ffc9c5ac4f0b\") " pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-k89xw" Nov 29 07:22:19 crc kubenswrapper[4731]: I1129 07:22:19.843341 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd48hnnl"] Nov 29 07:22:19 crc kubenswrapper[4731]: I1129 07:22:19.888054 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvkq8\" (UniqueName: \"kubernetes.io/projected/573641b3-8529-4a47-a0f6-379f2838dc27-kube-api-access-pvkq8\") pod \"nova-operator-controller-manager-697bc559fc-xkgrl\" (UID: \"573641b3-8529-4a47-a0f6-379f2838dc27\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-xkgrl" Nov 29 07:22:19 crc kubenswrapper[4731]: I1129 07:22:19.888234 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z564g\" (UniqueName: \"kubernetes.io/projected/99aed1ca-e7d9-409c-91fa-439e52342da8-kube-api-access-z564g\") pod 
\"octavia-operator-controller-manager-998648c74-6mxrn\" (UID: \"99aed1ca-e7d9-409c-91fa-439e52342da8\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-6mxrn" Nov 29 07:22:19 crc kubenswrapper[4731]: I1129 07:22:19.888281 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pw7p7\" (UniqueName: \"kubernetes.io/projected/8ed7bfa3-ce11-490d-80f4-acd9ca51f698-kube-api-access-pw7p7\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-kc7tq\" (UID: \"8ed7bfa3-ce11-490d-80f4-acd9ca51f698\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-kc7tq" Nov 29 07:22:19 crc kubenswrapper[4731]: I1129 07:22:19.905255 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7prp6\" (UniqueName: \"kubernetes.io/projected/a35b9e52-221d-4c25-82d9-46fdd8d6e5ea-kube-api-access-7prp6\") pod \"ironic-operator-controller-manager-6c548fd776-dlncb\" (UID: \"a35b9e52-221d-4c25-82d9-46fdd8d6e5ea\") " pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-dlncb" Nov 29 07:22:19 crc kubenswrapper[4731]: I1129 07:22:19.843394 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-jmtwc"] Nov 29 07:22:19 crc kubenswrapper[4731]: I1129 07:22:19.917494 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvc2b\" (UniqueName: \"kubernetes.io/projected/7664fa66-a9e5-4617-88c4-d4bdeb5f2ea9-kube-api-access-pvc2b\") pod \"mariadb-operator-controller-manager-56bbcc9d85-hdhqj\" (UID: \"7664fa66-a9e5-4617-88c4-d4bdeb5f2ea9\") " pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-hdhqj" Nov 29 07:22:19 crc kubenswrapper[4731]: I1129 07:22:19.943295 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cp5f\" (UniqueName: 
\"kubernetes.io/projected/8b165a75-263b-42e0-9521-85bf1a15dcbf-kube-api-access-2cp5f\") pod \"manila-operator-controller-manager-6546668bfd-gc2xd\" (UID: \"8b165a75-263b-42e0-9521-85bf1a15dcbf\") " pod="openstack-operators/manila-operator-controller-manager-6546668bfd-gc2xd" Nov 29 07:22:19 crc kubenswrapper[4731]: I1129 07:22:19.988275 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-dlncb" Nov 29 07:22:19 crc kubenswrapper[4731]: I1129 07:22:19.994307 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-jmtwc" Nov 29 07:22:19 crc kubenswrapper[4731]: I1129 07:22:19.995371 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bm2d\" (UniqueName: \"kubernetes.io/projected/530e0034-afa1-42a5-ae59-1f8eeb34aef0-kube-api-access-8bm2d\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd48hnnl\" (UID: \"530e0034-afa1-42a5-ae59-1f8eeb34aef0\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd48hnnl" Nov 29 07:22:19 crc kubenswrapper[4731]: I1129 07:22:19.996009 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/530e0034-afa1-42a5-ae59-1f8eeb34aef0-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd48hnnl\" (UID: \"530e0034-afa1-42a5-ae59-1f8eeb34aef0\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd48hnnl" Nov 29 07:22:19 crc kubenswrapper[4731]: I1129 07:22:19.996265 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvkq8\" (UniqueName: \"kubernetes.io/projected/573641b3-8529-4a47-a0f6-379f2838dc27-kube-api-access-pvkq8\") pod 
\"nova-operator-controller-manager-697bc559fc-xkgrl\" (UID: \"573641b3-8529-4a47-a0f6-379f2838dc27\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-xkgrl" Nov 29 07:22:19 crc kubenswrapper[4731]: I1129 07:22:19.996311 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z564g\" (UniqueName: \"kubernetes.io/projected/99aed1ca-e7d9-409c-91fa-439e52342da8-kube-api-access-z564g\") pod \"octavia-operator-controller-manager-998648c74-6mxrn\" (UID: \"99aed1ca-e7d9-409c-91fa-439e52342da8\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-6mxrn" Nov 29 07:22:19 crc kubenswrapper[4731]: I1129 07:22:19.996409 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pw7p7\" (UniqueName: \"kubernetes.io/projected/8ed7bfa3-ce11-490d-80f4-acd9ca51f698-kube-api-access-pw7p7\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-kc7tq\" (UID: \"8ed7bfa3-ce11-490d-80f4-acd9ca51f698\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-kc7tq" Nov 29 07:22:20 crc kubenswrapper[4731]: I1129 07:22:20.020667 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-49fl6" Nov 29 07:22:20 crc kubenswrapper[4731]: I1129 07:22:20.045914 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-k89xw" Nov 29 07:22:20 crc kubenswrapper[4731]: I1129 07:22:20.046556 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-6546668bfd-gc2xd" Nov 29 07:22:20 crc kubenswrapper[4731]: I1129 07:22:20.063583 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-44h6t" Nov 29 07:22:20 crc kubenswrapper[4731]: I1129 07:22:20.066020 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-jmtwc"] Nov 29 07:22:20 crc kubenswrapper[4731]: I1129 07:22:20.078459 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whfbs\" (UniqueName: \"kubernetes.io/projected/a6c4aff1-120b-4136-851e-469ebfc6a9ea-kube-api-access-whfbs\") pod \"horizon-operator-controller-manager-68c6d99b8f-kmj72\" (UID: \"a6c4aff1-120b-4136-851e-469ebfc6a9ea\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-kmj72" Nov 29 07:22:20 crc kubenswrapper[4731]: I1129 07:22:20.104787 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-zhx77"] Nov 29 07:22:20 crc kubenswrapper[4731]: I1129 07:22:20.113659 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pw7p7\" (UniqueName: \"kubernetes.io/projected/8ed7bfa3-ce11-490d-80f4-acd9ca51f698-kube-api-access-pw7p7\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-kc7tq\" (UID: \"8ed7bfa3-ce11-490d-80f4-acd9ca51f698\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-kc7tq" Nov 29 07:22:20 crc kubenswrapper[4731]: I1129 07:22:20.114035 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-hdhqj" Nov 29 07:22:20 crc kubenswrapper[4731]: I1129 07:22:20.115357 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-78f8948974-zhx77" Nov 29 07:22:20 crc kubenswrapper[4731]: I1129 07:22:20.115662 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bm2d\" (UniqueName: \"kubernetes.io/projected/530e0034-afa1-42a5-ae59-1f8eeb34aef0-kube-api-access-8bm2d\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd48hnnl\" (UID: \"530e0034-afa1-42a5-ae59-1f8eeb34aef0\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd48hnnl" Nov 29 07:22:20 crc kubenswrapper[4731]: I1129 07:22:20.115759 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/530e0034-afa1-42a5-ae59-1f8eeb34aef0-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd48hnnl\" (UID: \"530e0034-afa1-42a5-ae59-1f8eeb34aef0\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd48hnnl" Nov 29 07:22:20 crc kubenswrapper[4731]: I1129 07:22:20.115846 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wn24j\" (UniqueName: \"kubernetes.io/projected/a8ecb76b-3826-4e47-920c-e0d9e3c18e38-kube-api-access-wn24j\") pod \"ovn-operator-controller-manager-b6456fdb6-jmtwc\" (UID: \"a8ecb76b-3826-4e47-920c-e0d9e3c18e38\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-jmtwc" Nov 29 07:22:20 crc kubenswrapper[4731]: E1129 07:22:20.118113 4731 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 29 07:22:20 crc kubenswrapper[4731]: E1129 07:22:20.118200 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/530e0034-afa1-42a5-ae59-1f8eeb34aef0-cert podName:530e0034-afa1-42a5-ae59-1f8eeb34aef0 nodeName:}" 
failed. No retries permitted until 2025-11-29 07:22:20.618178888 +0000 UTC m=+979.508539991 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/530e0034-afa1-42a5-ae59-1f8eeb34aef0-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd48hnnl" (UID: "530e0034-afa1-42a5-ae59-1f8eeb34aef0") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 29 07:22:20 crc kubenswrapper[4731]: I1129 07:22:20.128371 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-4gxpc" Nov 29 07:22:20 crc kubenswrapper[4731]: I1129 07:22:20.133791 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvkq8\" (UniqueName: \"kubernetes.io/projected/573641b3-8529-4a47-a0f6-379f2838dc27-kube-api-access-pvkq8\") pod \"nova-operator-controller-manager-697bc559fc-xkgrl\" (UID: \"573641b3-8529-4a47-a0f6-379f2838dc27\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-xkgrl" Nov 29 07:22:20 crc kubenswrapper[4731]: I1129 07:22:20.137771 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-kc7tq" Nov 29 07:22:20 crc kubenswrapper[4731]: I1129 07:22:20.145206 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-zhx77"] Nov 29 07:22:20 crc kubenswrapper[4731]: I1129 07:22:20.147404 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z564g\" (UniqueName: \"kubernetes.io/projected/99aed1ca-e7d9-409c-91fa-439e52342da8-kube-api-access-z564g\") pod \"octavia-operator-controller-manager-998648c74-6mxrn\" (UID: \"99aed1ca-e7d9-409c-91fa-439e52342da8\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-6mxrn" Nov 29 07:22:20 crc kubenswrapper[4731]: I1129 07:22:20.172535 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bm2d\" (UniqueName: \"kubernetes.io/projected/530e0034-afa1-42a5-ae59-1f8eeb34aef0-kube-api-access-8bm2d\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd48hnnl\" (UID: \"530e0034-afa1-42a5-ae59-1f8eeb34aef0\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd48hnnl" Nov 29 07:22:20 crc kubenswrapper[4731]: I1129 07:22:20.184623 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-c98454947-cq6kc"] Nov 29 07:22:20 crc kubenswrapper[4731]: I1129 07:22:20.195454 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-5f8c65bbfc-zvvpp"] Nov 29 07:22:20 crc kubenswrapper[4731]: I1129 07:22:20.196448 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-c98454947-cq6kc"] Nov 29 07:22:20 crc kubenswrapper[4731]: I1129 07:22:20.196594 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-zvvpp" Nov 29 07:22:20 crc kubenswrapper[4731]: I1129 07:22:20.197300 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-c98454947-cq6kc" Nov 29 07:22:20 crc kubenswrapper[4731]: I1129 07:22:20.205021 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-j4qtq" Nov 29 07:22:20 crc kubenswrapper[4731]: I1129 07:22:20.213173 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-5f8c65bbfc-zvvpp"] Nov 29 07:22:20 crc kubenswrapper[4731]: I1129 07:22:20.217254 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-fnkt5"] Nov 29 07:22:20 crc kubenswrapper[4731]: I1129 07:22:20.218081 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v28z8\" (UniqueName: \"kubernetes.io/projected/f7c082ea-a878-4069-9c48-96d4210f909a-kube-api-access-v28z8\") pod \"placement-operator-controller-manager-78f8948974-zhx77\" (UID: \"f7c082ea-a878-4069-9c48-96d4210f909a\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-zhx77" Nov 29 07:22:20 crc kubenswrapper[4731]: I1129 07:22:20.218176 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/77a02080-6b69-441d-a6a3-ac95c4c697fe-cert\") pod \"infra-operator-controller-manager-57548d458d-xs92w\" (UID: \"77a02080-6b69-441d-a6a3-ac95c4c697fe\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-xs92w" Nov 29 07:22:20 crc kubenswrapper[4731]: I1129 07:22:20.218250 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wn24j\" (UniqueName: 
\"kubernetes.io/projected/a8ecb76b-3826-4e47-920c-e0d9e3c18e38-kube-api-access-wn24j\") pod \"ovn-operator-controller-manager-b6456fdb6-jmtwc\" (UID: \"a8ecb76b-3826-4e47-920c-e0d9e3c18e38\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-jmtwc" Nov 29 07:22:20 crc kubenswrapper[4731]: E1129 07:22:20.218750 4731 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Nov 29 07:22:20 crc kubenswrapper[4731]: E1129 07:22:20.218798 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/77a02080-6b69-441d-a6a3-ac95c4c697fe-cert podName:77a02080-6b69-441d-a6a3-ac95c4c697fe nodeName:}" failed. No retries permitted until 2025-11-29 07:22:21.218780496 +0000 UTC m=+980.109141599 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/77a02080-6b69-441d-a6a3-ac95c4c697fe-cert") pod "infra-operator-controller-manager-57548d458d-xs92w" (UID: "77a02080-6b69-441d-a6a3-ac95c4c697fe") : secret "infra-operator-webhook-server-cert" not found Nov 29 07:22:20 crc kubenswrapper[4731]: I1129 07:22:20.218853 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5854674fcc-fnkt5" Nov 29 07:22:20 crc kubenswrapper[4731]: I1129 07:22:20.227874 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-sbjpn" Nov 29 07:22:20 crc kubenswrapper[4731]: I1129 07:22:20.232438 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-769dc69bc-tbxj8"] Nov 29 07:22:20 crc kubenswrapper[4731]: I1129 07:22:20.234251 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-tbxj8" Nov 29 07:22:20 crc kubenswrapper[4731]: I1129 07:22:20.239118 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-vmqs7" Nov 29 07:22:20 crc kubenswrapper[4731]: I1129 07:22:20.240894 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-c7gv2" Nov 29 07:22:20 crc kubenswrapper[4731]: I1129 07:22:20.255926 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-769dc69bc-tbxj8"] Nov 29 07:22:20 crc kubenswrapper[4731]: I1129 07:22:20.267768 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-fnkt5"] Nov 29 07:22:20 crc kubenswrapper[4731]: I1129 07:22:20.290672 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-xkgrl" Nov 29 07:22:20 crc kubenswrapper[4731]: I1129 07:22:20.292527 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-998648c74-6mxrn" Nov 29 07:22:20 crc kubenswrapper[4731]: I1129 07:22:20.295540 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wn24j\" (UniqueName: \"kubernetes.io/projected/a8ecb76b-3826-4e47-920c-e0d9e3c18e38-kube-api-access-wn24j\") pod \"ovn-operator-controller-manager-b6456fdb6-jmtwc\" (UID: \"a8ecb76b-3826-4e47-920c-e0d9e3c18e38\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-jmtwc" Nov 29 07:22:20 crc kubenswrapper[4731]: I1129 07:22:20.321111 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztq59\" (UniqueName: \"kubernetes.io/projected/9c9a7893-7770-49ae-8a0f-44168941a55b-kube-api-access-ztq59\") pod \"swift-operator-controller-manager-5f8c65bbfc-zvvpp\" (UID: \"9c9a7893-7770-49ae-8a0f-44168941a55b\") " pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-zvvpp" Nov 29 07:22:20 crc kubenswrapper[4731]: I1129 07:22:20.321403 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bp82q\" (UniqueName: \"kubernetes.io/projected/cf83d1d1-1d33-4905-ae31-038a7afbd230-kube-api-access-bp82q\") pod \"telemetry-operator-controller-manager-c98454947-cq6kc\" (UID: \"cf83d1d1-1d33-4905-ae31-038a7afbd230\") " pod="openstack-operators/telemetry-operator-controller-manager-c98454947-cq6kc" Nov 29 07:22:20 crc kubenswrapper[4731]: I1129 07:22:20.321733 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfmt4\" (UniqueName: \"kubernetes.io/projected/5e2ef3fa-22be-4ac5-9cce-09227be5538b-kube-api-access-gfmt4\") pod \"test-operator-controller-manager-5854674fcc-fnkt5\" (UID: \"5e2ef3fa-22be-4ac5-9cce-09227be5538b\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-fnkt5" Nov 29 07:22:20 crc 
kubenswrapper[4731]: I1129 07:22:20.322324 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nd94k\" (UniqueName: \"kubernetes.io/projected/9d6f5aa5-06c2-4196-a217-68aa690b6e7f-kube-api-access-nd94k\") pod \"watcher-operator-controller-manager-769dc69bc-tbxj8\" (UID: \"9d6f5aa5-06c2-4196-a217-68aa690b6e7f\") " pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-tbxj8" Nov 29 07:22:20 crc kubenswrapper[4731]: I1129 07:22:20.322458 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v28z8\" (UniqueName: \"kubernetes.io/projected/f7c082ea-a878-4069-9c48-96d4210f909a-kube-api-access-v28z8\") pod \"placement-operator-controller-manager-78f8948974-zhx77\" (UID: \"f7c082ea-a878-4069-9c48-96d4210f909a\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-zhx77" Nov 29 07:22:20 crc kubenswrapper[4731]: I1129 07:22:20.341168 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-kmj72" Nov 29 07:22:20 crc kubenswrapper[4731]: I1129 07:22:20.354363 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-76c96f5dc5-hsjk8"] Nov 29 07:22:20 crc kubenswrapper[4731]: I1129 07:22:20.355735 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-76c96f5dc5-hsjk8" Nov 29 07:22:20 crc kubenswrapper[4731]: I1129 07:22:20.369552 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Nov 29 07:22:20 crc kubenswrapper[4731]: I1129 07:22:20.369965 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Nov 29 07:22:20 crc kubenswrapper[4731]: I1129 07:22:20.370433 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-56qrr" Nov 29 07:22:20 crc kubenswrapper[4731]: I1129 07:22:20.372179 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-jmtwc" Nov 29 07:22:20 crc kubenswrapper[4731]: I1129 07:22:20.383126 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v28z8\" (UniqueName: \"kubernetes.io/projected/f7c082ea-a878-4069-9c48-96d4210f909a-kube-api-access-v28z8\") pod \"placement-operator-controller-manager-78f8948974-zhx77\" (UID: \"f7c082ea-a878-4069-9c48-96d4210f909a\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-zhx77" Nov 29 07:22:20 crc kubenswrapper[4731]: I1129 07:22:20.424861 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bp82q\" (UniqueName: \"kubernetes.io/projected/cf83d1d1-1d33-4905-ae31-038a7afbd230-kube-api-access-bp82q\") pod \"telemetry-operator-controller-manager-c98454947-cq6kc\" (UID: \"cf83d1d1-1d33-4905-ae31-038a7afbd230\") " pod="openstack-operators/telemetry-operator-controller-manager-c98454947-cq6kc" Nov 29 07:22:20 crc kubenswrapper[4731]: I1129 07:22:20.425164 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/fbd333a8-95e7-47dc-8b2c-9ea2154d6fb9-webhook-certs\") pod \"openstack-operator-controller-manager-76c96f5dc5-hsjk8\" (UID: \"fbd333a8-95e7-47dc-8b2c-9ea2154d6fb9\") " pod="openstack-operators/openstack-operator-controller-manager-76c96f5dc5-hsjk8" Nov 29 07:22:20 crc kubenswrapper[4731]: I1129 07:22:20.425286 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fbd333a8-95e7-47dc-8b2c-9ea2154d6fb9-metrics-certs\") pod \"openstack-operator-controller-manager-76c96f5dc5-hsjk8\" (UID: \"fbd333a8-95e7-47dc-8b2c-9ea2154d6fb9\") " pod="openstack-operators/openstack-operator-controller-manager-76c96f5dc5-hsjk8" Nov 29 07:22:20 crc kubenswrapper[4731]: I1129 07:22:20.425396 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7t7s\" (UniqueName: \"kubernetes.io/projected/fbd333a8-95e7-47dc-8b2c-9ea2154d6fb9-kube-api-access-g7t7s\") pod \"openstack-operator-controller-manager-76c96f5dc5-hsjk8\" (UID: \"fbd333a8-95e7-47dc-8b2c-9ea2154d6fb9\") " pod="openstack-operators/openstack-operator-controller-manager-76c96f5dc5-hsjk8" Nov 29 07:22:20 crc kubenswrapper[4731]: I1129 07:22:20.425535 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfmt4\" (UniqueName: \"kubernetes.io/projected/5e2ef3fa-22be-4ac5-9cce-09227be5538b-kube-api-access-gfmt4\") pod \"test-operator-controller-manager-5854674fcc-fnkt5\" (UID: \"5e2ef3fa-22be-4ac5-9cce-09227be5538b\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-fnkt5" Nov 29 07:22:20 crc kubenswrapper[4731]: I1129 07:22:20.427342 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nd94k\" (UniqueName: \"kubernetes.io/projected/9d6f5aa5-06c2-4196-a217-68aa690b6e7f-kube-api-access-nd94k\") pod 
\"watcher-operator-controller-manager-769dc69bc-tbxj8\" (UID: \"9d6f5aa5-06c2-4196-a217-68aa690b6e7f\") " pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-tbxj8" Nov 29 07:22:20 crc kubenswrapper[4731]: I1129 07:22:20.427650 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztq59\" (UniqueName: \"kubernetes.io/projected/9c9a7893-7770-49ae-8a0f-44168941a55b-kube-api-access-ztq59\") pod \"swift-operator-controller-manager-5f8c65bbfc-zvvpp\" (UID: \"9c9a7893-7770-49ae-8a0f-44168941a55b\") " pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-zvvpp" Nov 29 07:22:20 crc kubenswrapper[4731]: I1129 07:22:20.453865 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bp82q\" (UniqueName: \"kubernetes.io/projected/cf83d1d1-1d33-4905-ae31-038a7afbd230-kube-api-access-bp82q\") pod \"telemetry-operator-controller-manager-c98454947-cq6kc\" (UID: \"cf83d1d1-1d33-4905-ae31-038a7afbd230\") " pod="openstack-operators/telemetry-operator-controller-manager-c98454947-cq6kc" Nov 29 07:22:20 crc kubenswrapper[4731]: I1129 07:22:20.472976 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-78f8948974-zhx77" Nov 29 07:22:20 crc kubenswrapper[4731]: I1129 07:22:20.475354 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztq59\" (UniqueName: \"kubernetes.io/projected/9c9a7893-7770-49ae-8a0f-44168941a55b-kube-api-access-ztq59\") pod \"swift-operator-controller-manager-5f8c65bbfc-zvvpp\" (UID: \"9c9a7893-7770-49ae-8a0f-44168941a55b\") " pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-zvvpp" Nov 29 07:22:20 crc kubenswrapper[4731]: I1129 07:22:20.475994 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nd94k\" (UniqueName: \"kubernetes.io/projected/9d6f5aa5-06c2-4196-a217-68aa690b6e7f-kube-api-access-nd94k\") pod \"watcher-operator-controller-manager-769dc69bc-tbxj8\" (UID: \"9d6f5aa5-06c2-4196-a217-68aa690b6e7f\") " pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-tbxj8" Nov 29 07:22:20 crc kubenswrapper[4731]: I1129 07:22:20.486930 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfmt4\" (UniqueName: \"kubernetes.io/projected/5e2ef3fa-22be-4ac5-9cce-09227be5538b-kube-api-access-gfmt4\") pod \"test-operator-controller-manager-5854674fcc-fnkt5\" (UID: \"5e2ef3fa-22be-4ac5-9cce-09227be5538b\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-fnkt5" Nov 29 07:22:20 crc kubenswrapper[4731]: I1129 07:22:20.501505 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-76c96f5dc5-hsjk8"] Nov 29 07:22:20 crc kubenswrapper[4731]: I1129 07:22:20.542491 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/fbd333a8-95e7-47dc-8b2c-9ea2154d6fb9-webhook-certs\") pod \"openstack-operator-controller-manager-76c96f5dc5-hsjk8\" (UID: 
\"fbd333a8-95e7-47dc-8b2c-9ea2154d6fb9\") " pod="openstack-operators/openstack-operator-controller-manager-76c96f5dc5-hsjk8" Nov 29 07:22:20 crc kubenswrapper[4731]: I1129 07:22:20.542606 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fbd333a8-95e7-47dc-8b2c-9ea2154d6fb9-metrics-certs\") pod \"openstack-operator-controller-manager-76c96f5dc5-hsjk8\" (UID: \"fbd333a8-95e7-47dc-8b2c-9ea2154d6fb9\") " pod="openstack-operators/openstack-operator-controller-manager-76c96f5dc5-hsjk8" Nov 29 07:22:20 crc kubenswrapper[4731]: I1129 07:22:20.542692 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7t7s\" (UniqueName: \"kubernetes.io/projected/fbd333a8-95e7-47dc-8b2c-9ea2154d6fb9-kube-api-access-g7t7s\") pod \"openstack-operator-controller-manager-76c96f5dc5-hsjk8\" (UID: \"fbd333a8-95e7-47dc-8b2c-9ea2154d6fb9\") " pod="openstack-operators/openstack-operator-controller-manager-76c96f5dc5-hsjk8" Nov 29 07:22:20 crc kubenswrapper[4731]: I1129 07:22:20.544811 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9wkq4"] Nov 29 07:22:20 crc kubenswrapper[4731]: I1129 07:22:20.546919 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9wkq4" Nov 29 07:22:20 crc kubenswrapper[4731]: E1129 07:22:20.547206 4731 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 29 07:22:20 crc kubenswrapper[4731]: E1129 07:22:20.547380 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fbd333a8-95e7-47dc-8b2c-9ea2154d6fb9-metrics-certs podName:fbd333a8-95e7-47dc-8b2c-9ea2154d6fb9 nodeName:}" failed. 
No retries permitted until 2025-11-29 07:22:21.047341043 +0000 UTC m=+979.937702146 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fbd333a8-95e7-47dc-8b2c-9ea2154d6fb9-metrics-certs") pod "openstack-operator-controller-manager-76c96f5dc5-hsjk8" (UID: "fbd333a8-95e7-47dc-8b2c-9ea2154d6fb9") : secret "metrics-server-cert" not found Nov 29 07:22:20 crc kubenswrapper[4731]: E1129 07:22:20.548363 4731 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 29 07:22:20 crc kubenswrapper[4731]: E1129 07:22:20.548497 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fbd333a8-95e7-47dc-8b2c-9ea2154d6fb9-webhook-certs podName:fbd333a8-95e7-47dc-8b2c-9ea2154d6fb9 nodeName:}" failed. No retries permitted until 2025-11-29 07:22:21.048464935 +0000 UTC m=+979.938826228 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/fbd333a8-95e7-47dc-8b2c-9ea2154d6fb9-webhook-certs") pod "openstack-operator-controller-manager-76c96f5dc5-hsjk8" (UID: "fbd333a8-95e7-47dc-8b2c-9ea2154d6fb9") : secret "webhook-server-cert" not found Nov 29 07:22:20 crc kubenswrapper[4731]: I1129 07:22:20.557009 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-gvnzq" Nov 29 07:22:20 crc kubenswrapper[4731]: I1129 07:22:20.572350 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9wkq4"] Nov 29 07:22:20 crc kubenswrapper[4731]: I1129 07:22:20.573059 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-zvvpp" Nov 29 07:22:20 crc kubenswrapper[4731]: I1129 07:22:20.578601 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7t7s\" (UniqueName: \"kubernetes.io/projected/fbd333a8-95e7-47dc-8b2c-9ea2154d6fb9-kube-api-access-g7t7s\") pod \"openstack-operator-controller-manager-76c96f5dc5-hsjk8\" (UID: \"fbd333a8-95e7-47dc-8b2c-9ea2154d6fb9\") " pod="openstack-operators/openstack-operator-controller-manager-76c96f5dc5-hsjk8" Nov 29 07:22:20 crc kubenswrapper[4731]: I1129 07:22:20.605859 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-c98454947-cq6kc" Nov 29 07:22:20 crc kubenswrapper[4731]: I1129 07:22:20.680990 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5854674fcc-fnkt5" Nov 29 07:22:20 crc kubenswrapper[4731]: I1129 07:22:20.711274 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-859b6ccc6-97dng"] Nov 29 07:22:20 crc kubenswrapper[4731]: I1129 07:22:20.738823 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/530e0034-afa1-42a5-ae59-1f8eeb34aef0-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd48hnnl\" (UID: \"530e0034-afa1-42a5-ae59-1f8eeb34aef0\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd48hnnl" Nov 29 07:22:20 crc kubenswrapper[4731]: I1129 07:22:20.739139 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k44fl\" (UniqueName: \"kubernetes.io/projected/c448f643-f2f4-403d-b235-24ac74755cdf-kube-api-access-k44fl\") pod \"rabbitmq-cluster-operator-manager-668c99d594-9wkq4\" (UID: 
\"c448f643-f2f4-403d-b235-24ac74755cdf\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9wkq4" Nov 29 07:22:20 crc kubenswrapper[4731]: I1129 07:22:20.739201 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-tbxj8" Nov 29 07:22:20 crc kubenswrapper[4731]: E1129 07:22:20.740428 4731 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 29 07:22:20 crc kubenswrapper[4731]: E1129 07:22:20.741533 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/530e0034-afa1-42a5-ae59-1f8eeb34aef0-cert podName:530e0034-afa1-42a5-ae59-1f8eeb34aef0 nodeName:}" failed. No retries permitted until 2025-11-29 07:22:21.741497833 +0000 UTC m=+980.631858936 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/530e0034-afa1-42a5-ae59-1f8eeb34aef0-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd48hnnl" (UID: "530e0034-afa1-42a5-ae59-1f8eeb34aef0") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 29 07:22:20 crc kubenswrapper[4731]: I1129 07:22:20.803502 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-cdjtg"] Nov 29 07:22:20 crc kubenswrapper[4731]: I1129 07:22:20.842904 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k44fl\" (UniqueName: \"kubernetes.io/projected/c448f643-f2f4-403d-b235-24ac74755cdf-kube-api-access-k44fl\") pod \"rabbitmq-cluster-operator-manager-668c99d594-9wkq4\" (UID: \"c448f643-f2f4-403d-b235-24ac74755cdf\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9wkq4" Nov 29 07:22:20 crc kubenswrapper[4731]: I1129 07:22:20.880407 4731 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k44fl\" (UniqueName: \"kubernetes.io/projected/c448f643-f2f4-403d-b235-24ac74755cdf-kube-api-access-k44fl\") pod \"rabbitmq-cluster-operator-manager-668c99d594-9wkq4\" (UID: \"c448f643-f2f4-403d-b235-24ac74755cdf\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9wkq4" Nov 29 07:22:20 crc kubenswrapper[4731]: I1129 07:22:20.934400 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-78b4bc895b-vftx4"] Nov 29 07:22:20 crc kubenswrapper[4731]: I1129 07:22:20.941995 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-668d9c48b9-mc6kc"] Nov 29 07:22:21 crc kubenswrapper[4731]: I1129 07:22:21.061202 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-49fl6"] Nov 29 07:22:21 crc kubenswrapper[4731]: I1129 07:22:21.085720 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-6546668bfd-gc2xd"] Nov 29 07:22:21 crc kubenswrapper[4731]: W1129 07:22:21.118927 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8b165a75_263b_42e0_9521_85bf1a15dcbf.slice/crio-a9ff4b0f275844140c7ecd10b89be41c8f6f85bdb929b5c357ac1c500774e918 WatchSource:0}: Error finding container a9ff4b0f275844140c7ecd10b89be41c8f6f85bdb929b5c357ac1c500774e918: Status 404 returned error can't find the container with id a9ff4b0f275844140c7ecd10b89be41c8f6f85bdb929b5c357ac1c500774e918 Nov 29 07:22:21 crc kubenswrapper[4731]: I1129 07:22:21.148614 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/fbd333a8-95e7-47dc-8b2c-9ea2154d6fb9-webhook-certs\") pod 
\"openstack-operator-controller-manager-76c96f5dc5-hsjk8\" (UID: \"fbd333a8-95e7-47dc-8b2c-9ea2154d6fb9\") " pod="openstack-operators/openstack-operator-controller-manager-76c96f5dc5-hsjk8" Nov 29 07:22:21 crc kubenswrapper[4731]: I1129 07:22:21.148855 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fbd333a8-95e7-47dc-8b2c-9ea2154d6fb9-metrics-certs\") pod \"openstack-operator-controller-manager-76c96f5dc5-hsjk8\" (UID: \"fbd333a8-95e7-47dc-8b2c-9ea2154d6fb9\") " pod="openstack-operators/openstack-operator-controller-manager-76c96f5dc5-hsjk8" Nov 29 07:22:21 crc kubenswrapper[4731]: E1129 07:22:21.148795 4731 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 29 07:22:21 crc kubenswrapper[4731]: E1129 07:22:21.149028 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fbd333a8-95e7-47dc-8b2c-9ea2154d6fb9-webhook-certs podName:fbd333a8-95e7-47dc-8b2c-9ea2154d6fb9 nodeName:}" failed. No retries permitted until 2025-11-29 07:22:22.148937943 +0000 UTC m=+981.039299046 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/fbd333a8-95e7-47dc-8b2c-9ea2154d6fb9-webhook-certs") pod "openstack-operator-controller-manager-76c96f5dc5-hsjk8" (UID: "fbd333a8-95e7-47dc-8b2c-9ea2154d6fb9") : secret "webhook-server-cert" not found Nov 29 07:22:21 crc kubenswrapper[4731]: E1129 07:22:21.149539 4731 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 29 07:22:21 crc kubenswrapper[4731]: E1129 07:22:21.149696 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fbd333a8-95e7-47dc-8b2c-9ea2154d6fb9-metrics-certs podName:fbd333a8-95e7-47dc-8b2c-9ea2154d6fb9 nodeName:}" failed. 
No retries permitted until 2025-11-29 07:22:22.149661114 +0000 UTC m=+981.040022398 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fbd333a8-95e7-47dc-8b2c-9ea2154d6fb9-metrics-certs") pod "openstack-operator-controller-manager-76c96f5dc5-hsjk8" (UID: "fbd333a8-95e7-47dc-8b2c-9ea2154d6fb9") : secret "metrics-server-cert" not found Nov 29 07:22:21 crc kubenswrapper[4731]: I1129 07:22:21.173170 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9wkq4" Nov 29 07:22:21 crc kubenswrapper[4731]: I1129 07:22:21.254101 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/77a02080-6b69-441d-a6a3-ac95c4c697fe-cert\") pod \"infra-operator-controller-manager-57548d458d-xs92w\" (UID: \"77a02080-6b69-441d-a6a3-ac95c4c697fe\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-xs92w" Nov 29 07:22:21 crc kubenswrapper[4731]: E1129 07:22:21.254443 4731 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Nov 29 07:22:21 crc kubenswrapper[4731]: E1129 07:22:21.254526 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/77a02080-6b69-441d-a6a3-ac95c4c697fe-cert podName:77a02080-6b69-441d-a6a3-ac95c4c697fe nodeName:}" failed. No retries permitted until 2025-11-29 07:22:23.254499136 +0000 UTC m=+982.144860239 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/77a02080-6b69-441d-a6a3-ac95c4c697fe-cert") pod "infra-operator-controller-manager-57548d458d-xs92w" (UID: "77a02080-6b69-441d-a6a3-ac95c4c697fe") : secret "infra-operator-webhook-server-cert" not found Nov 29 07:22:21 crc kubenswrapper[4731]: I1129 07:22:21.335622 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-kc7tq"] Nov 29 07:22:21 crc kubenswrapper[4731]: I1129 07:22:21.371993 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6c548fd776-dlncb"] Nov 29 07:22:21 crc kubenswrapper[4731]: I1129 07:22:21.394085 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-hdhqj"] Nov 29 07:22:21 crc kubenswrapper[4731]: I1129 07:22:21.413444 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-546d4bdf48-k89xw"] Nov 29 07:22:21 crc kubenswrapper[4731]: I1129 07:22:21.469935 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-jmtwc"] Nov 29 07:22:21 crc kubenswrapper[4731]: I1129 07:22:21.485723 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-6mxrn"] Nov 29 07:22:21 crc kubenswrapper[4731]: I1129 07:22:21.489639 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-kmj72"] Nov 29 07:22:21 crc kubenswrapper[4731]: I1129 07:22:21.513515 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-gs57p" Nov 29 07:22:21 crc kubenswrapper[4731]: I1129 07:22:21.513676 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/certified-operators-gs57p" Nov 29 07:22:21 crc kubenswrapper[4731]: I1129 07:22:21.585321 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-gs57p" Nov 29 07:22:21 crc kubenswrapper[4731]: I1129 07:22:21.623419 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-5f8c65bbfc-zvvpp"] Nov 29 07:22:21 crc kubenswrapper[4731]: I1129 07:22:21.648024 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-zhx77"] Nov 29 07:22:21 crc kubenswrapper[4731]: I1129 07:22:21.678199 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-xkgrl"] Nov 29 07:22:21 crc kubenswrapper[4731]: E1129 07:22:21.679410 4731 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v28z8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-78f8948974-zhx77_openstack-operators(f7c082ea-a878-4069-9c48-96d4210f909a): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 29 07:22:21 crc kubenswrapper[4731]: E1129 07:22:21.680161 4731 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pvkq8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-697bc559fc-xkgrl_openstack-operators(573641b3-8529-4a47-a0f6-379f2838dc27): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 29 07:22:21 crc kubenswrapper[4731]: E1129 07:22:21.683778 4731 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v28z8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-78f8948974-zhx77_openstack-operators(f7c082ea-a878-4069-9c48-96d4210f909a): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 29 07:22:21 crc kubenswrapper[4731]: E1129 07:22:21.684223 4731 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pvkq8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-697bc559fc-xkgrl_openstack-operators(573641b3-8529-4a47-a0f6-379f2838dc27): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 29 07:22:21 crc kubenswrapper[4731]: E1129 07:22:21.684932 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/placement-operator-controller-manager-78f8948974-zhx77" podUID="f7c082ea-a878-4069-9c48-96d4210f909a" Nov 29 07:22:21 crc kubenswrapper[4731]: E1129 07:22:21.686805 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-xkgrl" podUID="573641b3-8529-4a47-a0f6-379f2838dc27" Nov 29 07:22:21 crc kubenswrapper[4731]: I1129 07:22:21.713911 4731 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9wkq4"] Nov 29 07:22:21 crc kubenswrapper[4731]: W1129 07:22:21.722837 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc448f643_f2f4_403d_b235_24ac74755cdf.slice/crio-03e02f2eeeb1c75d7577546944e2d4c34eb28689ae33119dcaa11943696ba1b6 WatchSource:0}: Error finding container 03e02f2eeeb1c75d7577546944e2d4c34eb28689ae33119dcaa11943696ba1b6: Status 404 returned error can't find the container with id 03e02f2eeeb1c75d7577546944e2d4c34eb28689ae33119dcaa11943696ba1b6 Nov 29 07:22:21 crc kubenswrapper[4731]: E1129 07:22:21.729039 4731 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-k44fl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-9wkq4_openstack-operators(c448f643-f2f4-403d-b235-24ac74755cdf): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 29 07:22:21 crc kubenswrapper[4731]: E1129 07:22:21.730797 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9wkq4" podUID="c448f643-f2f4-403d-b235-24ac74755cdf" Nov 29 07:22:21 crc kubenswrapper[4731]: I1129 07:22:21.766709 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/530e0034-afa1-42a5-ae59-1f8eeb34aef0-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd48hnnl\" (UID: \"530e0034-afa1-42a5-ae59-1f8eeb34aef0\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd48hnnl" Nov 29 07:22:21 crc kubenswrapper[4731]: E1129 07:22:21.767081 4731 secret.go:188] Couldn't get secret 
openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 29 07:22:21 crc kubenswrapper[4731]: E1129 07:22:21.767152 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/530e0034-afa1-42a5-ae59-1f8eeb34aef0-cert podName:530e0034-afa1-42a5-ae59-1f8eeb34aef0 nodeName:}" failed. No retries permitted until 2025-11-29 07:22:23.767129469 +0000 UTC m=+982.657490572 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/530e0034-afa1-42a5-ae59-1f8eeb34aef0-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd48hnnl" (UID: "530e0034-afa1-42a5-ae59-1f8eeb34aef0") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 29 07:22:21 crc kubenswrapper[4731]: I1129 07:22:21.791497 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-769dc69bc-tbxj8"] Nov 29 07:22:21 crc kubenswrapper[4731]: W1129 07:22:21.797961 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d6f5aa5_06c2_4196_a217_68aa690b6e7f.slice/crio-3a754665a1c3afbcdb375b266f47900da3aad7b54a770037038f732cee38f386 WatchSource:0}: Error finding container 3a754665a1c3afbcdb375b266f47900da3aad7b54a770037038f732cee38f386: Status 404 returned error can't find the container with id 3a754665a1c3afbcdb375b266f47900da3aad7b54a770037038f732cee38f386 Nov 29 07:22:21 crc kubenswrapper[4731]: I1129 07:22:21.798506 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-fnkt5"] Nov 29 07:22:21 crc kubenswrapper[4731]: I1129 07:22:21.806155 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-dlncb" 
event={"ID":"a35b9e52-221d-4c25-82d9-46fdd8d6e5ea","Type":"ContainerStarted","Data":"3be4f6f0ac1e7d59d9bb0a98c70cfa26acd842b2ffadc3c9ad6514aad4c51daf"} Nov 29 07:22:21 crc kubenswrapper[4731]: W1129 07:22:21.817944 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5e2ef3fa_22be_4ac5_9cce_09227be5538b.slice/crio-f0fa3caf3da9cabe352691f1cd299f6e47871a051ca07e62b586c23b4c81dd64 WatchSource:0}: Error finding container f0fa3caf3da9cabe352691f1cd299f6e47871a051ca07e62b586c23b4c81dd64: Status 404 returned error can't find the container with id f0fa3caf3da9cabe352691f1cd299f6e47871a051ca07e62b586c23b4c81dd64 Nov 29 07:22:21 crc kubenswrapper[4731]: I1129 07:22:21.824321 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-zhx77" event={"ID":"f7c082ea-a878-4069-9c48-96d4210f909a","Type":"ContainerStarted","Data":"686f5dbee1de07709cf36de5cc34309ae41f10cbd19ad68a3b93305ac5005399"} Nov 29 07:22:21 crc kubenswrapper[4731]: I1129 07:22:21.824372 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-cdjtg" event={"ID":"9e9c951b-fd4b-408d-a01c-0288201c0227","Type":"ContainerStarted","Data":"e24c2fa2a6cd69c32634313bc8f3075415698baced4ed42be904d485f626b15d"} Nov 29 07:22:21 crc kubenswrapper[4731]: E1129 07:22:21.827155 4731 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gfmt4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5854674fcc-fnkt5_openstack-operators(5e2ef3fa-22be-4ac5-9cce-09227be5538b): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 29 07:22:21 crc kubenswrapper[4731]: E1129 07:22:21.827389 4731 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.142:5001/openstack-k8s-operators/telemetry-operator:4dd495fa0010c74f023c0a9e9a9ae698b8ce4d09,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bp82q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-c98454947-cq6kc_openstack-operators(cf83d1d1-1d33-4905-ae31-038a7afbd230): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 29 07:22:21 crc kubenswrapper[4731]: I1129 07:22:21.827901 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9wkq4" event={"ID":"c448f643-f2f4-403d-b235-24ac74755cdf","Type":"ContainerStarted","Data":"03e02f2eeeb1c75d7577546944e2d4c34eb28689ae33119dcaa11943696ba1b6"} Nov 29 
07:22:21 crc kubenswrapper[4731]: E1129 07:22:21.830664 4731 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bp82q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-c98454947-cq6kc_openstack-operators(cf83d1d1-1d33-4905-ae31-038a7afbd230): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 29 07:22:21 crc kubenswrapper[4731]: E1129 07:22:21.832156 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: 
\"pull QPS exceeded\"]" pod="openstack-operators/telemetry-operator-controller-manager-c98454947-cq6kc" podUID="cf83d1d1-1d33-4905-ae31-038a7afbd230" Nov 29 07:22:21 crc kubenswrapper[4731]: E1129 07:22:21.845959 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9wkq4" podUID="c448f643-f2f4-403d-b235-24ac74755cdf" Nov 29 07:22:21 crc kubenswrapper[4731]: E1129 07:22:21.846181 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/placement-operator-controller-manager-78f8948974-zhx77" podUID="f7c082ea-a878-4069-9c48-96d4210f909a" Nov 29 07:22:21 crc kubenswrapper[4731]: E1129 07:22:21.846445 4731 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gfmt4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5854674fcc-fnkt5_openstack-operators(5e2ef3fa-22be-4ac5-9cce-09227be5538b): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 29 07:22:21 crc kubenswrapper[4731]: I1129 07:22:21.847096 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-kmj72" event={"ID":"a6c4aff1-120b-4136-851e-469ebfc6a9ea","Type":"ContainerStarted","Data":"c05505cf46a391bd4b9e12ad68f19a57953550c349596e17a46268fad11c039e"} Nov 29 07:22:21 crc kubenswrapper[4731]: E1129 07:22:21.847855 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/test-operator-controller-manager-5854674fcc-fnkt5" podUID="5e2ef3fa-22be-4ac5-9cce-09227be5538b" Nov 29 07:22:21 crc kubenswrapper[4731]: I1129 07:22:21.877045 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/telemetry-operator-controller-manager-c98454947-cq6kc"] Nov 29 07:22:21 crc kubenswrapper[4731]: I1129 07:22:21.900137 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-k89xw" event={"ID":"eff57485-877d-4e3e-95a2-ffc9c5ac4f0b","Type":"ContainerStarted","Data":"5717e8f5625e600f8dd564997a670aae02f9412bd249a22752d8d45f08635e86"} Nov 29 07:22:21 crc kubenswrapper[4731]: I1129 07:22:21.924000 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-49fl6" event={"ID":"a9cc0c44-f184-47aa-9f26-78375628a187","Type":"ContainerStarted","Data":"3e2df66bf5b2f542b31daa9745395db38d29284b7db7a18beebca02c51313be0"} Nov 29 07:22:21 crc kubenswrapper[4731]: I1129 07:22:21.945597 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-97dng" event={"ID":"9292e72f-2b6c-4a88-9a75-e8f55cda383a","Type":"ContainerStarted","Data":"511b68e64970d75ef9b7709bb9d69032dc592776e0fa87dc157edafe5f91b9cb"} Nov 29 07:22:21 crc kubenswrapper[4731]: I1129 07:22:21.952454 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-hdhqj" event={"ID":"7664fa66-a9e5-4617-88c4-d4bdeb5f2ea9","Type":"ContainerStarted","Data":"80c5dbe170bd94f93ff86d67a4abc5622439a66c40bb69933c58af1b15fe7d36"} Nov 29 07:22:21 crc kubenswrapper[4731]: I1129 07:22:21.958520 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-xkgrl" event={"ID":"573641b3-8529-4a47-a0f6-379f2838dc27","Type":"ContainerStarted","Data":"d1e5a2e06dff97041a7de072d188e029743cd95328c7e219107159dd51dcef21"} Nov 29 07:22:21 crc kubenswrapper[4731]: I1129 07:22:21.965255 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-jmtwc" event={"ID":"a8ecb76b-3826-4e47-920c-e0d9e3c18e38","Type":"ContainerStarted","Data":"8c75e31f3fe4b2d4cdca63a51d0d714398785557dda7c9af71a3e7800c783d6a"} Nov 29 07:22:21 crc kubenswrapper[4731]: E1129 07:22:21.967523 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-xkgrl" podUID="573641b3-8529-4a47-a0f6-379f2838dc27" Nov 29 07:22:21 crc kubenswrapper[4731]: I1129 07:22:21.970470 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-mc6kc" event={"ID":"3d80a1f9-6d6a-41e1-acee-640ffc57a440","Type":"ContainerStarted","Data":"0e4d07a2b36269847e3354d0b83e1327eefbb76492fa48f0b17bdba00080ad70"} Nov 29 07:22:21 crc kubenswrapper[4731]: I1129 07:22:21.973313 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-zvvpp" event={"ID":"9c9a7893-7770-49ae-8a0f-44168941a55b","Type":"ContainerStarted","Data":"a7d32802a2d870098662a243977e3c2086159c402ef7a477c4d11be6766ae6ac"} Nov 29 07:22:21 crc kubenswrapper[4731]: I1129 07:22:21.975116 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-6546668bfd-gc2xd" event={"ID":"8b165a75-263b-42e0-9521-85bf1a15dcbf","Type":"ContainerStarted","Data":"a9ff4b0f275844140c7ecd10b89be41c8f6f85bdb929b5c357ac1c500774e918"} Nov 29 07:22:21 crc kubenswrapper[4731]: I1129 07:22:21.993323 4731 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-vftx4" event={"ID":"9fef0c9a-6dd7-4034-99c0-68409ad7d697","Type":"ContainerStarted","Data":"a28b1c0054debb5123787b52cadbd1c3cea686cce2543373558389a35ef2a568"} Nov 29 07:22:22 crc kubenswrapper[4731]: I1129 07:22:22.003991 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-6mxrn" event={"ID":"99aed1ca-e7d9-409c-91fa-439e52342da8","Type":"ContainerStarted","Data":"6fa33f299c5094c5596ac1206df0e011c8f9bc57bb8bee055492ad24aefb4c44"} Nov 29 07:22:22 crc kubenswrapper[4731]: I1129 07:22:22.011534 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-kc7tq" event={"ID":"8ed7bfa3-ce11-490d-80f4-acd9ca51f698","Type":"ContainerStarted","Data":"8af46dd50c7ff81984b28d1233e4f60abe9736ee2e0356b23d843f9c3b55a99d"} Nov 29 07:22:22 crc kubenswrapper[4731]: I1129 07:22:22.088062 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-gs57p" Nov 29 07:22:22 crc kubenswrapper[4731]: I1129 07:22:22.179164 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/fbd333a8-95e7-47dc-8b2c-9ea2154d6fb9-webhook-certs\") pod \"openstack-operator-controller-manager-76c96f5dc5-hsjk8\" (UID: \"fbd333a8-95e7-47dc-8b2c-9ea2154d6fb9\") " pod="openstack-operators/openstack-operator-controller-manager-76c96f5dc5-hsjk8" Nov 29 07:22:22 crc kubenswrapper[4731]: E1129 07:22:22.179364 4731 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 29 07:22:22 crc kubenswrapper[4731]: E1129 07:22:22.179424 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fbd333a8-95e7-47dc-8b2c-9ea2154d6fb9-webhook-certs 
podName:fbd333a8-95e7-47dc-8b2c-9ea2154d6fb9 nodeName:}" failed. No retries permitted until 2025-11-29 07:22:24.1794037 +0000 UTC m=+983.069764803 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/fbd333a8-95e7-47dc-8b2c-9ea2154d6fb9-webhook-certs") pod "openstack-operator-controller-manager-76c96f5dc5-hsjk8" (UID: "fbd333a8-95e7-47dc-8b2c-9ea2154d6fb9") : secret "webhook-server-cert" not found Nov 29 07:22:22 crc kubenswrapper[4731]: I1129 07:22:22.180040 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fbd333a8-95e7-47dc-8b2c-9ea2154d6fb9-metrics-certs\") pod \"openstack-operator-controller-manager-76c96f5dc5-hsjk8\" (UID: \"fbd333a8-95e7-47dc-8b2c-9ea2154d6fb9\") " pod="openstack-operators/openstack-operator-controller-manager-76c96f5dc5-hsjk8" Nov 29 07:22:22 crc kubenswrapper[4731]: E1129 07:22:22.180225 4731 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 29 07:22:22 crc kubenswrapper[4731]: E1129 07:22:22.180252 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fbd333a8-95e7-47dc-8b2c-9ea2154d6fb9-metrics-certs podName:fbd333a8-95e7-47dc-8b2c-9ea2154d6fb9 nodeName:}" failed. No retries permitted until 2025-11-29 07:22:24.180243234 +0000 UTC m=+983.070604337 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fbd333a8-95e7-47dc-8b2c-9ea2154d6fb9-metrics-certs") pod "openstack-operator-controller-manager-76c96f5dc5-hsjk8" (UID: "fbd333a8-95e7-47dc-8b2c-9ea2154d6fb9") : secret "metrics-server-cert" not found Nov 29 07:22:22 crc kubenswrapper[4731]: I1129 07:22:22.394994 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gs57p"] Nov 29 07:22:23 crc kubenswrapper[4731]: I1129 07:22:23.020896 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-fnkt5" event={"ID":"5e2ef3fa-22be-4ac5-9cce-09227be5538b","Type":"ContainerStarted","Data":"f0fa3caf3da9cabe352691f1cd299f6e47871a051ca07e62b586c23b4c81dd64"} Nov 29 07:22:23 crc kubenswrapper[4731]: I1129 07:22:23.029749 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-c98454947-cq6kc" event={"ID":"cf83d1d1-1d33-4905-ae31-038a7afbd230","Type":"ContainerStarted","Data":"9c03815f525cc3a1c279c8ce10a5b7a56ae72edf8380e144072a39ba59d682fc"} Nov 29 07:22:23 crc kubenswrapper[4731]: E1129 07:22:23.030236 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/test-operator-controller-manager-5854674fcc-fnkt5" podUID="5e2ef3fa-22be-4ac5-9cce-09227be5538b" Nov 29 07:22:23 crc kubenswrapper[4731]: I1129 07:22:23.032223 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-tbxj8" 
event={"ID":"9d6f5aa5-06c2-4196-a217-68aa690b6e7f","Type":"ContainerStarted","Data":"3a754665a1c3afbcdb375b266f47900da3aad7b54a770037038f732cee38f386"} Nov 29 07:22:23 crc kubenswrapper[4731]: E1129 07:22:23.034966 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.142:5001/openstack-k8s-operators/telemetry-operator:4dd495fa0010c74f023c0a9e9a9ae698b8ce4d09\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/telemetry-operator-controller-manager-c98454947-cq6kc" podUID="cf83d1d1-1d33-4905-ae31-038a7afbd230" Nov 29 07:22:23 crc kubenswrapper[4731]: E1129 07:22:23.047826 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9wkq4" podUID="c448f643-f2f4-403d-b235-24ac74755cdf" Nov 29 07:22:23 crc kubenswrapper[4731]: E1129 07:22:23.056477 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-xkgrl" podUID="573641b3-8529-4a47-a0f6-379f2838dc27" Nov 29 07:22:23 crc kubenswrapper[4731]: E1129 07:22:23.091837 4731 pod_workers.go:1301] "Error 
syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/placement-operator-controller-manager-78f8948974-zhx77" podUID="f7c082ea-a878-4069-9c48-96d4210f909a" Nov 29 07:22:23 crc kubenswrapper[4731]: I1129 07:22:23.311661 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/77a02080-6b69-441d-a6a3-ac95c4c697fe-cert\") pod \"infra-operator-controller-manager-57548d458d-xs92w\" (UID: \"77a02080-6b69-441d-a6a3-ac95c4c697fe\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-xs92w" Nov 29 07:22:23 crc kubenswrapper[4731]: E1129 07:22:23.312170 4731 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Nov 29 07:22:23 crc kubenswrapper[4731]: E1129 07:22:23.312232 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/77a02080-6b69-441d-a6a3-ac95c4c697fe-cert podName:77a02080-6b69-441d-a6a3-ac95c4c697fe nodeName:}" failed. No retries permitted until 2025-11-29 07:22:27.312213945 +0000 UTC m=+986.202575048 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/77a02080-6b69-441d-a6a3-ac95c4c697fe-cert") pod "infra-operator-controller-manager-57548d458d-xs92w" (UID: "77a02080-6b69-441d-a6a3-ac95c4c697fe") : secret "infra-operator-webhook-server-cert" not found Nov 29 07:22:23 crc kubenswrapper[4731]: I1129 07:22:23.820691 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/530e0034-afa1-42a5-ae59-1f8eeb34aef0-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd48hnnl\" (UID: \"530e0034-afa1-42a5-ae59-1f8eeb34aef0\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd48hnnl" Nov 29 07:22:23 crc kubenswrapper[4731]: E1129 07:22:23.820998 4731 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 29 07:22:23 crc kubenswrapper[4731]: E1129 07:22:23.821121 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/530e0034-afa1-42a5-ae59-1f8eeb34aef0-cert podName:530e0034-afa1-42a5-ae59-1f8eeb34aef0 nodeName:}" failed. No retries permitted until 2025-11-29 07:22:27.821093407 +0000 UTC m=+986.711454510 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/530e0034-afa1-42a5-ae59-1f8eeb34aef0-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd48hnnl" (UID: "530e0034-afa1-42a5-ae59-1f8eeb34aef0") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 29 07:22:24 crc kubenswrapper[4731]: I1129 07:22:24.047112 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-gs57p" podUID="7f2bde7d-7615-4f9d-ac5b-e65415c0d078" containerName="registry-server" containerID="cri-o://d38a796f10462cd5a9dff453131bec63dca7aaee814e77dea74627a72a04825f" gracePeriod=2 Nov 29 07:22:24 crc kubenswrapper[4731]: E1129 07:22:24.052682 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/test-operator-controller-manager-5854674fcc-fnkt5" podUID="5e2ef3fa-22be-4ac5-9cce-09227be5538b" Nov 29 07:22:24 crc kubenswrapper[4731]: E1129 07:22:24.053728 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.142:5001/openstack-k8s-operators/telemetry-operator:4dd495fa0010c74f023c0a9e9a9ae698b8ce4d09\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/telemetry-operator-controller-manager-c98454947-cq6kc" podUID="cf83d1d1-1d33-4905-ae31-038a7afbd230" Nov 29 07:22:24 crc kubenswrapper[4731]: I1129 
07:22:24.237160 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/fbd333a8-95e7-47dc-8b2c-9ea2154d6fb9-webhook-certs\") pod \"openstack-operator-controller-manager-76c96f5dc5-hsjk8\" (UID: \"fbd333a8-95e7-47dc-8b2c-9ea2154d6fb9\") " pod="openstack-operators/openstack-operator-controller-manager-76c96f5dc5-hsjk8" Nov 29 07:22:24 crc kubenswrapper[4731]: I1129 07:22:24.237266 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fbd333a8-95e7-47dc-8b2c-9ea2154d6fb9-metrics-certs\") pod \"openstack-operator-controller-manager-76c96f5dc5-hsjk8\" (UID: \"fbd333a8-95e7-47dc-8b2c-9ea2154d6fb9\") " pod="openstack-operators/openstack-operator-controller-manager-76c96f5dc5-hsjk8" Nov 29 07:22:24 crc kubenswrapper[4731]: E1129 07:22:24.237503 4731 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 29 07:22:24 crc kubenswrapper[4731]: E1129 07:22:24.237640 4731 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 29 07:22:24 crc kubenswrapper[4731]: E1129 07:22:24.237752 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fbd333a8-95e7-47dc-8b2c-9ea2154d6fb9-metrics-certs podName:fbd333a8-95e7-47dc-8b2c-9ea2154d6fb9 nodeName:}" failed. No retries permitted until 2025-11-29 07:22:28.237724766 +0000 UTC m=+987.128085879 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fbd333a8-95e7-47dc-8b2c-9ea2154d6fb9-metrics-certs") pod "openstack-operator-controller-manager-76c96f5dc5-hsjk8" (UID: "fbd333a8-95e7-47dc-8b2c-9ea2154d6fb9") : secret "metrics-server-cert" not found Nov 29 07:22:24 crc kubenswrapper[4731]: E1129 07:22:24.237791 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fbd333a8-95e7-47dc-8b2c-9ea2154d6fb9-webhook-certs podName:fbd333a8-95e7-47dc-8b2c-9ea2154d6fb9 nodeName:}" failed. No retries permitted until 2025-11-29 07:22:28.237782007 +0000 UTC m=+987.128143110 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/fbd333a8-95e7-47dc-8b2c-9ea2154d6fb9-webhook-certs") pod "openstack-operator-controller-manager-76c96f5dc5-hsjk8" (UID: "fbd333a8-95e7-47dc-8b2c-9ea2154d6fb9") : secret "webhook-server-cert" not found Nov 29 07:22:26 crc kubenswrapper[4731]: I1129 07:22:26.070312 4731 generic.go:334] "Generic (PLEG): container finished" podID="7f2bde7d-7615-4f9d-ac5b-e65415c0d078" containerID="d38a796f10462cd5a9dff453131bec63dca7aaee814e77dea74627a72a04825f" exitCode=0 Nov 29 07:22:26 crc kubenswrapper[4731]: I1129 07:22:26.070489 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gs57p" event={"ID":"7f2bde7d-7615-4f9d-ac5b-e65415c0d078","Type":"ContainerDied","Data":"d38a796f10462cd5a9dff453131bec63dca7aaee814e77dea74627a72a04825f"} Nov 29 07:22:27 crc kubenswrapper[4731]: I1129 07:22:27.313392 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/77a02080-6b69-441d-a6a3-ac95c4c697fe-cert\") pod \"infra-operator-controller-manager-57548d458d-xs92w\" (UID: \"77a02080-6b69-441d-a6a3-ac95c4c697fe\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-xs92w" Nov 29 07:22:27 crc 
kubenswrapper[4731]: E1129 07:22:27.313620 4731 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Nov 29 07:22:27 crc kubenswrapper[4731]: E1129 07:22:27.314049 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/77a02080-6b69-441d-a6a3-ac95c4c697fe-cert podName:77a02080-6b69-441d-a6a3-ac95c4c697fe nodeName:}" failed. No retries permitted until 2025-11-29 07:22:35.314022063 +0000 UTC m=+994.204383166 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/77a02080-6b69-441d-a6a3-ac95c4c697fe-cert") pod "infra-operator-controller-manager-57548d458d-xs92w" (UID: "77a02080-6b69-441d-a6a3-ac95c4c697fe") : secret "infra-operator-webhook-server-cert" not found Nov 29 07:22:27 crc kubenswrapper[4731]: I1129 07:22:27.504254 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gs57p" Nov 29 07:22:27 crc kubenswrapper[4731]: I1129 07:22:27.516198 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f2bde7d-7615-4f9d-ac5b-e65415c0d078-catalog-content\") pod \"7f2bde7d-7615-4f9d-ac5b-e65415c0d078\" (UID: \"7f2bde7d-7615-4f9d-ac5b-e65415c0d078\") " Nov 29 07:22:27 crc kubenswrapper[4731]: I1129 07:22:27.516250 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f2bde7d-7615-4f9d-ac5b-e65415c0d078-utilities\") pod \"7f2bde7d-7615-4f9d-ac5b-e65415c0d078\" (UID: \"7f2bde7d-7615-4f9d-ac5b-e65415c0d078\") " Nov 29 07:22:27 crc kubenswrapper[4731]: I1129 07:22:27.516334 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gsq7p\" (UniqueName: 
\"kubernetes.io/projected/7f2bde7d-7615-4f9d-ac5b-e65415c0d078-kube-api-access-gsq7p\") pod \"7f2bde7d-7615-4f9d-ac5b-e65415c0d078\" (UID: \"7f2bde7d-7615-4f9d-ac5b-e65415c0d078\") " Nov 29 07:22:27 crc kubenswrapper[4731]: I1129 07:22:27.517307 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f2bde7d-7615-4f9d-ac5b-e65415c0d078-utilities" (OuterVolumeSpecName: "utilities") pod "7f2bde7d-7615-4f9d-ac5b-e65415c0d078" (UID: "7f2bde7d-7615-4f9d-ac5b-e65415c0d078"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:22:27 crc kubenswrapper[4731]: I1129 07:22:27.524789 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f2bde7d-7615-4f9d-ac5b-e65415c0d078-kube-api-access-gsq7p" (OuterVolumeSpecName: "kube-api-access-gsq7p") pod "7f2bde7d-7615-4f9d-ac5b-e65415c0d078" (UID: "7f2bde7d-7615-4f9d-ac5b-e65415c0d078"). InnerVolumeSpecName "kube-api-access-gsq7p". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:22:27 crc kubenswrapper[4731]: I1129 07:22:27.593214 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f2bde7d-7615-4f9d-ac5b-e65415c0d078-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7f2bde7d-7615-4f9d-ac5b-e65415c0d078" (UID: "7f2bde7d-7615-4f9d-ac5b-e65415c0d078"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:22:27 crc kubenswrapper[4731]: I1129 07:22:27.618110 4731 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f2bde7d-7615-4f9d-ac5b-e65415c0d078-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 07:22:27 crc kubenswrapper[4731]: I1129 07:22:27.618159 4731 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f2bde7d-7615-4f9d-ac5b-e65415c0d078-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 07:22:27 crc kubenswrapper[4731]: I1129 07:22:27.618178 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gsq7p\" (UniqueName: \"kubernetes.io/projected/7f2bde7d-7615-4f9d-ac5b-e65415c0d078-kube-api-access-gsq7p\") on node \"crc\" DevicePath \"\"" Nov 29 07:22:27 crc kubenswrapper[4731]: I1129 07:22:27.922499 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/530e0034-afa1-42a5-ae59-1f8eeb34aef0-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd48hnnl\" (UID: \"530e0034-afa1-42a5-ae59-1f8eeb34aef0\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd48hnnl" Nov 29 07:22:27 crc kubenswrapper[4731]: E1129 07:22:27.922761 4731 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 29 07:22:27 crc kubenswrapper[4731]: E1129 07:22:27.922873 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/530e0034-afa1-42a5-ae59-1f8eeb34aef0-cert podName:530e0034-afa1-42a5-ae59-1f8eeb34aef0 nodeName:}" failed. No retries permitted until 2025-11-29 07:22:35.922844985 +0000 UTC m=+994.813206088 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/530e0034-afa1-42a5-ae59-1f8eeb34aef0-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd48hnnl" (UID: "530e0034-afa1-42a5-ae59-1f8eeb34aef0") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 29 07:22:28 crc kubenswrapper[4731]: I1129 07:22:28.089915 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gs57p" event={"ID":"7f2bde7d-7615-4f9d-ac5b-e65415c0d078","Type":"ContainerDied","Data":"1b8d85b83ce2b09933fdd88a35621db7017ee6abd4e9d0c320801c04d2f8e55a"} Nov 29 07:22:28 crc kubenswrapper[4731]: I1129 07:22:28.089985 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gs57p" Nov 29 07:22:28 crc kubenswrapper[4731]: I1129 07:22:28.090001 4731 scope.go:117] "RemoveContainer" containerID="d38a796f10462cd5a9dff453131bec63dca7aaee814e77dea74627a72a04825f" Nov 29 07:22:28 crc kubenswrapper[4731]: I1129 07:22:28.117043 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gs57p"] Nov 29 07:22:28 crc kubenswrapper[4731]: I1129 07:22:28.123074 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-gs57p"] Nov 29 07:22:28 crc kubenswrapper[4731]: I1129 07:22:28.329870 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/fbd333a8-95e7-47dc-8b2c-9ea2154d6fb9-webhook-certs\") pod \"openstack-operator-controller-manager-76c96f5dc5-hsjk8\" (UID: \"fbd333a8-95e7-47dc-8b2c-9ea2154d6fb9\") " pod="openstack-operators/openstack-operator-controller-manager-76c96f5dc5-hsjk8" Nov 29 07:22:28 crc kubenswrapper[4731]: I1129 07:22:28.329923 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/fbd333a8-95e7-47dc-8b2c-9ea2154d6fb9-metrics-certs\") pod \"openstack-operator-controller-manager-76c96f5dc5-hsjk8\" (UID: \"fbd333a8-95e7-47dc-8b2c-9ea2154d6fb9\") " pod="openstack-operators/openstack-operator-controller-manager-76c96f5dc5-hsjk8" Nov 29 07:22:28 crc kubenswrapper[4731]: E1129 07:22:28.330112 4731 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 29 07:22:28 crc kubenswrapper[4731]: E1129 07:22:28.330182 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fbd333a8-95e7-47dc-8b2c-9ea2154d6fb9-metrics-certs podName:fbd333a8-95e7-47dc-8b2c-9ea2154d6fb9 nodeName:}" failed. No retries permitted until 2025-11-29 07:22:36.330161911 +0000 UTC m=+995.220523014 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fbd333a8-95e7-47dc-8b2c-9ea2154d6fb9-metrics-certs") pod "openstack-operator-controller-manager-76c96f5dc5-hsjk8" (UID: "fbd333a8-95e7-47dc-8b2c-9ea2154d6fb9") : secret "metrics-server-cert" not found Nov 29 07:22:28 crc kubenswrapper[4731]: E1129 07:22:28.330176 4731 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 29 07:22:28 crc kubenswrapper[4731]: E1129 07:22:28.330302 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fbd333a8-95e7-47dc-8b2c-9ea2154d6fb9-webhook-certs podName:fbd333a8-95e7-47dc-8b2c-9ea2154d6fb9 nodeName:}" failed. No retries permitted until 2025-11-29 07:22:36.330265004 +0000 UTC m=+995.220626277 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/fbd333a8-95e7-47dc-8b2c-9ea2154d6fb9-webhook-certs") pod "openstack-operator-controller-manager-76c96f5dc5-hsjk8" (UID: "fbd333a8-95e7-47dc-8b2c-9ea2154d6fb9") : secret "webhook-server-cert" not found Nov 29 07:22:29 crc kubenswrapper[4731]: I1129 07:22:29.819659 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f2bde7d-7615-4f9d-ac5b-e65415c0d078" path="/var/lib/kubelet/pods/7f2bde7d-7615-4f9d-ac5b-e65415c0d078/volumes" Nov 29 07:22:34 crc kubenswrapper[4731]: E1129 07:22:34.091118 4731 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:ecf7be921850bdc04697ed1b332bab39ad2a64e4e45c2a445c04f9bae6ac61b5" Nov 29 07:22:34 crc kubenswrapper[4731]: E1129 07:22:34.091797 4731 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:ecf7be921850bdc04697ed1b332bab39ad2a64e4e45c2a445c04f9bae6ac61b5,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2cp5f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-6546668bfd-gc2xd_openstack-operators(8b165a75-263b-42e0-9521-85bf1a15dcbf): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 07:22:34 crc kubenswrapper[4731]: E1129 07:22:34.596458 4731 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openstack-k8s-operators/heat-operator@sha256:c4abfc148600dfa85915f3dc911d988ea2335f26cb6b8d749fe79bfe53e5e429" Nov 29 07:22:34 crc kubenswrapper[4731]: E1129 07:22:34.596753 4731 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:c4abfc148600dfa85915f3dc911d988ea2335f26cb6b8d749fe79bfe53e5e429,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qxhf5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-5f64f6f8bb-49fl6_openstack-operators(a9cc0c44-f184-47aa-9f26-78375628a187): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 07:22:35 crc kubenswrapper[4731]: E1129 07:22:35.365894 4731 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:9f68d7bc8c6bce38f46dee8a8272d5365c49fe7b32b2af52e8ac884e212f3a85" Nov 29 07:22:35 crc kubenswrapper[4731]: E1129 07:22:35.366380 4731 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:9f68d7bc8c6bce38f46dee8a8272d5365c49fe7b32b2af52e8ac884e212f3a85,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lsp6c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-78b4bc895b-vftx4_openstack-operators(9fef0c9a-6dd7-4034-99c0-68409ad7d697): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 07:22:35 crc kubenswrapper[4731]: I1129 07:22:35.377952 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/77a02080-6b69-441d-a6a3-ac95c4c697fe-cert\") pod \"infra-operator-controller-manager-57548d458d-xs92w\" (UID: \"77a02080-6b69-441d-a6a3-ac95c4c697fe\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-xs92w" Nov 29 07:22:35 crc kubenswrapper[4731]: I1129 07:22:35.404363 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/77a02080-6b69-441d-a6a3-ac95c4c697fe-cert\") pod \"infra-operator-controller-manager-57548d458d-xs92w\" (UID: \"77a02080-6b69-441d-a6a3-ac95c4c697fe\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-xs92w" Nov 29 07:22:35 crc kubenswrapper[4731]: I1129 07:22:35.663306 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-57548d458d-xs92w" Nov 29 07:22:35 crc kubenswrapper[4731]: E1129 07:22:35.884664 4731 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/glance-operator@sha256:440cde33d3a2a0c545cd1c110a3634eb85544370f448865b97a13c38034b0172" Nov 29 07:22:35 crc kubenswrapper[4731]: E1129 07:22:35.884961 4731 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/glance-operator@sha256:440cde33d3a2a0c545cd1c110a3634eb85544370f448865b97a13c38034b0172,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-j2vm6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-668d9c48b9-mc6kc_openstack-operators(3d80a1f9-6d6a-41e1-acee-640ffc57a440): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 07:22:35 crc kubenswrapper[4731]: I1129 07:22:35.987028 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/530e0034-afa1-42a5-ae59-1f8eeb34aef0-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd48hnnl\" (UID: \"530e0034-afa1-42a5-ae59-1f8eeb34aef0\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd48hnnl" Nov 29 07:22:36 crc kubenswrapper[4731]: I1129 07:22:36.011479 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/530e0034-afa1-42a5-ae59-1f8eeb34aef0-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd48hnnl\" (UID: \"530e0034-afa1-42a5-ae59-1f8eeb34aef0\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd48hnnl" Nov 29 07:22:36 crc kubenswrapper[4731]: I1129 07:22:36.172806 4731 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd48hnnl" Nov 29 07:22:36 crc kubenswrapper[4731]: I1129 07:22:36.393771 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/fbd333a8-95e7-47dc-8b2c-9ea2154d6fb9-webhook-certs\") pod \"openstack-operator-controller-manager-76c96f5dc5-hsjk8\" (UID: \"fbd333a8-95e7-47dc-8b2c-9ea2154d6fb9\") " pod="openstack-operators/openstack-operator-controller-manager-76c96f5dc5-hsjk8" Nov 29 07:22:36 crc kubenswrapper[4731]: I1129 07:22:36.393827 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fbd333a8-95e7-47dc-8b2c-9ea2154d6fb9-metrics-certs\") pod \"openstack-operator-controller-manager-76c96f5dc5-hsjk8\" (UID: \"fbd333a8-95e7-47dc-8b2c-9ea2154d6fb9\") " pod="openstack-operators/openstack-operator-controller-manager-76c96f5dc5-hsjk8" Nov 29 07:22:36 crc kubenswrapper[4731]: I1129 07:22:36.399177 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/fbd333a8-95e7-47dc-8b2c-9ea2154d6fb9-webhook-certs\") pod \"openstack-operator-controller-manager-76c96f5dc5-hsjk8\" (UID: \"fbd333a8-95e7-47dc-8b2c-9ea2154d6fb9\") " pod="openstack-operators/openstack-operator-controller-manager-76c96f5dc5-hsjk8" Nov 29 07:22:36 crc kubenswrapper[4731]: I1129 07:22:36.399292 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fbd333a8-95e7-47dc-8b2c-9ea2154d6fb9-metrics-certs\") pod \"openstack-operator-controller-manager-76c96f5dc5-hsjk8\" (UID: \"fbd333a8-95e7-47dc-8b2c-9ea2154d6fb9\") " pod="openstack-operators/openstack-operator-controller-manager-76c96f5dc5-hsjk8" Nov 29 07:22:36 crc kubenswrapper[4731]: E1129 07:22:36.543517 4731 log.go:32] "PullImage from 
image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59" Nov 29 07:22:36 crc kubenswrapper[4731]: E1129 07:22:36.543833 4731 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wn24j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-b6456fdb6-jmtwc_openstack-operators(a8ecb76b-3826-4e47-920c-e0d9e3c18e38): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 07:22:36 crc kubenswrapper[4731]: I1129 07:22:36.592594 4731 scope.go:117] "RemoveContainer" containerID="dd33a018284eda34a7e2f81f599c132e462f101b6b8240e95ad85b7308bf5284" Nov 29 07:22:36 crc kubenswrapper[4731]: I1129 07:22:36.631794 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-76c96f5dc5-hsjk8" Nov 29 07:22:36 crc kubenswrapper[4731]: I1129 07:22:36.640312 4731 scope.go:117] "RemoveContainer" containerID="1b55a4ee92926165c6e5a05ef9a19d16cb096400df440f084686abdca21eccbe" Nov 29 07:22:37 crc kubenswrapper[4731]: I1129 07:22:37.110553 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-xs92w"] Nov 29 07:22:37 crc kubenswrapper[4731]: I1129 07:22:37.223014 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd48hnnl"] Nov 29 07:22:38 crc kubenswrapper[4731]: I1129 07:22:38.192947 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-xs92w" event={"ID":"77a02080-6b69-441d-a6a3-ac95c4c697fe","Type":"ContainerStarted","Data":"2db89766c57c5afc05aa37935e757b9fb3418da504e9bbcc1ad83e3c9c910788"} Nov 29 07:22:38 crc kubenswrapper[4731]: I1129 07:22:38.195194 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd48hnnl" event={"ID":"530e0034-afa1-42a5-ae59-1f8eeb34aef0","Type":"ContainerStarted","Data":"eba7815e05f25c7835010d1faefac173c144813e40e59ae52b1245ad75bfa429"} Nov 29 07:22:39 crc kubenswrapper[4731]: I1129 07:22:39.882132 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-76c96f5dc5-hsjk8"] Nov 29 07:22:42 crc kubenswrapper[4731]: W1129 07:22:42.812535 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfbd333a8_95e7_47dc_8b2c_9ea2154d6fb9.slice/crio-802138281e4f7819af72f6fa757e9c65959fc5bcf0c5cd73b85fde6b67c3876c WatchSource:0}: Error finding container 
802138281e4f7819af72f6fa757e9c65959fc5bcf0c5cd73b85fde6b67c3876c: Status 404 returned error can't find the container with id 802138281e4f7819af72f6fa757e9c65959fc5bcf0c5cd73b85fde6b67c3876c Nov 29 07:22:43 crc kubenswrapper[4731]: I1129 07:22:43.245450 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-76c96f5dc5-hsjk8" event={"ID":"fbd333a8-95e7-47dc-8b2c-9ea2154d6fb9","Type":"ContainerStarted","Data":"802138281e4f7819af72f6fa757e9c65959fc5bcf0c5cd73b85fde6b67c3876c"} Nov 29 07:22:44 crc kubenswrapper[4731]: I1129 07:22:44.599057 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4cbd6"] Nov 29 07:22:44 crc kubenswrapper[4731]: E1129 07:22:44.599907 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f2bde7d-7615-4f9d-ac5b-e65415c0d078" containerName="extract-utilities" Nov 29 07:22:44 crc kubenswrapper[4731]: I1129 07:22:44.599928 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f2bde7d-7615-4f9d-ac5b-e65415c0d078" containerName="extract-utilities" Nov 29 07:22:44 crc kubenswrapper[4731]: E1129 07:22:44.599954 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f2bde7d-7615-4f9d-ac5b-e65415c0d078" containerName="registry-server" Nov 29 07:22:44 crc kubenswrapper[4731]: I1129 07:22:44.599963 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f2bde7d-7615-4f9d-ac5b-e65415c0d078" containerName="registry-server" Nov 29 07:22:44 crc kubenswrapper[4731]: E1129 07:22:44.600021 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f2bde7d-7615-4f9d-ac5b-e65415c0d078" containerName="extract-content" Nov 29 07:22:44 crc kubenswrapper[4731]: I1129 07:22:44.600034 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f2bde7d-7615-4f9d-ac5b-e65415c0d078" containerName="extract-content" Nov 29 07:22:44 crc kubenswrapper[4731]: I1129 07:22:44.600258 4731 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="7f2bde7d-7615-4f9d-ac5b-e65415c0d078" containerName="registry-server" Nov 29 07:22:44 crc kubenswrapper[4731]: I1129 07:22:44.601827 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4cbd6" Nov 29 07:22:44 crc kubenswrapper[4731]: I1129 07:22:44.608486 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4cbd6"] Nov 29 07:22:44 crc kubenswrapper[4731]: I1129 07:22:44.785405 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf025757-ec7f-4848-9d2f-376f774e2a83-utilities\") pod \"community-operators-4cbd6\" (UID: \"bf025757-ec7f-4848-9d2f-376f774e2a83\") " pod="openshift-marketplace/community-operators-4cbd6" Nov 29 07:22:44 crc kubenswrapper[4731]: I1129 07:22:44.785557 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf025757-ec7f-4848-9d2f-376f774e2a83-catalog-content\") pod \"community-operators-4cbd6\" (UID: \"bf025757-ec7f-4848-9d2f-376f774e2a83\") " pod="openshift-marketplace/community-operators-4cbd6" Nov 29 07:22:44 crc kubenswrapper[4731]: I1129 07:22:44.785631 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkpms\" (UniqueName: \"kubernetes.io/projected/bf025757-ec7f-4848-9d2f-376f774e2a83-kube-api-access-kkpms\") pod \"community-operators-4cbd6\" (UID: \"bf025757-ec7f-4848-9d2f-376f774e2a83\") " pod="openshift-marketplace/community-operators-4cbd6" Nov 29 07:22:44 crc kubenswrapper[4731]: I1129 07:22:44.887491 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf025757-ec7f-4848-9d2f-376f774e2a83-catalog-content\") pod 
\"community-operators-4cbd6\" (UID: \"bf025757-ec7f-4848-9d2f-376f774e2a83\") " pod="openshift-marketplace/community-operators-4cbd6" Nov 29 07:22:44 crc kubenswrapper[4731]: I1129 07:22:44.887596 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkpms\" (UniqueName: \"kubernetes.io/projected/bf025757-ec7f-4848-9d2f-376f774e2a83-kube-api-access-kkpms\") pod \"community-operators-4cbd6\" (UID: \"bf025757-ec7f-4848-9d2f-376f774e2a83\") " pod="openshift-marketplace/community-operators-4cbd6" Nov 29 07:22:44 crc kubenswrapper[4731]: I1129 07:22:44.887633 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf025757-ec7f-4848-9d2f-376f774e2a83-utilities\") pod \"community-operators-4cbd6\" (UID: \"bf025757-ec7f-4848-9d2f-376f774e2a83\") " pod="openshift-marketplace/community-operators-4cbd6" Nov 29 07:22:44 crc kubenswrapper[4731]: I1129 07:22:44.889062 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf025757-ec7f-4848-9d2f-376f774e2a83-catalog-content\") pod \"community-operators-4cbd6\" (UID: \"bf025757-ec7f-4848-9d2f-376f774e2a83\") " pod="openshift-marketplace/community-operators-4cbd6" Nov 29 07:22:44 crc kubenswrapper[4731]: I1129 07:22:44.889163 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf025757-ec7f-4848-9d2f-376f774e2a83-utilities\") pod \"community-operators-4cbd6\" (UID: \"bf025757-ec7f-4848-9d2f-376f774e2a83\") " pod="openshift-marketplace/community-operators-4cbd6" Nov 29 07:22:44 crc kubenswrapper[4731]: I1129 07:22:44.910776 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkpms\" (UniqueName: \"kubernetes.io/projected/bf025757-ec7f-4848-9d2f-376f774e2a83-kube-api-access-kkpms\") pod \"community-operators-4cbd6\" (UID: 
\"bf025757-ec7f-4848-9d2f-376f774e2a83\") " pod="openshift-marketplace/community-operators-4cbd6" Nov 29 07:22:45 crc kubenswrapper[4731]: I1129 07:22:45.085669 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4cbd6" Nov 29 07:22:45 crc kubenswrapper[4731]: I1129 07:22:45.267611 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-cdjtg" event={"ID":"9e9c951b-fd4b-408d-a01c-0288201c0227","Type":"ContainerStarted","Data":"374915f3e930f0a7a9f99ca644d63b3b0b535bf0f1ec670b6beb0fc844427578"} Nov 29 07:22:45 crc kubenswrapper[4731]: I1129 07:22:45.270647 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-tbxj8" event={"ID":"9d6f5aa5-06c2-4196-a217-68aa690b6e7f","Type":"ContainerStarted","Data":"e30bbe70a99ea8fc10f0565448d899799930cefc8c0f4fd26ec356c312baf0ed"} Nov 29 07:22:45 crc kubenswrapper[4731]: I1129 07:22:45.273027 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-97dng" event={"ID":"9292e72f-2b6c-4a88-9a75-e8f55cda383a","Type":"ContainerStarted","Data":"e0724403f8e9c4cf8e5f78cdc8a5392f54fa5dc1bb84c14c20dab10331eba67a"} Nov 29 07:22:45 crc kubenswrapper[4731]: I1129 07:22:45.275104 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-hdhqj" event={"ID":"7664fa66-a9e5-4617-88c4-d4bdeb5f2ea9","Type":"ContainerStarted","Data":"63a08b7d9be2944acc8b8cccfa54438fbaf6ed455a6c02fc436f48a25317dcb9"} Nov 29 07:22:45 crc kubenswrapper[4731]: I1129 07:22:45.277155 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-kmj72" 
event={"ID":"a6c4aff1-120b-4136-851e-469ebfc6a9ea","Type":"ContainerStarted","Data":"668d54787f7cff630c73dd4ea0b13077a80170dd06c2149ad44f56dea6f1640b"} Nov 29 07:22:45 crc kubenswrapper[4731]: I1129 07:22:45.282870 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-6mxrn" event={"ID":"99aed1ca-e7d9-409c-91fa-439e52342da8","Type":"ContainerStarted","Data":"eef8a66bad517ce6b4df9255b75346ecf68f824e12afe53e1bfc33bdbfad0363"} Nov 29 07:22:58 crc kubenswrapper[4731]: I1129 07:22:58.383287 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-dlncb" event={"ID":"a35b9e52-221d-4c25-82d9-46fdd8d6e5ea","Type":"ContainerStarted","Data":"f005ebc8b8396bea5ed1f17295085b7f8f3a6d63af7cd922f50f7dfda303eb70"} Nov 29 07:22:58 crc kubenswrapper[4731]: I1129 07:22:58.384506 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-kc7tq" event={"ID":"8ed7bfa3-ce11-490d-80f4-acd9ca51f698","Type":"ContainerStarted","Data":"ec078f9dbdc8be3b04e1368d29e92ca2a6f0cc5c14ad35b8d3f8dca5e778c473"} Nov 29 07:22:58 crc kubenswrapper[4731]: E1129 07:22:58.905923 4731 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Nov 29 07:22:58 crc kubenswrapper[4731]: E1129 07:22:58.906385 4731 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-k44fl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-9wkq4_openstack-operators(c448f643-f2f4-403d-b235-24ac74755cdf): 
ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 07:22:58 crc kubenswrapper[4731]: E1129 07:22:58.907698 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9wkq4" podUID="c448f643-f2f4-403d-b235-24ac74755cdf" Nov 29 07:22:59 crc kubenswrapper[4731]: I1129 07:22:59.400392 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-zvvpp" event={"ID":"9c9a7893-7770-49ae-8a0f-44168941a55b","Type":"ContainerStarted","Data":"e526b6fb946759fad7c41df6409d0803feb4096f603fc639192d904cf3b3b262"} Nov 29 07:22:59 crc kubenswrapper[4731]: I1129 07:22:59.833439 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4cbd6"] Nov 29 07:23:00 crc kubenswrapper[4731]: I1129 07:23:00.412124 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-fnkt5" event={"ID":"5e2ef3fa-22be-4ac5-9cce-09227be5538b","Type":"ContainerStarted","Data":"9e549481b21d25ec4ecb65a7490734a70fabb0c01352be80a364272bbd1c2781"} Nov 29 07:23:00 crc kubenswrapper[4731]: I1129 07:23:00.414861 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-k89xw" event={"ID":"eff57485-877d-4e3e-95a2-ffc9c5ac4f0b","Type":"ContainerStarted","Data":"bdf6b379193da4ec1fcd2a1a34c3a7d1e17d2369d9736132684d8981f526d2d3"} Nov 29 07:23:00 crc kubenswrapper[4731]: E1129 07:23:00.653788 4731 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Nov 29 07:23:00 crc 
kubenswrapper[4731]: E1129 07:23:00.654100 4731 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qxhf5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-5f64f6f8bb-49fl6_openstack-operators(a9cc0c44-f184-47aa-9f26-78375628a187): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 07:23:00 crc kubenswrapper[4731]: E1129 07:23:00.654626 4731 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = reading blob sha256:4fa131a1b726b2d6468d461e7d8867a2157d5671f712461d8abd126155fdf9ce: Get 
\"https://quay.io/v2/openstack-k8s-operators/kube-rbac-proxy/blobs/sha256:4fa131a1b726b2d6468d461e7d8867a2157d5671f712461d8abd126155fdf9ce\": context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Nov 29 07:23:00 crc kubenswrapper[4731]: E1129 07:23:00.654848 4731 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wn24j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-b6456fdb6-jmtwc_openstack-operators(a8ecb76b-3826-4e47-920c-e0d9e3c18e38): ErrImagePull: rpc error: code = Canceled desc = reading blob 
sha256:4fa131a1b726b2d6468d461e7d8867a2157d5671f712461d8abd126155fdf9ce: Get \"https://quay.io/v2/openstack-k8s-operators/kube-rbac-proxy/blobs/sha256:4fa131a1b726b2d6468d461e7d8867a2157d5671f712461d8abd126155fdf9ce\": context canceled" logger="UnhandledError" Nov 29 07:23:00 crc kubenswrapper[4731]: E1129 07:23:00.655408 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"]" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-49fl6" podUID="a9cc0c44-f184-47aa-9f26-78375628a187" Nov 29 07:23:00 crc kubenswrapper[4731]: E1129 07:23:00.655992 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = reading blob sha256:4fa131a1b726b2d6468d461e7d8867a2157d5671f712461d8abd126155fdf9ce: Get \\\"https://quay.io/v2/openstack-k8s-operators/kube-rbac-proxy/blobs/sha256:4fa131a1b726b2d6468d461e7d8867a2157d5671f712461d8abd126155fdf9ce\\\": context canceled\"]" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-jmtwc" podUID="a8ecb76b-3826-4e47-920c-e0d9e3c18e38" Nov 29 07:23:00 crc kubenswrapper[4731]: E1129 07:23:00.679720 4731 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Nov 29 07:23:00 crc kubenswrapper[4731]: E1129 07:23:00.679948 4731 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2cp5f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-6546668bfd-gc2xd_openstack-operators(8b165a75-263b-42e0-9521-85bf1a15dcbf): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 07:23:00 crc kubenswrapper[4731]: E1129 07:23:00.681555 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled 
desc = copying config: context canceled\"]" pod="openstack-operators/manila-operator-controller-manager-6546668bfd-gc2xd" podUID="8b165a75-263b-42e0-9521-85bf1a15dcbf" Nov 29 07:23:00 crc kubenswrapper[4731]: E1129 07:23:00.704658 4731 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Nov 29 07:23:00 crc kubenswrapper[4731]: E1129 07:23:00.704880 4731 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lsp6c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
designate-operator-controller-manager-78b4bc895b-vftx4_openstack-operators(9fef0c9a-6dd7-4034-99c0-68409ad7d697): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 07:23:00 crc kubenswrapper[4731]: E1129 07:23:00.706122 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"]" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-vftx4" podUID="9fef0c9a-6dd7-4034-99c0-68409ad7d697" Nov 29 07:23:00 crc kubenswrapper[4731]: E1129 07:23:00.934958 4731 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Nov 29 07:23:00 crc kubenswrapper[4731]: E1129 07:23:00.935333 4731 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-j2vm6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-668d9c48b9-mc6kc_openstack-operators(3d80a1f9-6d6a-41e1-acee-640ffc57a440): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 07:23:00 crc kubenswrapper[4731]: E1129 07:23:00.936420 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"]" pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-mc6kc" podUID="3d80a1f9-6d6a-41e1-acee-640ffc57a440" Nov 29 07:23:01 crc kubenswrapper[4731]: I1129 07:23:01.471659 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-xkgrl" event={"ID":"573641b3-8529-4a47-a0f6-379f2838dc27","Type":"ContainerStarted","Data":"6dc3cb6eae6e7e9b255a0947549e324a6fbcbfe0a7ea5da694cceceb45f8dbb9"} Nov 29 07:23:01 crc 
kubenswrapper[4731]: I1129 07:23:01.480187 4731 generic.go:334] "Generic (PLEG): container finished" podID="bf025757-ec7f-4848-9d2f-376f774e2a83" containerID="12b0f61af7c4c4d7c4e33af6b4bb0595cf08bb15ab29248d7f518d674ee53fba" exitCode=0 Nov 29 07:23:01 crc kubenswrapper[4731]: I1129 07:23:01.480363 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4cbd6" event={"ID":"bf025757-ec7f-4848-9d2f-376f774e2a83","Type":"ContainerDied","Data":"12b0f61af7c4c4d7c4e33af6b4bb0595cf08bb15ab29248d7f518d674ee53fba"} Nov 29 07:23:01 crc kubenswrapper[4731]: I1129 07:23:01.480411 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4cbd6" event={"ID":"bf025757-ec7f-4848-9d2f-376f774e2a83","Type":"ContainerStarted","Data":"2a8bca2b025fb02e00cb83891da2455d7408379c12cb40d2ccf667d5a69127aa"} Nov 29 07:23:01 crc kubenswrapper[4731]: I1129 07:23:01.495908 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-76c96f5dc5-hsjk8" event={"ID":"fbd333a8-95e7-47dc-8b2c-9ea2154d6fb9","Type":"ContainerStarted","Data":"c7bc842b8879b5a67635c0ab37d8b561a4274a005f73cbbe91a60dc5237cc3cf"} Nov 29 07:23:01 crc kubenswrapper[4731]: I1129 07:23:01.496179 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-76c96f5dc5-hsjk8" Nov 29 07:23:01 crc kubenswrapper[4731]: I1129 07:23:01.524041 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-c98454947-cq6kc" event={"ID":"cf83d1d1-1d33-4905-ae31-038a7afbd230","Type":"ContainerStarted","Data":"70d24446cd724ebd0cda27aa289f83b86c51c0a83207acb6dc75ed60285154c7"} Nov 29 07:23:01 crc kubenswrapper[4731]: I1129 07:23:01.560229 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-zhx77" 
event={"ID":"f7c082ea-a878-4069-9c48-96d4210f909a","Type":"ContainerStarted","Data":"f4f78a3e9e89a73567d2d7e962a11ccd4d783b72f5cd7432a19dc14b3585d152"} Nov 29 07:23:01 crc kubenswrapper[4731]: I1129 07:23:01.562270 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-76c96f5dc5-hsjk8" podStartSLOduration=41.562244683 podStartE2EDuration="41.562244683s" podCreationTimestamp="2025-11-29 07:22:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:23:01.560927355 +0000 UTC m=+1020.451288448" watchObservedRunningTime="2025-11-29 07:23:01.562244683 +0000 UTC m=+1020.452605786" Nov 29 07:23:01 crc kubenswrapper[4731]: I1129 07:23:01.579809 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-cdjtg" event={"ID":"9e9c951b-fd4b-408d-a01c-0288201c0227","Type":"ContainerStarted","Data":"8e1a738e2c44ca5577b7607ae8fb82dd4d5afabf5d7ad942ad12d3e954886348"} Nov 29 07:23:01 crc kubenswrapper[4731]: I1129 07:23:01.581075 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-cdjtg" Nov 29 07:23:01 crc kubenswrapper[4731]: I1129 07:23:01.605681 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-cdjtg" Nov 29 07:23:01 crc kubenswrapper[4731]: I1129 07:23:01.606913 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-xs92w" event={"ID":"77a02080-6b69-441d-a6a3-ac95c4c697fe","Type":"ContainerStarted","Data":"d7123620ab0004852ce5abbd2be8686553903aa3790ceaf4638932f9feebbccf"} Nov 29 07:23:01 crc kubenswrapper[4731]: I1129 07:23:01.616296 4731 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-cdjtg" podStartSLOduration=2.49597903 podStartE2EDuration="42.616275032s" podCreationTimestamp="2025-11-29 07:22:19 +0000 UTC" firstStartedPulling="2025-11-29 07:22:20.86634021 +0000 UTC m=+979.756701313" lastFinishedPulling="2025-11-29 07:23:00.986636222 +0000 UTC m=+1019.876997315" observedRunningTime="2025-11-29 07:23:01.615083547 +0000 UTC m=+1020.505444650" watchObservedRunningTime="2025-11-29 07:23:01.616275032 +0000 UTC m=+1020.506636135" Nov 29 07:23:01 crc kubenswrapper[4731]: I1129 07:23:01.638178 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd48hnnl" event={"ID":"530e0034-afa1-42a5-ae59-1f8eeb34aef0","Type":"ContainerStarted","Data":"a4ce32ca9bd5e569580cbe756401f33a7a377185a2b7b33931ec9471010c18bc"} Nov 29 07:23:02 crc kubenswrapper[4731]: I1129 07:23:02.646896 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-zvvpp" event={"ID":"9c9a7893-7770-49ae-8a0f-44168941a55b","Type":"ContainerStarted","Data":"8b6c5ddda9835375653a0dd9488784a601bc52384a19412ee3679f74bdb976b6"} Nov 29 07:23:02 crc kubenswrapper[4731]: I1129 07:23:02.648827 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-zhx77" event={"ID":"f7c082ea-a878-4069-9c48-96d4210f909a","Type":"ContainerStarted","Data":"8f95e2ac3b5f9a5b1140a48dfb5f54d01420db0c1c3658937ecb0719e1a8f0dd"} Nov 29 07:23:02 crc kubenswrapper[4731]: I1129 07:23:02.650781 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-tbxj8" event={"ID":"9d6f5aa5-06c2-4196-a217-68aa690b6e7f","Type":"ContainerStarted","Data":"f5fafd85104ec16e48c2b76624953fc4463715004a2bd1315bf981055803f8f3"} Nov 29 07:23:02 crc kubenswrapper[4731]: I1129 
07:23:02.652590 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-kmj72" event={"ID":"a6c4aff1-120b-4136-851e-469ebfc6a9ea","Type":"ContainerStarted","Data":"143b4044e4105e77b1dbe678299da0a83515978c21f3f1b5b35f819099cf980a"} Nov 29 07:23:02 crc kubenswrapper[4731]: I1129 07:23:02.654508 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-k89xw" event={"ID":"eff57485-877d-4e3e-95a2-ffc9c5ac4f0b","Type":"ContainerStarted","Data":"b652aa0971785e66a6386789cc0c7d3ce45676beb888a111fd347166659ff75d"} Nov 29 07:23:02 crc kubenswrapper[4731]: I1129 07:23:02.656344 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-xkgrl" event={"ID":"573641b3-8529-4a47-a0f6-379f2838dc27","Type":"ContainerStarted","Data":"0b924907e16ec9e343249ff800e770a225cf8d66871ad667d914c344bf3ee36e"} Nov 29 07:23:02 crc kubenswrapper[4731]: I1129 07:23:02.657911 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-dlncb" event={"ID":"a35b9e52-221d-4c25-82d9-46fdd8d6e5ea","Type":"ContainerStarted","Data":"5210bd542ac91f53171fb2b5f8f8e4a41b4db79197d1553d82eef0582bf89e00"} Nov 29 07:23:03 crc kubenswrapper[4731]: I1129 07:23:03.671354 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-97dng" event={"ID":"9292e72f-2b6c-4a88-9a75-e8f55cda383a","Type":"ContainerStarted","Data":"51a58c55f4f6c4fd6e1bde1140e35a74ed60dd688ec553f67cf461dd6103a06c"} Nov 29 07:23:03 crc kubenswrapper[4731]: I1129 07:23:03.672019 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-97dng" Nov 29 07:23:03 crc kubenswrapper[4731]: I1129 07:23:03.674451 4731 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd48hnnl" event={"ID":"530e0034-afa1-42a5-ae59-1f8eeb34aef0","Type":"ContainerStarted","Data":"386a28729df35d06b21e58bd10d1ba3f88399a2327ac0abfd0ea26fc271eb95c"} Nov 29 07:23:03 crc kubenswrapper[4731]: I1129 07:23:03.675444 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd48hnnl" Nov 29 07:23:03 crc kubenswrapper[4731]: I1129 07:23:03.678655 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-97dng" Nov 29 07:23:03 crc kubenswrapper[4731]: I1129 07:23:03.687931 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-c98454947-cq6kc" event={"ID":"cf83d1d1-1d33-4905-ae31-038a7afbd230","Type":"ContainerStarted","Data":"b14173cb76a691c4794622b2c787d8ef8586acfbf90affc3e912381943f1dfcc"} Nov 29 07:23:03 crc kubenswrapper[4731]: I1129 07:23:03.688738 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-c98454947-cq6kc" Nov 29 07:23:03 crc kubenswrapper[4731]: I1129 07:23:03.702001 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-xs92w" event={"ID":"77a02080-6b69-441d-a6a3-ac95c4c697fe","Type":"ContainerStarted","Data":"3f6fb20ad3bc02d310ba260b2c3c4f9a6114e155466b420c31eaa8af18c91ae4"} Nov 29 07:23:03 crc kubenswrapper[4731]: I1129 07:23:03.703245 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-57548d458d-xs92w" Nov 29 07:23:03 crc kubenswrapper[4731]: I1129 07:23:03.719696 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-hdhqj" event={"ID":"7664fa66-a9e5-4617-88c4-d4bdeb5f2ea9","Type":"ContainerStarted","Data":"5331465127a35864e87fad9cb0c4777f7d224919494364c3c575ff83a1153332"} Nov 29 07:23:03 crc kubenswrapper[4731]: I1129 07:23:03.721097 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-hdhqj" Nov 29 07:23:03 crc kubenswrapper[4731]: I1129 07:23:03.730470 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-hdhqj" Nov 29 07:23:03 crc kubenswrapper[4731]: I1129 07:23:03.732856 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-97dng" podStartSLOduration=4.406162568 podStartE2EDuration="44.732828878s" podCreationTimestamp="2025-11-29 07:22:19 +0000 UTC" firstStartedPulling="2025-11-29 07:22:20.794964085 +0000 UTC m=+979.685325188" lastFinishedPulling="2025-11-29 07:23:01.121630395 +0000 UTC m=+1020.011991498" observedRunningTime="2025-11-29 07:23:03.726496643 +0000 UTC m=+1022.616857746" watchObservedRunningTime="2025-11-29 07:23:03.732828878 +0000 UTC m=+1022.623189971" Nov 29 07:23:03 crc kubenswrapper[4731]: I1129 07:23:03.735689 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-6mxrn" event={"ID":"99aed1ca-e7d9-409c-91fa-439e52342da8","Type":"ContainerStarted","Data":"3c43a6023eaddf4b785a36a701d5d91967fe58e311adfbc95da10b482f7de46f"} Nov 29 07:23:03 crc kubenswrapper[4731]: I1129 07:23:03.736295 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-998648c74-6mxrn" Nov 29 07:23:03 crc kubenswrapper[4731]: I1129 07:23:03.740287 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack-operators/octavia-operator-controller-manager-998648c74-6mxrn" Nov 29 07:23:03 crc kubenswrapper[4731]: I1129 07:23:03.744317 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-fnkt5" event={"ID":"5e2ef3fa-22be-4ac5-9cce-09227be5538b","Type":"ContainerStarted","Data":"5f76c09135cb651992d5849a85d77b84214b77d665f37f45c8e2ef3245085a34"} Nov 29 07:23:03 crc kubenswrapper[4731]: I1129 07:23:03.744599 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5854674fcc-fnkt5" Nov 29 07:23:03 crc kubenswrapper[4731]: I1129 07:23:03.769999 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-mc6kc" event={"ID":"3d80a1f9-6d6a-41e1-acee-640ffc57a440","Type":"ContainerStarted","Data":"889640c6149370442b6b74e2363fd06330e843618e02c5139d708287483f044a"} Nov 29 07:23:03 crc kubenswrapper[4731]: I1129 07:23:03.801168 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-kc7tq" event={"ID":"8ed7bfa3-ce11-490d-80f4-acd9ca51f698","Type":"ContainerStarted","Data":"8eee135cd570b0964aa4a569678831a2ecfb228262a132e399fb5560d4379296"} Nov 29 07:23:03 crc kubenswrapper[4731]: I1129 07:23:03.801274 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-dlncb" Nov 29 07:23:03 crc kubenswrapper[4731]: I1129 07:23:03.802425 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-k89xw" Nov 29 07:23:03 crc kubenswrapper[4731]: I1129 07:23:03.802685 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-78f8948974-zhx77" Nov 29 07:23:03 crc 
kubenswrapper[4731]: I1129 07:23:03.802769 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-xkgrl" Nov 29 07:23:03 crc kubenswrapper[4731]: I1129 07:23:03.803257 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-c98454947-cq6kc" podStartSLOduration=7.461575347 podStartE2EDuration="44.803236595s" podCreationTimestamp="2025-11-29 07:22:19 +0000 UTC" firstStartedPulling="2025-11-29 07:22:21.827279525 +0000 UTC m=+980.717640628" lastFinishedPulling="2025-11-29 07:22:59.168940773 +0000 UTC m=+1018.059301876" observedRunningTime="2025-11-29 07:23:03.766253044 +0000 UTC m=+1022.656614147" watchObservedRunningTime="2025-11-29 07:23:03.803236595 +0000 UTC m=+1022.693597708" Nov 29 07:23:03 crc kubenswrapper[4731]: I1129 07:23:03.805317 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-dlncb" Nov 29 07:23:03 crc kubenswrapper[4731]: I1129 07:23:03.844258 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-hdhqj" podStartSLOduration=5.137631013 podStartE2EDuration="44.844231362s" podCreationTimestamp="2025-11-29 07:22:19 +0000 UTC" firstStartedPulling="2025-11-29 07:22:21.346432992 +0000 UTC m=+980.236794095" lastFinishedPulling="2025-11-29 07:23:01.053033341 +0000 UTC m=+1019.943394444" observedRunningTime="2025-11-29 07:23:03.839344649 +0000 UTC m=+1022.729705752" watchObservedRunningTime="2025-11-29 07:23:03.844231362 +0000 UTC m=+1022.734592465" Nov 29 07:23:03 crc kubenswrapper[4731]: I1129 07:23:03.914123 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd48hnnl" podStartSLOduration=23.220326633 
podStartE2EDuration="44.914098252s" podCreationTimestamp="2025-11-29 07:22:19 +0000 UTC" firstStartedPulling="2025-11-29 07:22:37.593050267 +0000 UTC m=+996.483411380" lastFinishedPulling="2025-11-29 07:22:59.286821896 +0000 UTC m=+1018.177182999" observedRunningTime="2025-11-29 07:23:03.88970358 +0000 UTC m=+1022.780064683" watchObservedRunningTime="2025-11-29 07:23:03.914098252 +0000 UTC m=+1022.804459355" Nov 29 07:23:03 crc kubenswrapper[4731]: I1129 07:23:03.939098 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-57548d458d-xs92w" podStartSLOduration=23.35235975 podStartE2EDuration="44.939072982s" podCreationTimestamp="2025-11-29 07:22:19 +0000 UTC" firstStartedPulling="2025-11-29 07:22:37.582216321 +0000 UTC m=+996.472577424" lastFinishedPulling="2025-11-29 07:22:59.168929553 +0000 UTC m=+1018.059290656" observedRunningTime="2025-11-29 07:23:03.932613473 +0000 UTC m=+1022.822974576" watchObservedRunningTime="2025-11-29 07:23:03.939072982 +0000 UTC m=+1022.829434085" Nov 29 07:23:04 crc kubenswrapper[4731]: I1129 07:23:04.005007 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-998648c74-6mxrn" podStartSLOduration=5.4905077890000005 podStartE2EDuration="45.004980137s" podCreationTimestamp="2025-11-29 07:22:19 +0000 UTC" firstStartedPulling="2025-11-29 07:22:21.533703371 +0000 UTC m=+980.424064474" lastFinishedPulling="2025-11-29 07:23:01.048175719 +0000 UTC m=+1019.938536822" observedRunningTime="2025-11-29 07:23:03.955231264 +0000 UTC m=+1022.845592387" watchObservedRunningTime="2025-11-29 07:23:04.004980137 +0000 UTC m=+1022.895341240" Nov 29 07:23:04 crc kubenswrapper[4731]: I1129 07:23:04.006502 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-kc7tq" podStartSLOduration=5.2648456679999995 
podStartE2EDuration="45.006495201s" podCreationTimestamp="2025-11-29 07:22:19 +0000 UTC" firstStartedPulling="2025-11-29 07:22:21.337209502 +0000 UTC m=+980.227570605" lastFinishedPulling="2025-11-29 07:23:01.078859035 +0000 UTC m=+1019.969220138" observedRunningTime="2025-11-29 07:23:03.994067518 +0000 UTC m=+1022.884428641" watchObservedRunningTime="2025-11-29 07:23:04.006495201 +0000 UTC m=+1022.896856304" Nov 29 07:23:04 crc kubenswrapper[4731]: I1129 07:23:04.048427 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-xkgrl" podStartSLOduration=23.917452765 podStartE2EDuration="45.048403285s" podCreationTimestamp="2025-11-29 07:22:19 +0000 UTC" firstStartedPulling="2025-11-29 07:22:21.680060976 +0000 UTC m=+980.570422079" lastFinishedPulling="2025-11-29 07:22:42.811011496 +0000 UTC m=+1001.701372599" observedRunningTime="2025-11-29 07:23:04.043811031 +0000 UTC m=+1022.934172134" watchObservedRunningTime="2025-11-29 07:23:04.048403285 +0000 UTC m=+1022.938764388" Nov 29 07:23:04 crc kubenswrapper[4731]: I1129 07:23:04.111580 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-dlncb" podStartSLOduration=5.50287609 podStartE2EDuration="45.111536169s" podCreationTimestamp="2025-11-29 07:22:19 +0000 UTC" firstStartedPulling="2025-11-29 07:22:21.354678372 +0000 UTC m=+980.245039475" lastFinishedPulling="2025-11-29 07:23:00.963338451 +0000 UTC m=+1019.853699554" observedRunningTime="2025-11-29 07:23:04.10676645 +0000 UTC m=+1022.997127553" watchObservedRunningTime="2025-11-29 07:23:04.111536169 +0000 UTC m=+1023.001897262" Nov 29 07:23:04 crc kubenswrapper[4731]: I1129 07:23:04.189472 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-tbxj8" podStartSLOduration=5.9257725610000005 
podStartE2EDuration="45.189434104s" podCreationTimestamp="2025-11-29 07:22:19 +0000 UTC" firstStartedPulling="2025-11-29 07:22:21.809042093 +0000 UTC m=+980.699403196" lastFinishedPulling="2025-11-29 07:23:01.072703636 +0000 UTC m=+1019.963064739" observedRunningTime="2025-11-29 07:23:04.16054782 +0000 UTC m=+1023.050908933" watchObservedRunningTime="2025-11-29 07:23:04.189434104 +0000 UTC m=+1023.079795207" Nov 29 07:23:04 crc kubenswrapper[4731]: I1129 07:23:04.241289 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5854674fcc-fnkt5" podStartSLOduration=6.083948811 podStartE2EDuration="45.241262778s" podCreationTimestamp="2025-11-29 07:22:19 +0000 UTC" firstStartedPulling="2025-11-29 07:22:21.826923715 +0000 UTC m=+980.717284818" lastFinishedPulling="2025-11-29 07:23:00.984237682 +0000 UTC m=+1019.874598785" observedRunningTime="2025-11-29 07:23:04.232598195 +0000 UTC m=+1023.122959298" watchObservedRunningTime="2025-11-29 07:23:04.241262778 +0000 UTC m=+1023.131623881" Nov 29 07:23:04 crc kubenswrapper[4731]: I1129 07:23:04.257670 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-k89xw" podStartSLOduration=5.586325217 podStartE2EDuration="45.257642986s" podCreationTimestamp="2025-11-29 07:22:19 +0000 UTC" firstStartedPulling="2025-11-29 07:22:21.37718388 +0000 UTC m=+980.267544983" lastFinishedPulling="2025-11-29 07:23:01.048501649 +0000 UTC m=+1019.938862752" observedRunningTime="2025-11-29 07:23:04.256970927 +0000 UTC m=+1023.147332050" watchObservedRunningTime="2025-11-29 07:23:04.257642986 +0000 UTC m=+1023.148004079" Nov 29 07:23:04 crc kubenswrapper[4731]: I1129 07:23:04.301174 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-zvvpp" podStartSLOduration=5.946764224 
podStartE2EDuration="45.301148407s" podCreationTimestamp="2025-11-29 07:22:19 +0000 UTC" firstStartedPulling="2025-11-29 07:22:21.649816162 +0000 UTC m=+980.540177265" lastFinishedPulling="2025-11-29 07:23:01.004200345 +0000 UTC m=+1019.894561448" observedRunningTime="2025-11-29 07:23:04.291075433 +0000 UTC m=+1023.181436536" watchObservedRunningTime="2025-11-29 07:23:04.301148407 +0000 UTC m=+1023.191509500" Nov 29 07:23:04 crc kubenswrapper[4731]: I1129 07:23:04.386997 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-78f8948974-zhx77" podStartSLOduration=7.897279753 podStartE2EDuration="45.386975834s" podCreationTimestamp="2025-11-29 07:22:19 +0000 UTC" firstStartedPulling="2025-11-29 07:22:21.67919135 +0000 UTC m=+980.569552453" lastFinishedPulling="2025-11-29 07:22:59.168887431 +0000 UTC m=+1018.059248534" observedRunningTime="2025-11-29 07:23:04.385302675 +0000 UTC m=+1023.275663778" watchObservedRunningTime="2025-11-29 07:23:04.386975834 +0000 UTC m=+1023.277336937" Nov 29 07:23:04 crc kubenswrapper[4731]: I1129 07:23:04.396478 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-kmj72" podStartSLOduration=5.894392635 podStartE2EDuration="45.396461011s" podCreationTimestamp="2025-11-29 07:22:19 +0000 UTC" firstStartedPulling="2025-11-29 07:22:21.500832361 +0000 UTC m=+980.391193464" lastFinishedPulling="2025-11-29 07:23:01.002900737 +0000 UTC m=+1019.893261840" observedRunningTime="2025-11-29 07:23:04.361474569 +0000 UTC m=+1023.251835672" watchObservedRunningTime="2025-11-29 07:23:04.396461011 +0000 UTC m=+1023.286822114" Nov 29 07:23:04 crc kubenswrapper[4731]: I1129 07:23:04.808980 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-vftx4" 
event={"ID":"9fef0c9a-6dd7-4034-99c0-68409ad7d697","Type":"ContainerStarted","Data":"28f0089b3bc1009f88b84c879467bd78d1941e7e250ec59ee4b33800cb59b4c9"} Nov 29 07:23:04 crc kubenswrapper[4731]: I1129 07:23:04.809045 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-vftx4" event={"ID":"9fef0c9a-6dd7-4034-99c0-68409ad7d697","Type":"ContainerStarted","Data":"eee15f0beb379417d73b0b84d2a48af0a79095be71f4f9ef3c2fa51215defe28"} Nov 29 07:23:04 crc kubenswrapper[4731]: I1129 07:23:04.809834 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-vftx4" Nov 29 07:23:04 crc kubenswrapper[4731]: I1129 07:23:04.812110 4731 generic.go:334] "Generic (PLEG): container finished" podID="bf025757-ec7f-4848-9d2f-376f774e2a83" containerID="24db485a725468c3bac90c03f076f3a3027f6a044aa4bd4bb81d207ef172c531" exitCode=0 Nov 29 07:23:04 crc kubenswrapper[4731]: I1129 07:23:04.812175 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4cbd6" event={"ID":"bf025757-ec7f-4848-9d2f-376f774e2a83","Type":"ContainerDied","Data":"24db485a725468c3bac90c03f076f3a3027f6a044aa4bd4bb81d207ef172c531"} Nov 29 07:23:04 crc kubenswrapper[4731]: I1129 07:23:04.814912 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-jmtwc" event={"ID":"a8ecb76b-3826-4e47-920c-e0d9e3c18e38","Type":"ContainerStarted","Data":"e26daaf84137d1541d2f2bd9d412d58e131e0d60e893d6338dc8694269b1b0fe"} Nov 29 07:23:04 crc kubenswrapper[4731]: I1129 07:23:04.814940 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-jmtwc" event={"ID":"a8ecb76b-3826-4e47-920c-e0d9e3c18e38","Type":"ContainerStarted","Data":"113f7ce8bfe52cb1ea50bd8d4f0767471a943eef87dec2624ff97394f1b96654"} Nov 29 07:23:04 
crc kubenswrapper[4731]: I1129 07:23:04.815364 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-jmtwc" Nov 29 07:23:04 crc kubenswrapper[4731]: I1129 07:23:04.817727 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-mc6kc" event={"ID":"3d80a1f9-6d6a-41e1-acee-640ffc57a440","Type":"ContainerStarted","Data":"0c4bbadc6c8b1ea0a211774a3db0638e38969947bd8d0af35a5bdfe2224202f8"} Nov 29 07:23:04 crc kubenswrapper[4731]: I1129 07:23:04.817905 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-mc6kc" Nov 29 07:23:04 crc kubenswrapper[4731]: I1129 07:23:04.821361 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-6546668bfd-gc2xd" event={"ID":"8b165a75-263b-42e0-9521-85bf1a15dcbf","Type":"ContainerStarted","Data":"dad8d554f3f282a60be58b54b25051d2519710fc39bb6101e3b7751fe0e20cfc"} Nov 29 07:23:04 crc kubenswrapper[4731]: I1129 07:23:04.821397 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-6546668bfd-gc2xd" event={"ID":"8b165a75-263b-42e0-9521-85bf1a15dcbf","Type":"ContainerStarted","Data":"053dfcab3b9eb0dca76c6cee455fd68135dd5d4c36d7d59c1969a4eeacc6d304"} Nov 29 07:23:04 crc kubenswrapper[4731]: I1129 07:23:04.821612 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-6546668bfd-gc2xd" Nov 29 07:23:04 crc kubenswrapper[4731]: I1129 07:23:04.824050 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-49fl6" event={"ID":"a9cc0c44-f184-47aa-9f26-78375628a187","Type":"ContainerStarted","Data":"78f9e200b6c0ab63ac7954b1a02d807e5bd2554f203bb6ac0ab7c932b9e0bb0b"} 
Nov 29 07:23:04 crc kubenswrapper[4731]: I1129 07:23:04.825403 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-kc7tq" Nov 29 07:23:04 crc kubenswrapper[4731]: I1129 07:23:04.825443 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-49fl6" event={"ID":"a9cc0c44-f184-47aa-9f26-78375628a187","Type":"ContainerStarted","Data":"e286d94284ff1fbf3310f9d74c42fd5514f0cfc0b3516358de2da7912c25d116"} Nov 29 07:23:04 crc kubenswrapper[4731]: I1129 07:23:04.828421 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5854674fcc-fnkt5" Nov 29 07:23:04 crc kubenswrapper[4731]: I1129 07:23:04.828817 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-k89xw" Nov 29 07:23:04 crc kubenswrapper[4731]: I1129 07:23:04.830275 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-kc7tq" Nov 29 07:23:04 crc kubenswrapper[4731]: I1129 07:23:04.834687 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-vftx4" podStartSLOduration=3.59712085 podStartE2EDuration="45.834662039s" podCreationTimestamp="2025-11-29 07:22:19 +0000 UTC" firstStartedPulling="2025-11-29 07:22:21.009167511 +0000 UTC m=+979.899528604" lastFinishedPulling="2025-11-29 07:23:03.24670869 +0000 UTC m=+1022.137069793" observedRunningTime="2025-11-29 07:23:04.833074143 +0000 UTC m=+1023.723435266" watchObservedRunningTime="2025-11-29 07:23:04.834662039 +0000 UTC m=+1023.725023142" Nov 29 07:23:04 crc kubenswrapper[4731]: I1129 07:23:04.883985 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-49fl6" podStartSLOduration=3.76589967 podStartE2EDuration="45.883955249s" podCreationTimestamp="2025-11-29 07:22:19 +0000 UTC" firstStartedPulling="2025-11-29 07:22:21.13512326 +0000 UTC m=+980.025484363" lastFinishedPulling="2025-11-29 07:23:03.253178839 +0000 UTC m=+1022.143539942" observedRunningTime="2025-11-29 07:23:04.880324133 +0000 UTC m=+1023.770685246" watchObservedRunningTime="2025-11-29 07:23:04.883955249 +0000 UTC m=+1023.774316352" Nov 29 07:23:04 crc kubenswrapper[4731]: I1129 07:23:04.986729 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-6546668bfd-gc2xd" podStartSLOduration=3.539641772 podStartE2EDuration="45.98669864s" podCreationTimestamp="2025-11-29 07:22:19 +0000 UTC" firstStartedPulling="2025-11-29 07:22:21.121713688 +0000 UTC m=+980.012074791" lastFinishedPulling="2025-11-29 07:23:03.568770556 +0000 UTC m=+1022.459131659" observedRunningTime="2025-11-29 07:23:04.984058682 +0000 UTC m=+1023.874419785" watchObservedRunningTime="2025-11-29 07:23:04.98669864 +0000 UTC m=+1023.877059743" Nov 29 07:23:05 crc kubenswrapper[4731]: I1129 07:23:05.038812 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-mc6kc" podStartSLOduration=3.806687601 podStartE2EDuration="46.038784711s" podCreationTimestamp="2025-11-29 07:22:19 +0000 UTC" firstStartedPulling="2025-11-29 07:22:20.99715748 +0000 UTC m=+979.887518583" lastFinishedPulling="2025-11-29 07:23:03.22925459 +0000 UTC m=+1022.119615693" observedRunningTime="2025-11-29 07:23:05.015962144 +0000 UTC m=+1023.906323267" watchObservedRunningTime="2025-11-29 07:23:05.038784711 +0000 UTC m=+1023.929145834" Nov 29 07:23:05 crc kubenswrapper[4731]: I1129 07:23:05.043576 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-jmtwc" podStartSLOduration=4.277732969 podStartE2EDuration="46.04353238s" podCreationTimestamp="2025-11-29 07:22:19 +0000 UTC" firstStartedPulling="2025-11-29 07:22:21.485896595 +0000 UTC m=+980.376257698" lastFinishedPulling="2025-11-29 07:23:03.251696006 +0000 UTC m=+1022.142057109" observedRunningTime="2025-11-29 07:23:05.037126722 +0000 UTC m=+1023.927487835" watchObservedRunningTime="2025-11-29 07:23:05.04353238 +0000 UTC m=+1023.933893483" Nov 29 07:23:05 crc kubenswrapper[4731]: I1129 07:23:05.670341 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-57548d458d-xs92w" Nov 29 07:23:05 crc kubenswrapper[4731]: I1129 07:23:05.832959 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4cbd6" event={"ID":"bf025757-ec7f-4848-9d2f-376f774e2a83","Type":"ContainerStarted","Data":"7bbe1fca2dc5e6fbeda9e799789bdf1cc91caf8a919fc9983b3d89d1bb5ff8cc"} Nov 29 07:23:05 crc kubenswrapper[4731]: I1129 07:23:05.836552 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-49fl6" Nov 29 07:23:05 crc kubenswrapper[4731]: I1129 07:23:05.843358 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd48hnnl" Nov 29 07:23:05 crc kubenswrapper[4731]: I1129 07:23:05.862748 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4cbd6" podStartSLOduration=18.116598894 podStartE2EDuration="21.862725885s" podCreationTimestamp="2025-11-29 07:22:44 +0000 UTC" firstStartedPulling="2025-11-29 07:23:01.487207212 +0000 UTC m=+1020.377568315" lastFinishedPulling="2025-11-29 07:23:05.233334203 +0000 UTC m=+1024.123695306" observedRunningTime="2025-11-29 
07:23:05.858658276 +0000 UTC m=+1024.749019399" watchObservedRunningTime="2025-11-29 07:23:05.862725885 +0000 UTC m=+1024.753086988" Nov 29 07:23:06 crc kubenswrapper[4731]: I1129 07:23:06.640981 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-76c96f5dc5-hsjk8" Nov 29 07:23:09 crc kubenswrapper[4731]: I1129 07:23:09.604197 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-mc6kc" Nov 29 07:23:09 crc kubenswrapper[4731]: I1129 07:23:09.656118 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-vftx4" Nov 29 07:23:09 crc kubenswrapper[4731]: E1129 07:23:09.811628 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9wkq4" podUID="c448f643-f2f4-403d-b235-24ac74755cdf" Nov 29 07:23:10 crc kubenswrapper[4731]: I1129 07:23:10.044870 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-49fl6" Nov 29 07:23:10 crc kubenswrapper[4731]: I1129 07:23:10.053218 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-6546668bfd-gc2xd" Nov 29 07:23:10 crc kubenswrapper[4731]: I1129 07:23:10.293872 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-xkgrl" Nov 29 07:23:10 crc kubenswrapper[4731]: I1129 07:23:10.342290 4731 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-kmj72" Nov 29 07:23:10 crc kubenswrapper[4731]: I1129 07:23:10.344637 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-kmj72" Nov 29 07:23:10 crc kubenswrapper[4731]: I1129 07:23:10.375654 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-jmtwc" Nov 29 07:23:10 crc kubenswrapper[4731]: I1129 07:23:10.476354 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-78f8948974-zhx77" Nov 29 07:23:10 crc kubenswrapper[4731]: I1129 07:23:10.574309 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-zvvpp" Nov 29 07:23:10 crc kubenswrapper[4731]: I1129 07:23:10.578351 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-zvvpp" Nov 29 07:23:10 crc kubenswrapper[4731]: I1129 07:23:10.611946 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-c98454947-cq6kc" Nov 29 07:23:10 crc kubenswrapper[4731]: I1129 07:23:10.742956 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-tbxj8" Nov 29 07:23:10 crc kubenswrapper[4731]: I1129 07:23:10.750423 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-tbxj8" Nov 29 07:23:15 crc kubenswrapper[4731]: I1129 07:23:15.086471 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/community-operators-4cbd6" Nov 29 07:23:15 crc kubenswrapper[4731]: I1129 07:23:15.087080 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4cbd6" Nov 29 07:23:15 crc kubenswrapper[4731]: I1129 07:23:15.134734 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4cbd6" Nov 29 07:23:15 crc kubenswrapper[4731]: I1129 07:23:15.987136 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4cbd6" Nov 29 07:23:16 crc kubenswrapper[4731]: I1129 07:23:16.050109 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4cbd6"] Nov 29 07:23:17 crc kubenswrapper[4731]: I1129 07:23:17.948743 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4cbd6" podUID="bf025757-ec7f-4848-9d2f-376f774e2a83" containerName="registry-server" containerID="cri-o://7bbe1fca2dc5e6fbeda9e799789bdf1cc91caf8a919fc9983b3d89d1bb5ff8cc" gracePeriod=2 Nov 29 07:23:21 crc kubenswrapper[4731]: I1129 07:23:21.984988 4731 generic.go:334] "Generic (PLEG): container finished" podID="bf025757-ec7f-4848-9d2f-376f774e2a83" containerID="7bbe1fca2dc5e6fbeda9e799789bdf1cc91caf8a919fc9983b3d89d1bb5ff8cc" exitCode=0 Nov 29 07:23:21 crc kubenswrapper[4731]: I1129 07:23:21.985321 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4cbd6" event={"ID":"bf025757-ec7f-4848-9d2f-376f774e2a83","Type":"ContainerDied","Data":"7bbe1fca2dc5e6fbeda9e799789bdf1cc91caf8a919fc9983b3d89d1bb5ff8cc"} Nov 29 07:23:22 crc kubenswrapper[4731]: I1129 07:23:22.285636 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4cbd6" Nov 29 07:23:22 crc kubenswrapper[4731]: I1129 07:23:22.313625 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf025757-ec7f-4848-9d2f-376f774e2a83-catalog-content\") pod \"bf025757-ec7f-4848-9d2f-376f774e2a83\" (UID: \"bf025757-ec7f-4848-9d2f-376f774e2a83\") " Nov 29 07:23:22 crc kubenswrapper[4731]: I1129 07:23:22.313888 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kkpms\" (UniqueName: \"kubernetes.io/projected/bf025757-ec7f-4848-9d2f-376f774e2a83-kube-api-access-kkpms\") pod \"bf025757-ec7f-4848-9d2f-376f774e2a83\" (UID: \"bf025757-ec7f-4848-9d2f-376f774e2a83\") " Nov 29 07:23:22 crc kubenswrapper[4731]: I1129 07:23:22.314001 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf025757-ec7f-4848-9d2f-376f774e2a83-utilities\") pod \"bf025757-ec7f-4848-9d2f-376f774e2a83\" (UID: \"bf025757-ec7f-4848-9d2f-376f774e2a83\") " Nov 29 07:23:22 crc kubenswrapper[4731]: I1129 07:23:22.315929 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf025757-ec7f-4848-9d2f-376f774e2a83-utilities" (OuterVolumeSpecName: "utilities") pod "bf025757-ec7f-4848-9d2f-376f774e2a83" (UID: "bf025757-ec7f-4848-9d2f-376f774e2a83"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:23:22 crc kubenswrapper[4731]: I1129 07:23:22.323589 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf025757-ec7f-4848-9d2f-376f774e2a83-kube-api-access-kkpms" (OuterVolumeSpecName: "kube-api-access-kkpms") pod "bf025757-ec7f-4848-9d2f-376f774e2a83" (UID: "bf025757-ec7f-4848-9d2f-376f774e2a83"). InnerVolumeSpecName "kube-api-access-kkpms". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:23:22 crc kubenswrapper[4731]: I1129 07:23:22.373661 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf025757-ec7f-4848-9d2f-376f774e2a83-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bf025757-ec7f-4848-9d2f-376f774e2a83" (UID: "bf025757-ec7f-4848-9d2f-376f774e2a83"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:23:22 crc kubenswrapper[4731]: I1129 07:23:22.415731 4731 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf025757-ec7f-4848-9d2f-376f774e2a83-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 07:23:22 crc kubenswrapper[4731]: I1129 07:23:22.415772 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kkpms\" (UniqueName: \"kubernetes.io/projected/bf025757-ec7f-4848-9d2f-376f774e2a83-kube-api-access-kkpms\") on node \"crc\" DevicePath \"\"" Nov 29 07:23:22 crc kubenswrapper[4731]: I1129 07:23:22.415789 4731 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf025757-ec7f-4848-9d2f-376f774e2a83-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 07:23:22 crc kubenswrapper[4731]: I1129 07:23:22.997147 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4cbd6" event={"ID":"bf025757-ec7f-4848-9d2f-376f774e2a83","Type":"ContainerDied","Data":"2a8bca2b025fb02e00cb83891da2455d7408379c12cb40d2ccf667d5a69127aa"} Nov 29 07:23:22 crc kubenswrapper[4731]: I1129 07:23:22.997230 4731 scope.go:117] "RemoveContainer" containerID="7bbe1fca2dc5e6fbeda9e799789bdf1cc91caf8a919fc9983b3d89d1bb5ff8cc" Nov 29 07:23:22 crc kubenswrapper[4731]: I1129 07:23:22.997344 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4cbd6" Nov 29 07:23:23 crc kubenswrapper[4731]: I1129 07:23:23.015342 4731 scope.go:117] "RemoveContainer" containerID="24db485a725468c3bac90c03f076f3a3027f6a044aa4bd4bb81d207ef172c531" Nov 29 07:23:23 crc kubenswrapper[4731]: I1129 07:23:23.042856 4731 scope.go:117] "RemoveContainer" containerID="12b0f61af7c4c4d7c4e33af6b4bb0595cf08bb15ab29248d7f518d674ee53fba" Nov 29 07:23:23 crc kubenswrapper[4731]: I1129 07:23:23.050556 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4cbd6"] Nov 29 07:23:23 crc kubenswrapper[4731]: I1129 07:23:23.056546 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4cbd6"] Nov 29 07:23:23 crc kubenswrapper[4731]: I1129 07:23:23.824743 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf025757-ec7f-4848-9d2f-376f774e2a83" path="/var/lib/kubelet/pods/bf025757-ec7f-4848-9d2f-376f774e2a83/volumes" Nov 29 07:23:25 crc kubenswrapper[4731]: I1129 07:23:25.014794 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9wkq4" event={"ID":"c448f643-f2f4-403d-b235-24ac74755cdf","Type":"ContainerStarted","Data":"1862be8acdcd150bccb168b6eeae360a68b49ea4d29006d132618879624053f2"} Nov 29 07:23:25 crc kubenswrapper[4731]: I1129 07:23:25.039782 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9wkq4" podStartSLOduration=2.354159205 podStartE2EDuration="1m5.039748738s" podCreationTimestamp="2025-11-29 07:22:20 +0000 UTC" firstStartedPulling="2025-11-29 07:22:21.72881067 +0000 UTC m=+980.619171773" lastFinishedPulling="2025-11-29 07:23:24.414400193 +0000 UTC m=+1043.304761306" observedRunningTime="2025-11-29 07:23:25.035752471 +0000 UTC m=+1043.926113594" watchObservedRunningTime="2025-11-29 
07:23:25.039748738 +0000 UTC m=+1043.930109831" Nov 29 07:23:39 crc kubenswrapper[4731]: I1129 07:23:39.625295 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-7vm8q"] Nov 29 07:23:39 crc kubenswrapper[4731]: E1129 07:23:39.626382 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf025757-ec7f-4848-9d2f-376f774e2a83" containerName="extract-content" Nov 29 07:23:39 crc kubenswrapper[4731]: I1129 07:23:39.626401 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf025757-ec7f-4848-9d2f-376f774e2a83" containerName="extract-content" Nov 29 07:23:39 crc kubenswrapper[4731]: E1129 07:23:39.626422 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf025757-ec7f-4848-9d2f-376f774e2a83" containerName="registry-server" Nov 29 07:23:39 crc kubenswrapper[4731]: I1129 07:23:39.626433 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf025757-ec7f-4848-9d2f-376f774e2a83" containerName="registry-server" Nov 29 07:23:39 crc kubenswrapper[4731]: E1129 07:23:39.626473 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf025757-ec7f-4848-9d2f-376f774e2a83" containerName="extract-utilities" Nov 29 07:23:39 crc kubenswrapper[4731]: I1129 07:23:39.626483 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf025757-ec7f-4848-9d2f-376f774e2a83" containerName="extract-utilities" Nov 29 07:23:39 crc kubenswrapper[4731]: I1129 07:23:39.627009 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf025757-ec7f-4848-9d2f-376f774e2a83" containerName="registry-server" Nov 29 07:23:39 crc kubenswrapper[4731]: I1129 07:23:39.627947 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-7vm8q" Nov 29 07:23:39 crc kubenswrapper[4731]: I1129 07:23:39.632427 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Nov 29 07:23:39 crc kubenswrapper[4731]: I1129 07:23:39.632505 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Nov 29 07:23:39 crc kubenswrapper[4731]: I1129 07:23:39.633820 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Nov 29 07:23:39 crc kubenswrapper[4731]: I1129 07:23:39.639400 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-kfp5m" Nov 29 07:23:39 crc kubenswrapper[4731]: I1129 07:23:39.658872 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-7vm8q"] Nov 29 07:23:39 crc kubenswrapper[4731]: I1129 07:23:39.728139 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-rqlhd"] Nov 29 07:23:39 crc kubenswrapper[4731]: I1129 07:23:39.732067 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-rqlhd"] Nov 29 07:23:39 crc kubenswrapper[4731]: I1129 07:23:39.732174 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-rqlhd" Nov 29 07:23:39 crc kubenswrapper[4731]: I1129 07:23:39.736305 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Nov 29 07:23:39 crc kubenswrapper[4731]: I1129 07:23:39.811835 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e788ebde-ec76-4ed3-9179-7cec4adbc16b-config\") pod \"dnsmasq-dns-675f4bcbfc-7vm8q\" (UID: \"e788ebde-ec76-4ed3-9179-7cec4adbc16b\") " pod="openstack/dnsmasq-dns-675f4bcbfc-7vm8q" Nov 29 07:23:39 crc kubenswrapper[4731]: I1129 07:23:39.812048 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfj4n\" (UniqueName: \"kubernetes.io/projected/e788ebde-ec76-4ed3-9179-7cec4adbc16b-kube-api-access-lfj4n\") pod \"dnsmasq-dns-675f4bcbfc-7vm8q\" (UID: \"e788ebde-ec76-4ed3-9179-7cec4adbc16b\") " pod="openstack/dnsmasq-dns-675f4bcbfc-7vm8q" Nov 29 07:23:39 crc kubenswrapper[4731]: I1129 07:23:39.913855 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fed10363-f472-4b80-a0f8-ac29acd4c4ae-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-rqlhd\" (UID: \"fed10363-f472-4b80-a0f8-ac29acd4c4ae\") " pod="openstack/dnsmasq-dns-78dd6ddcc-rqlhd" Nov 29 07:23:39 crc kubenswrapper[4731]: I1129 07:23:39.914175 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78bhg\" (UniqueName: \"kubernetes.io/projected/fed10363-f472-4b80-a0f8-ac29acd4c4ae-kube-api-access-78bhg\") pod \"dnsmasq-dns-78dd6ddcc-rqlhd\" (UID: \"fed10363-f472-4b80-a0f8-ac29acd4c4ae\") " pod="openstack/dnsmasq-dns-78dd6ddcc-rqlhd" Nov 29 07:23:39 crc kubenswrapper[4731]: I1129 07:23:39.914274 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/e788ebde-ec76-4ed3-9179-7cec4adbc16b-config\") pod \"dnsmasq-dns-675f4bcbfc-7vm8q\" (UID: \"e788ebde-ec76-4ed3-9179-7cec4adbc16b\") " pod="openstack/dnsmasq-dns-675f4bcbfc-7vm8q" Nov 29 07:23:39 crc kubenswrapper[4731]: I1129 07:23:39.915149 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfj4n\" (UniqueName: \"kubernetes.io/projected/e788ebde-ec76-4ed3-9179-7cec4adbc16b-kube-api-access-lfj4n\") pod \"dnsmasq-dns-675f4bcbfc-7vm8q\" (UID: \"e788ebde-ec76-4ed3-9179-7cec4adbc16b\") " pod="openstack/dnsmasq-dns-675f4bcbfc-7vm8q" Nov 29 07:23:39 crc kubenswrapper[4731]: I1129 07:23:39.915246 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fed10363-f472-4b80-a0f8-ac29acd4c4ae-config\") pod \"dnsmasq-dns-78dd6ddcc-rqlhd\" (UID: \"fed10363-f472-4b80-a0f8-ac29acd4c4ae\") " pod="openstack/dnsmasq-dns-78dd6ddcc-rqlhd" Nov 29 07:23:39 crc kubenswrapper[4731]: I1129 07:23:39.915691 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e788ebde-ec76-4ed3-9179-7cec4adbc16b-config\") pod \"dnsmasq-dns-675f4bcbfc-7vm8q\" (UID: \"e788ebde-ec76-4ed3-9179-7cec4adbc16b\") " pod="openstack/dnsmasq-dns-675f4bcbfc-7vm8q" Nov 29 07:23:39 crc kubenswrapper[4731]: I1129 07:23:39.951393 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfj4n\" (UniqueName: \"kubernetes.io/projected/e788ebde-ec76-4ed3-9179-7cec4adbc16b-kube-api-access-lfj4n\") pod \"dnsmasq-dns-675f4bcbfc-7vm8q\" (UID: \"e788ebde-ec76-4ed3-9179-7cec4adbc16b\") " pod="openstack/dnsmasq-dns-675f4bcbfc-7vm8q" Nov 29 07:23:39 crc kubenswrapper[4731]: I1129 07:23:39.978554 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-7vm8q" Nov 29 07:23:40 crc kubenswrapper[4731]: I1129 07:23:40.019270 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fed10363-f472-4b80-a0f8-ac29acd4c4ae-config\") pod \"dnsmasq-dns-78dd6ddcc-rqlhd\" (UID: \"fed10363-f472-4b80-a0f8-ac29acd4c4ae\") " pod="openstack/dnsmasq-dns-78dd6ddcc-rqlhd" Nov 29 07:23:40 crc kubenswrapper[4731]: I1129 07:23:40.019659 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fed10363-f472-4b80-a0f8-ac29acd4c4ae-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-rqlhd\" (UID: \"fed10363-f472-4b80-a0f8-ac29acd4c4ae\") " pod="openstack/dnsmasq-dns-78dd6ddcc-rqlhd" Nov 29 07:23:40 crc kubenswrapper[4731]: I1129 07:23:40.020041 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78bhg\" (UniqueName: \"kubernetes.io/projected/fed10363-f472-4b80-a0f8-ac29acd4c4ae-kube-api-access-78bhg\") pod \"dnsmasq-dns-78dd6ddcc-rqlhd\" (UID: \"fed10363-f472-4b80-a0f8-ac29acd4c4ae\") " pod="openstack/dnsmasq-dns-78dd6ddcc-rqlhd" Nov 29 07:23:40 crc kubenswrapper[4731]: I1129 07:23:40.020492 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fed10363-f472-4b80-a0f8-ac29acd4c4ae-config\") pod \"dnsmasq-dns-78dd6ddcc-rqlhd\" (UID: \"fed10363-f472-4b80-a0f8-ac29acd4c4ae\") " pod="openstack/dnsmasq-dns-78dd6ddcc-rqlhd" Nov 29 07:23:40 crc kubenswrapper[4731]: I1129 07:23:40.021034 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fed10363-f472-4b80-a0f8-ac29acd4c4ae-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-rqlhd\" (UID: \"fed10363-f472-4b80-a0f8-ac29acd4c4ae\") " pod="openstack/dnsmasq-dns-78dd6ddcc-rqlhd" Nov 29 07:23:40 crc kubenswrapper[4731]: I1129 
07:23:40.053644 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78bhg\" (UniqueName: \"kubernetes.io/projected/fed10363-f472-4b80-a0f8-ac29acd4c4ae-kube-api-access-78bhg\") pod \"dnsmasq-dns-78dd6ddcc-rqlhd\" (UID: \"fed10363-f472-4b80-a0f8-ac29acd4c4ae\") " pod="openstack/dnsmasq-dns-78dd6ddcc-rqlhd" Nov 29 07:23:40 crc kubenswrapper[4731]: I1129 07:23:40.057418 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-rqlhd" Nov 29 07:23:40 crc kubenswrapper[4731]: I1129 07:23:40.463802 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-7vm8q"] Nov 29 07:23:40 crc kubenswrapper[4731]: I1129 07:23:40.473238 4731 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 29 07:23:40 crc kubenswrapper[4731]: I1129 07:23:40.553848 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-rqlhd"] Nov 29 07:23:40 crc kubenswrapper[4731]: W1129 07:23:40.555602 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfed10363_f472_4b80_a0f8_ac29acd4c4ae.slice/crio-b509f6339ad586db5cadc970d4f6996be70c9bbeb3f3ca8303aa82882f5b849f WatchSource:0}: Error finding container b509f6339ad586db5cadc970d4f6996be70c9bbeb3f3ca8303aa82882f5b849f: Status 404 returned error can't find the container with id b509f6339ad586db5cadc970d4f6996be70c9bbeb3f3ca8303aa82882f5b849f Nov 29 07:23:41 crc kubenswrapper[4731]: I1129 07:23:41.170173 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-rqlhd" event={"ID":"fed10363-f472-4b80-a0f8-ac29acd4c4ae","Type":"ContainerStarted","Data":"b509f6339ad586db5cadc970d4f6996be70c9bbeb3f3ca8303aa82882f5b849f"} Nov 29 07:23:41 crc kubenswrapper[4731]: I1129 07:23:41.171628 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-675f4bcbfc-7vm8q" event={"ID":"e788ebde-ec76-4ed3-9179-7cec4adbc16b","Type":"ContainerStarted","Data":"d2304262c110dc1483ad096a4c635545e800d88862b9ca487acaac29eaacd4e4"} Nov 29 07:23:42 crc kubenswrapper[4731]: I1129 07:23:42.841913 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-7vm8q"] Nov 29 07:23:42 crc kubenswrapper[4731]: I1129 07:23:42.885398 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-rp9mj"] Nov 29 07:23:42 crc kubenswrapper[4731]: I1129 07:23:42.887105 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-rp9mj" Nov 29 07:23:42 crc kubenswrapper[4731]: I1129 07:23:42.906544 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-rp9mj"] Nov 29 07:23:43 crc kubenswrapper[4731]: I1129 07:23:43.088660 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/92c6293a-e74b-4e0b-8384-39dd11b2057c-dns-svc\") pod \"dnsmasq-dns-666b6646f7-rp9mj\" (UID: \"92c6293a-e74b-4e0b-8384-39dd11b2057c\") " pod="openstack/dnsmasq-dns-666b6646f7-rp9mj" Nov 29 07:23:43 crc kubenswrapper[4731]: I1129 07:23:43.089257 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tscrb\" (UniqueName: \"kubernetes.io/projected/92c6293a-e74b-4e0b-8384-39dd11b2057c-kube-api-access-tscrb\") pod \"dnsmasq-dns-666b6646f7-rp9mj\" (UID: \"92c6293a-e74b-4e0b-8384-39dd11b2057c\") " pod="openstack/dnsmasq-dns-666b6646f7-rp9mj" Nov 29 07:23:43 crc kubenswrapper[4731]: I1129 07:23:43.089302 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92c6293a-e74b-4e0b-8384-39dd11b2057c-config\") pod \"dnsmasq-dns-666b6646f7-rp9mj\" (UID: 
\"92c6293a-e74b-4e0b-8384-39dd11b2057c\") " pod="openstack/dnsmasq-dns-666b6646f7-rp9mj" Nov 29 07:23:43 crc kubenswrapper[4731]: I1129 07:23:43.165502 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-rqlhd"] Nov 29 07:23:43 crc kubenswrapper[4731]: I1129 07:23:43.186905 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-8zwzm"] Nov 29 07:23:43 crc kubenswrapper[4731]: I1129 07:23:43.188274 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-8zwzm" Nov 29 07:23:43 crc kubenswrapper[4731]: I1129 07:23:43.191730 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/92c6293a-e74b-4e0b-8384-39dd11b2057c-dns-svc\") pod \"dnsmasq-dns-666b6646f7-rp9mj\" (UID: \"92c6293a-e74b-4e0b-8384-39dd11b2057c\") " pod="openstack/dnsmasq-dns-666b6646f7-rp9mj" Nov 29 07:23:43 crc kubenswrapper[4731]: I1129 07:23:43.191864 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tscrb\" (UniqueName: \"kubernetes.io/projected/92c6293a-e74b-4e0b-8384-39dd11b2057c-kube-api-access-tscrb\") pod \"dnsmasq-dns-666b6646f7-rp9mj\" (UID: \"92c6293a-e74b-4e0b-8384-39dd11b2057c\") " pod="openstack/dnsmasq-dns-666b6646f7-rp9mj" Nov 29 07:23:43 crc kubenswrapper[4731]: I1129 07:23:43.191901 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92c6293a-e74b-4e0b-8384-39dd11b2057c-config\") pod \"dnsmasq-dns-666b6646f7-rp9mj\" (UID: \"92c6293a-e74b-4e0b-8384-39dd11b2057c\") " pod="openstack/dnsmasq-dns-666b6646f7-rp9mj" Nov 29 07:23:43 crc kubenswrapper[4731]: I1129 07:23:43.193238 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/92c6293a-e74b-4e0b-8384-39dd11b2057c-dns-svc\") pod 
\"dnsmasq-dns-666b6646f7-rp9mj\" (UID: \"92c6293a-e74b-4e0b-8384-39dd11b2057c\") " pod="openstack/dnsmasq-dns-666b6646f7-rp9mj" Nov 29 07:23:43 crc kubenswrapper[4731]: I1129 07:23:43.194896 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92c6293a-e74b-4e0b-8384-39dd11b2057c-config\") pod \"dnsmasq-dns-666b6646f7-rp9mj\" (UID: \"92c6293a-e74b-4e0b-8384-39dd11b2057c\") " pod="openstack/dnsmasq-dns-666b6646f7-rp9mj" Nov 29 07:23:43 crc kubenswrapper[4731]: I1129 07:23:43.203055 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-8zwzm"] Nov 29 07:23:43 crc kubenswrapper[4731]: I1129 07:23:43.221068 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tscrb\" (UniqueName: \"kubernetes.io/projected/92c6293a-e74b-4e0b-8384-39dd11b2057c-kube-api-access-tscrb\") pod \"dnsmasq-dns-666b6646f7-rp9mj\" (UID: \"92c6293a-e74b-4e0b-8384-39dd11b2057c\") " pod="openstack/dnsmasq-dns-666b6646f7-rp9mj" Nov 29 07:23:43 crc kubenswrapper[4731]: I1129 07:23:43.225169 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-rp9mj" Nov 29 07:23:43 crc kubenswrapper[4731]: I1129 07:23:43.294356 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b658f01f-5c13-4bfd-932f-496533c6cec4-config\") pod \"dnsmasq-dns-57d769cc4f-8zwzm\" (UID: \"b658f01f-5c13-4bfd-932f-496533c6cec4\") " pod="openstack/dnsmasq-dns-57d769cc4f-8zwzm" Nov 29 07:23:43 crc kubenswrapper[4731]: I1129 07:23:43.294493 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b658f01f-5c13-4bfd-932f-496533c6cec4-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-8zwzm\" (UID: \"b658f01f-5c13-4bfd-932f-496533c6cec4\") " pod="openstack/dnsmasq-dns-57d769cc4f-8zwzm" Nov 29 07:23:43 crc kubenswrapper[4731]: I1129 07:23:43.294514 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfgxr\" (UniqueName: \"kubernetes.io/projected/b658f01f-5c13-4bfd-932f-496533c6cec4-kube-api-access-cfgxr\") pod \"dnsmasq-dns-57d769cc4f-8zwzm\" (UID: \"b658f01f-5c13-4bfd-932f-496533c6cec4\") " pod="openstack/dnsmasq-dns-57d769cc4f-8zwzm" Nov 29 07:23:43 crc kubenswrapper[4731]: I1129 07:23:43.398330 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b658f01f-5c13-4bfd-932f-496533c6cec4-config\") pod \"dnsmasq-dns-57d769cc4f-8zwzm\" (UID: \"b658f01f-5c13-4bfd-932f-496533c6cec4\") " pod="openstack/dnsmasq-dns-57d769cc4f-8zwzm" Nov 29 07:23:43 crc kubenswrapper[4731]: I1129 07:23:43.398426 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b658f01f-5c13-4bfd-932f-496533c6cec4-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-8zwzm\" (UID: \"b658f01f-5c13-4bfd-932f-496533c6cec4\") " 
pod="openstack/dnsmasq-dns-57d769cc4f-8zwzm" Nov 29 07:23:43 crc kubenswrapper[4731]: I1129 07:23:43.398456 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfgxr\" (UniqueName: \"kubernetes.io/projected/b658f01f-5c13-4bfd-932f-496533c6cec4-kube-api-access-cfgxr\") pod \"dnsmasq-dns-57d769cc4f-8zwzm\" (UID: \"b658f01f-5c13-4bfd-932f-496533c6cec4\") " pod="openstack/dnsmasq-dns-57d769cc4f-8zwzm" Nov 29 07:23:43 crc kubenswrapper[4731]: I1129 07:23:43.400477 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b658f01f-5c13-4bfd-932f-496533c6cec4-config\") pod \"dnsmasq-dns-57d769cc4f-8zwzm\" (UID: \"b658f01f-5c13-4bfd-932f-496533c6cec4\") " pod="openstack/dnsmasq-dns-57d769cc4f-8zwzm" Nov 29 07:23:43 crc kubenswrapper[4731]: I1129 07:23:43.401415 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b658f01f-5c13-4bfd-932f-496533c6cec4-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-8zwzm\" (UID: \"b658f01f-5c13-4bfd-932f-496533c6cec4\") " pod="openstack/dnsmasq-dns-57d769cc4f-8zwzm" Nov 29 07:23:43 crc kubenswrapper[4731]: I1129 07:23:43.438228 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfgxr\" (UniqueName: \"kubernetes.io/projected/b658f01f-5c13-4bfd-932f-496533c6cec4-kube-api-access-cfgxr\") pod \"dnsmasq-dns-57d769cc4f-8zwzm\" (UID: \"b658f01f-5c13-4bfd-932f-496533c6cec4\") " pod="openstack/dnsmasq-dns-57d769cc4f-8zwzm" Nov 29 07:23:43 crc kubenswrapper[4731]: I1129 07:23:43.560044 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-8zwzm" Nov 29 07:23:43 crc kubenswrapper[4731]: I1129 07:23:43.801886 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-rp9mj"] Nov 29 07:23:43 crc kubenswrapper[4731]: W1129 07:23:43.811910 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod92c6293a_e74b_4e0b_8384_39dd11b2057c.slice/crio-86bc619eefd9cd6f4482f4b822d01f0528a95df86575971054b78870fe142731 WatchSource:0}: Error finding container 86bc619eefd9cd6f4482f4b822d01f0528a95df86575971054b78870fe142731: Status 404 returned error can't find the container with id 86bc619eefd9cd6f4482f4b822d01f0528a95df86575971054b78870fe142731 Nov 29 07:23:44 crc kubenswrapper[4731]: I1129 07:23:44.032687 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Nov 29 07:23:44 crc kubenswrapper[4731]: I1129 07:23:44.034846 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 29 07:23:44 crc kubenswrapper[4731]: I1129 07:23:44.038997 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 29 07:23:44 crc kubenswrapper[4731]: I1129 07:23:44.039666 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Nov 29 07:23:44 crc kubenswrapper[4731]: I1129 07:23:44.039684 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Nov 29 07:23:44 crc kubenswrapper[4731]: I1129 07:23:44.039844 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Nov 29 07:23:44 crc kubenswrapper[4731]: I1129 07:23:44.039868 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Nov 29 07:23:44 crc kubenswrapper[4731]: I1129 07:23:44.039950 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-k5dcg" Nov 29 07:23:44 crc kubenswrapper[4731]: I1129 07:23:44.040152 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Nov 29 07:23:44 crc kubenswrapper[4731]: I1129 07:23:44.042133 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Nov 29 07:23:44 crc kubenswrapper[4731]: I1129 07:23:44.122013 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-8zwzm"] Nov 29 07:23:44 crc kubenswrapper[4731]: W1129 07:23:44.136188 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb658f01f_5c13_4bfd_932f_496533c6cec4.slice/crio-26ca1bd037acb90509fb3a273dd92c3f256e85f9f860a3fa9e052000d5ba51c2 WatchSource:0}: Error finding container 26ca1bd037acb90509fb3a273dd92c3f256e85f9f860a3fa9e052000d5ba51c2: Status 404 returned error 
can't find the container with id 26ca1bd037acb90509fb3a273dd92c3f256e85f9f860a3fa9e052000d5ba51c2 Nov 29 07:23:44 crc kubenswrapper[4731]: I1129 07:23:44.214129 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ff2928d9-150f-4305-a1bd-6a87ee7b40cc-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"ff2928d9-150f-4305-a1bd-6a87ee7b40cc\") " pod="openstack/rabbitmq-server-0" Nov 29 07:23:44 crc kubenswrapper[4731]: I1129 07:23:44.214200 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ff2928d9-150f-4305-a1bd-6a87ee7b40cc-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"ff2928d9-150f-4305-a1bd-6a87ee7b40cc\") " pod="openstack/rabbitmq-server-0" Nov 29 07:23:44 crc kubenswrapper[4731]: I1129 07:23:44.214235 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ff2928d9-150f-4305-a1bd-6a87ee7b40cc-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"ff2928d9-150f-4305-a1bd-6a87ee7b40cc\") " pod="openstack/rabbitmq-server-0" Nov 29 07:23:44 crc kubenswrapper[4731]: I1129 07:23:44.214259 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2d5b5\" (UniqueName: \"kubernetes.io/projected/ff2928d9-150f-4305-a1bd-6a87ee7b40cc-kube-api-access-2d5b5\") pod \"rabbitmq-server-0\" (UID: \"ff2928d9-150f-4305-a1bd-6a87ee7b40cc\") " pod="openstack/rabbitmq-server-0" Nov 29 07:23:44 crc kubenswrapper[4731]: I1129 07:23:44.214317 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ff2928d9-150f-4305-a1bd-6a87ee7b40cc-plugins-conf\") pod \"rabbitmq-server-0\" (UID: 
\"ff2928d9-150f-4305-a1bd-6a87ee7b40cc\") " pod="openstack/rabbitmq-server-0" Nov 29 07:23:44 crc kubenswrapper[4731]: I1129 07:23:44.214335 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ff2928d9-150f-4305-a1bd-6a87ee7b40cc-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"ff2928d9-150f-4305-a1bd-6a87ee7b40cc\") " pod="openstack/rabbitmq-server-0" Nov 29 07:23:44 crc kubenswrapper[4731]: I1129 07:23:44.214353 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ff2928d9-150f-4305-a1bd-6a87ee7b40cc-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"ff2928d9-150f-4305-a1bd-6a87ee7b40cc\") " pod="openstack/rabbitmq-server-0" Nov 29 07:23:44 crc kubenswrapper[4731]: I1129 07:23:44.214418 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ff2928d9-150f-4305-a1bd-6a87ee7b40cc-pod-info\") pod \"rabbitmq-server-0\" (UID: \"ff2928d9-150f-4305-a1bd-6a87ee7b40cc\") " pod="openstack/rabbitmq-server-0" Nov 29 07:23:44 crc kubenswrapper[4731]: I1129 07:23:44.214461 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ff2928d9-150f-4305-a1bd-6a87ee7b40cc-server-conf\") pod \"rabbitmq-server-0\" (UID: \"ff2928d9-150f-4305-a1bd-6a87ee7b40cc\") " pod="openstack/rabbitmq-server-0" Nov 29 07:23:44 crc kubenswrapper[4731]: I1129 07:23:44.214499 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ff2928d9-150f-4305-a1bd-6a87ee7b40cc-config-data\") pod \"rabbitmq-server-0\" (UID: \"ff2928d9-150f-4305-a1bd-6a87ee7b40cc\") " pod="openstack/rabbitmq-server-0" Nov 29 
07:23:44 crc kubenswrapper[4731]: I1129 07:23:44.214530 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"ff2928d9-150f-4305-a1bd-6a87ee7b40cc\") " pod="openstack/rabbitmq-server-0" Nov 29 07:23:44 crc kubenswrapper[4731]: I1129 07:23:44.252396 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-8zwzm" event={"ID":"b658f01f-5c13-4bfd-932f-496533c6cec4","Type":"ContainerStarted","Data":"26ca1bd037acb90509fb3a273dd92c3f256e85f9f860a3fa9e052000d5ba51c2"} Nov 29 07:23:44 crc kubenswrapper[4731]: I1129 07:23:44.259027 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-rp9mj" event={"ID":"92c6293a-e74b-4e0b-8384-39dd11b2057c","Type":"ContainerStarted","Data":"86bc619eefd9cd6f4482f4b822d01f0528a95df86575971054b78870fe142731"} Nov 29 07:23:44 crc kubenswrapper[4731]: I1129 07:23:44.319132 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ff2928d9-150f-4305-a1bd-6a87ee7b40cc-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"ff2928d9-150f-4305-a1bd-6a87ee7b40cc\") " pod="openstack/rabbitmq-server-0" Nov 29 07:23:44 crc kubenswrapper[4731]: I1129 07:23:44.319194 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2d5b5\" (UniqueName: \"kubernetes.io/projected/ff2928d9-150f-4305-a1bd-6a87ee7b40cc-kube-api-access-2d5b5\") pod \"rabbitmq-server-0\" (UID: \"ff2928d9-150f-4305-a1bd-6a87ee7b40cc\") " pod="openstack/rabbitmq-server-0" Nov 29 07:23:44 crc kubenswrapper[4731]: I1129 07:23:44.319252 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ff2928d9-150f-4305-a1bd-6a87ee7b40cc-plugins-conf\") pod 
\"rabbitmq-server-0\" (UID: \"ff2928d9-150f-4305-a1bd-6a87ee7b40cc\") " pod="openstack/rabbitmq-server-0" Nov 29 07:23:44 crc kubenswrapper[4731]: I1129 07:23:44.319288 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ff2928d9-150f-4305-a1bd-6a87ee7b40cc-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"ff2928d9-150f-4305-a1bd-6a87ee7b40cc\") " pod="openstack/rabbitmq-server-0" Nov 29 07:23:44 crc kubenswrapper[4731]: I1129 07:23:44.319309 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ff2928d9-150f-4305-a1bd-6a87ee7b40cc-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"ff2928d9-150f-4305-a1bd-6a87ee7b40cc\") " pod="openstack/rabbitmq-server-0" Nov 29 07:23:44 crc kubenswrapper[4731]: I1129 07:23:44.319358 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ff2928d9-150f-4305-a1bd-6a87ee7b40cc-pod-info\") pod \"rabbitmq-server-0\" (UID: \"ff2928d9-150f-4305-a1bd-6a87ee7b40cc\") " pod="openstack/rabbitmq-server-0" Nov 29 07:23:44 crc kubenswrapper[4731]: I1129 07:23:44.319391 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ff2928d9-150f-4305-a1bd-6a87ee7b40cc-server-conf\") pod \"rabbitmq-server-0\" (UID: \"ff2928d9-150f-4305-a1bd-6a87ee7b40cc\") " pod="openstack/rabbitmq-server-0" Nov 29 07:23:44 crc kubenswrapper[4731]: I1129 07:23:44.319430 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ff2928d9-150f-4305-a1bd-6a87ee7b40cc-config-data\") pod \"rabbitmq-server-0\" (UID: \"ff2928d9-150f-4305-a1bd-6a87ee7b40cc\") " pod="openstack/rabbitmq-server-0" Nov 29 07:23:44 crc kubenswrapper[4731]: I1129 07:23:44.319464 4731 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"ff2928d9-150f-4305-a1bd-6a87ee7b40cc\") " pod="openstack/rabbitmq-server-0" Nov 29 07:23:44 crc kubenswrapper[4731]: I1129 07:23:44.319660 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ff2928d9-150f-4305-a1bd-6a87ee7b40cc-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"ff2928d9-150f-4305-a1bd-6a87ee7b40cc\") " pod="openstack/rabbitmq-server-0" Nov 29 07:23:44 crc kubenswrapper[4731]: I1129 07:23:44.319735 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ff2928d9-150f-4305-a1bd-6a87ee7b40cc-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"ff2928d9-150f-4305-a1bd-6a87ee7b40cc\") " pod="openstack/rabbitmq-server-0" Nov 29 07:23:44 crc kubenswrapper[4731]: I1129 07:23:44.322141 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ff2928d9-150f-4305-a1bd-6a87ee7b40cc-config-data\") pod \"rabbitmq-server-0\" (UID: \"ff2928d9-150f-4305-a1bd-6a87ee7b40cc\") " pod="openstack/rabbitmq-server-0" Nov 29 07:23:44 crc kubenswrapper[4731]: I1129 07:23:44.322470 4731 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"ff2928d9-150f-4305-a1bd-6a87ee7b40cc\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/rabbitmq-server-0" Nov 29 07:23:44 crc kubenswrapper[4731]: I1129 07:23:44.323123 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/ff2928d9-150f-4305-a1bd-6a87ee7b40cc-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"ff2928d9-150f-4305-a1bd-6a87ee7b40cc\") " pod="openstack/rabbitmq-server-0" Nov 29 07:23:44 crc kubenswrapper[4731]: I1129 07:23:44.323440 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ff2928d9-150f-4305-a1bd-6a87ee7b40cc-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"ff2928d9-150f-4305-a1bd-6a87ee7b40cc\") " pod="openstack/rabbitmq-server-0" Nov 29 07:23:44 crc kubenswrapper[4731]: I1129 07:23:44.323947 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ff2928d9-150f-4305-a1bd-6a87ee7b40cc-server-conf\") pod \"rabbitmq-server-0\" (UID: \"ff2928d9-150f-4305-a1bd-6a87ee7b40cc\") " pod="openstack/rabbitmq-server-0" Nov 29 07:23:44 crc kubenswrapper[4731]: I1129 07:23:44.324471 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ff2928d9-150f-4305-a1bd-6a87ee7b40cc-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"ff2928d9-150f-4305-a1bd-6a87ee7b40cc\") " pod="openstack/rabbitmq-server-0" Nov 29 07:23:44 crc kubenswrapper[4731]: I1129 07:23:44.326789 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ff2928d9-150f-4305-a1bd-6a87ee7b40cc-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"ff2928d9-150f-4305-a1bd-6a87ee7b40cc\") " pod="openstack/rabbitmq-server-0" Nov 29 07:23:44 crc kubenswrapper[4731]: I1129 07:23:44.328132 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ff2928d9-150f-4305-a1bd-6a87ee7b40cc-pod-info\") pod \"rabbitmq-server-0\" (UID: \"ff2928d9-150f-4305-a1bd-6a87ee7b40cc\") " pod="openstack/rabbitmq-server-0" Nov 29 07:23:44 crc 
kubenswrapper[4731]: I1129 07:23:44.328149 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ff2928d9-150f-4305-a1bd-6a87ee7b40cc-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"ff2928d9-150f-4305-a1bd-6a87ee7b40cc\") " pod="openstack/rabbitmq-server-0" Nov 29 07:23:44 crc kubenswrapper[4731]: I1129 07:23:44.329470 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ff2928d9-150f-4305-a1bd-6a87ee7b40cc-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"ff2928d9-150f-4305-a1bd-6a87ee7b40cc\") " pod="openstack/rabbitmq-server-0" Nov 29 07:23:44 crc kubenswrapper[4731]: I1129 07:23:44.346334 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 29 07:23:44 crc kubenswrapper[4731]: I1129 07:23:44.347933 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2d5b5\" (UniqueName: \"kubernetes.io/projected/ff2928d9-150f-4305-a1bd-6a87ee7b40cc-kube-api-access-2d5b5\") pod \"rabbitmq-server-0\" (UID: \"ff2928d9-150f-4305-a1bd-6a87ee7b40cc\") " pod="openstack/rabbitmq-server-0" Nov 29 07:23:44 crc kubenswrapper[4731]: I1129 07:23:44.349082 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:23:44 crc kubenswrapper[4731]: I1129 07:23:44.358533 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Nov 29 07:23:44 crc kubenswrapper[4731]: I1129 07:23:44.358931 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Nov 29 07:23:44 crc kubenswrapper[4731]: I1129 07:23:44.359240 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Nov 29 07:23:44 crc kubenswrapper[4731]: I1129 07:23:44.358945 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-qxd6z" Nov 29 07:23:44 crc kubenswrapper[4731]: I1129 07:23:44.359535 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Nov 29 07:23:44 crc kubenswrapper[4731]: I1129 07:23:44.359756 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Nov 29 07:23:44 crc kubenswrapper[4731]: I1129 07:23:44.360054 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Nov 29 07:23:44 crc kubenswrapper[4731]: I1129 07:23:44.369311 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"ff2928d9-150f-4305-a1bd-6a87ee7b40cc\") " pod="openstack/rabbitmq-server-0" Nov 29 07:23:44 crc kubenswrapper[4731]: I1129 07:23:44.383524 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 29 07:23:44 crc kubenswrapper[4731]: I1129 07:23:44.526701 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/d7971e0f-0e23-4782-9766-4841f04ac1e7-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"d7971e0f-0e23-4782-9766-4841f04ac1e7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:23:44 crc kubenswrapper[4731]: I1129 07:23:44.526782 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d7971e0f-0e23-4782-9766-4841f04ac1e7-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"d7971e0f-0e23-4782-9766-4841f04ac1e7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:23:44 crc kubenswrapper[4731]: I1129 07:23:44.526806 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d7971e0f-0e23-4782-9766-4841f04ac1e7-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"d7971e0f-0e23-4782-9766-4841f04ac1e7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:23:44 crc kubenswrapper[4731]: I1129 07:23:44.526823 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d7971e0f-0e23-4782-9766-4841f04ac1e7-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d7971e0f-0e23-4782-9766-4841f04ac1e7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:23:44 crc kubenswrapper[4731]: I1129 07:23:44.526842 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d7971e0f-0e23-4782-9766-4841f04ac1e7-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"d7971e0f-0e23-4782-9766-4841f04ac1e7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:23:44 crc kubenswrapper[4731]: I1129 07:23:44.526861 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/d7971e0f-0e23-4782-9766-4841f04ac1e7-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"d7971e0f-0e23-4782-9766-4841f04ac1e7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:23:44 crc kubenswrapper[4731]: I1129 07:23:44.527144 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"d7971e0f-0e23-4782-9766-4841f04ac1e7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:23:44 crc kubenswrapper[4731]: I1129 07:23:44.527307 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d7971e0f-0e23-4782-9766-4841f04ac1e7-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"d7971e0f-0e23-4782-9766-4841f04ac1e7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:23:44 crc kubenswrapper[4731]: I1129 07:23:44.527421 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d7971e0f-0e23-4782-9766-4841f04ac1e7-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d7971e0f-0e23-4782-9766-4841f04ac1e7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:23:44 crc kubenswrapper[4731]: I1129 07:23:44.527551 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8b9r6\" (UniqueName: \"kubernetes.io/projected/d7971e0f-0e23-4782-9766-4841f04ac1e7-kube-api-access-8b9r6\") pod \"rabbitmq-cell1-server-0\" (UID: \"d7971e0f-0e23-4782-9766-4841f04ac1e7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:23:44 crc kubenswrapper[4731]: I1129 07:23:44.527652 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/d7971e0f-0e23-4782-9766-4841f04ac1e7-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"d7971e0f-0e23-4782-9766-4841f04ac1e7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:23:44 crc kubenswrapper[4731]: I1129 07:23:44.634839 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8b9r6\" (UniqueName: \"kubernetes.io/projected/d7971e0f-0e23-4782-9766-4841f04ac1e7-kube-api-access-8b9r6\") pod \"rabbitmq-cell1-server-0\" (UID: \"d7971e0f-0e23-4782-9766-4841f04ac1e7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:23:44 crc kubenswrapper[4731]: I1129 07:23:44.634967 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d7971e0f-0e23-4782-9766-4841f04ac1e7-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"d7971e0f-0e23-4782-9766-4841f04ac1e7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:23:44 crc kubenswrapper[4731]: I1129 07:23:44.635002 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d7971e0f-0e23-4782-9766-4841f04ac1e7-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"d7971e0f-0e23-4782-9766-4841f04ac1e7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:23:44 crc kubenswrapper[4731]: I1129 07:23:44.635055 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d7971e0f-0e23-4782-9766-4841f04ac1e7-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"d7971e0f-0e23-4782-9766-4841f04ac1e7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:23:44 crc kubenswrapper[4731]: I1129 07:23:44.635086 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d7971e0f-0e23-4782-9766-4841f04ac1e7-rabbitmq-erlang-cookie\") 
pod \"rabbitmq-cell1-server-0\" (UID: \"d7971e0f-0e23-4782-9766-4841f04ac1e7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:23:44 crc kubenswrapper[4731]: I1129 07:23:44.635106 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d7971e0f-0e23-4782-9766-4841f04ac1e7-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d7971e0f-0e23-4782-9766-4841f04ac1e7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:23:44 crc kubenswrapper[4731]: I1129 07:23:44.635126 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d7971e0f-0e23-4782-9766-4841f04ac1e7-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"d7971e0f-0e23-4782-9766-4841f04ac1e7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:23:44 crc kubenswrapper[4731]: I1129 07:23:44.635152 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d7971e0f-0e23-4782-9766-4841f04ac1e7-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"d7971e0f-0e23-4782-9766-4841f04ac1e7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:23:44 crc kubenswrapper[4731]: I1129 07:23:44.635188 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"d7971e0f-0e23-4782-9766-4841f04ac1e7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:23:44 crc kubenswrapper[4731]: I1129 07:23:44.635247 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d7971e0f-0e23-4782-9766-4841f04ac1e7-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"d7971e0f-0e23-4782-9766-4841f04ac1e7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 
07:23:44 crc kubenswrapper[4731]: I1129 07:23:44.635311 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d7971e0f-0e23-4782-9766-4841f04ac1e7-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d7971e0f-0e23-4782-9766-4841f04ac1e7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:23:44 crc kubenswrapper[4731]: I1129 07:23:44.635865 4731 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"d7971e0f-0e23-4782-9766-4841f04ac1e7\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:23:44 crc kubenswrapper[4731]: I1129 07:23:44.636079 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d7971e0f-0e23-4782-9766-4841f04ac1e7-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"d7971e0f-0e23-4782-9766-4841f04ac1e7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:23:44 crc kubenswrapper[4731]: I1129 07:23:44.636224 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d7971e0f-0e23-4782-9766-4841f04ac1e7-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"d7971e0f-0e23-4782-9766-4841f04ac1e7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:23:44 crc kubenswrapper[4731]: I1129 07:23:44.636860 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d7971e0f-0e23-4782-9766-4841f04ac1e7-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"d7971e0f-0e23-4782-9766-4841f04ac1e7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:23:44 crc kubenswrapper[4731]: I1129 07:23:44.637547 4731 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d7971e0f-0e23-4782-9766-4841f04ac1e7-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d7971e0f-0e23-4782-9766-4841f04ac1e7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:23:44 crc kubenswrapper[4731]: I1129 07:23:44.640787 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d7971e0f-0e23-4782-9766-4841f04ac1e7-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d7971e0f-0e23-4782-9766-4841f04ac1e7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:23:44 crc kubenswrapper[4731]: I1129 07:23:44.647092 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d7971e0f-0e23-4782-9766-4841f04ac1e7-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"d7971e0f-0e23-4782-9766-4841f04ac1e7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:23:44 crc kubenswrapper[4731]: I1129 07:23:44.647453 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d7971e0f-0e23-4782-9766-4841f04ac1e7-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"d7971e0f-0e23-4782-9766-4841f04ac1e7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:23:44 crc kubenswrapper[4731]: I1129 07:23:44.649414 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d7971e0f-0e23-4782-9766-4841f04ac1e7-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"d7971e0f-0e23-4782-9766-4841f04ac1e7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:23:44 crc kubenswrapper[4731]: I1129 07:23:44.673595 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d7971e0f-0e23-4782-9766-4841f04ac1e7-pod-info\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"d7971e0f-0e23-4782-9766-4841f04ac1e7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:23:44 crc kubenswrapper[4731]: I1129 07:23:44.673831 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 29 07:23:44 crc kubenswrapper[4731]: I1129 07:23:44.692252 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"d7971e0f-0e23-4782-9766-4841f04ac1e7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:23:44 crc kubenswrapper[4731]: I1129 07:23:44.694680 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8b9r6\" (UniqueName: \"kubernetes.io/projected/d7971e0f-0e23-4782-9766-4841f04ac1e7-kube-api-access-8b9r6\") pod \"rabbitmq-cell1-server-0\" (UID: \"d7971e0f-0e23-4782-9766-4841f04ac1e7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:23:44 crc kubenswrapper[4731]: I1129 07:23:44.764123 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:23:45 crc kubenswrapper[4731]: I1129 07:23:45.567861 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Nov 29 07:23:45 crc kubenswrapper[4731]: I1129 07:23:45.570885 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Nov 29 07:23:45 crc kubenswrapper[4731]: I1129 07:23:45.576517 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Nov 29 07:23:45 crc kubenswrapper[4731]: I1129 07:23:45.576789 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Nov 29 07:23:45 crc kubenswrapper[4731]: I1129 07:23:45.577071 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-tclvg" Nov 29 07:23:45 crc kubenswrapper[4731]: I1129 07:23:45.577514 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Nov 29 07:23:45 crc kubenswrapper[4731]: I1129 07:23:45.586543 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Nov 29 07:23:45 crc kubenswrapper[4731]: I1129 07:23:45.593177 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 29 07:23:45 crc kubenswrapper[4731]: I1129 07:23:45.661001 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3b26ece-7b12-4cd4-befd-4d42fa5b55fc-operator-scripts\") pod \"openstack-galera-0\" (UID: \"c3b26ece-7b12-4cd4-befd-4d42fa5b55fc\") " pod="openstack/openstack-galera-0" Nov 29 07:23:45 crc kubenswrapper[4731]: I1129 07:23:45.661082 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fg69z\" (UniqueName: \"kubernetes.io/projected/c3b26ece-7b12-4cd4-befd-4d42fa5b55fc-kube-api-access-fg69z\") pod \"openstack-galera-0\" (UID: \"c3b26ece-7b12-4cd4-befd-4d42fa5b55fc\") " pod="openstack/openstack-galera-0" Nov 29 07:23:45 crc kubenswrapper[4731]: I1129 07:23:45.661131 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c3b26ece-7b12-4cd4-befd-4d42fa5b55fc-kolla-config\") pod \"openstack-galera-0\" (UID: \"c3b26ece-7b12-4cd4-befd-4d42fa5b55fc\") " pod="openstack/openstack-galera-0" Nov 29 07:23:45 crc kubenswrapper[4731]: I1129 07:23:45.661164 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3b26ece-7b12-4cd4-befd-4d42fa5b55fc-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"c3b26ece-7b12-4cd4-befd-4d42fa5b55fc\") " pod="openstack/openstack-galera-0" Nov 29 07:23:45 crc kubenswrapper[4731]: I1129 07:23:45.661221 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c3b26ece-7b12-4cd4-befd-4d42fa5b55fc-config-data-default\") pod \"openstack-galera-0\" (UID: \"c3b26ece-7b12-4cd4-befd-4d42fa5b55fc\") " pod="openstack/openstack-galera-0" Nov 29 07:23:45 crc kubenswrapper[4731]: I1129 07:23:45.661321 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"c3b26ece-7b12-4cd4-befd-4d42fa5b55fc\") " pod="openstack/openstack-galera-0" Nov 29 07:23:45 crc kubenswrapper[4731]: I1129 07:23:45.661360 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3b26ece-7b12-4cd4-befd-4d42fa5b55fc-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"c3b26ece-7b12-4cd4-befd-4d42fa5b55fc\") " pod="openstack/openstack-galera-0" Nov 29 07:23:45 crc kubenswrapper[4731]: I1129 07:23:45.661382 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: 
\"kubernetes.io/empty-dir/c3b26ece-7b12-4cd4-befd-4d42fa5b55fc-config-data-generated\") pod \"openstack-galera-0\" (UID: \"c3b26ece-7b12-4cd4-befd-4d42fa5b55fc\") " pod="openstack/openstack-galera-0" Nov 29 07:23:45 crc kubenswrapper[4731]: I1129 07:23:45.763395 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c3b26ece-7b12-4cd4-befd-4d42fa5b55fc-kolla-config\") pod \"openstack-galera-0\" (UID: \"c3b26ece-7b12-4cd4-befd-4d42fa5b55fc\") " pod="openstack/openstack-galera-0" Nov 29 07:23:45 crc kubenswrapper[4731]: I1129 07:23:45.763449 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3b26ece-7b12-4cd4-befd-4d42fa5b55fc-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"c3b26ece-7b12-4cd4-befd-4d42fa5b55fc\") " pod="openstack/openstack-galera-0" Nov 29 07:23:45 crc kubenswrapper[4731]: I1129 07:23:45.763474 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c3b26ece-7b12-4cd4-befd-4d42fa5b55fc-config-data-default\") pod \"openstack-galera-0\" (UID: \"c3b26ece-7b12-4cd4-befd-4d42fa5b55fc\") " pod="openstack/openstack-galera-0" Nov 29 07:23:45 crc kubenswrapper[4731]: I1129 07:23:45.763514 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"c3b26ece-7b12-4cd4-befd-4d42fa5b55fc\") " pod="openstack/openstack-galera-0" Nov 29 07:23:45 crc kubenswrapper[4731]: I1129 07:23:45.763533 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3b26ece-7b12-4cd4-befd-4d42fa5b55fc-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"c3b26ece-7b12-4cd4-befd-4d42fa5b55fc\") " 
pod="openstack/openstack-galera-0" Nov 29 07:23:45 crc kubenswrapper[4731]: I1129 07:23:45.763553 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c3b26ece-7b12-4cd4-befd-4d42fa5b55fc-config-data-generated\") pod \"openstack-galera-0\" (UID: \"c3b26ece-7b12-4cd4-befd-4d42fa5b55fc\") " pod="openstack/openstack-galera-0" Nov 29 07:23:45 crc kubenswrapper[4731]: I1129 07:23:45.763639 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3b26ece-7b12-4cd4-befd-4d42fa5b55fc-operator-scripts\") pod \"openstack-galera-0\" (UID: \"c3b26ece-7b12-4cd4-befd-4d42fa5b55fc\") " pod="openstack/openstack-galera-0" Nov 29 07:23:45 crc kubenswrapper[4731]: I1129 07:23:45.763669 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fg69z\" (UniqueName: \"kubernetes.io/projected/c3b26ece-7b12-4cd4-befd-4d42fa5b55fc-kube-api-access-fg69z\") pod \"openstack-galera-0\" (UID: \"c3b26ece-7b12-4cd4-befd-4d42fa5b55fc\") " pod="openstack/openstack-galera-0" Nov 29 07:23:45 crc kubenswrapper[4731]: I1129 07:23:45.764951 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c3b26ece-7b12-4cd4-befd-4d42fa5b55fc-kolla-config\") pod \"openstack-galera-0\" (UID: \"c3b26ece-7b12-4cd4-befd-4d42fa5b55fc\") " pod="openstack/openstack-galera-0" Nov 29 07:23:45 crc kubenswrapper[4731]: I1129 07:23:45.765916 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c3b26ece-7b12-4cd4-befd-4d42fa5b55fc-config-data-generated\") pod \"openstack-galera-0\" (UID: \"c3b26ece-7b12-4cd4-befd-4d42fa5b55fc\") " pod="openstack/openstack-galera-0" Nov 29 07:23:45 crc kubenswrapper[4731]: I1129 07:23:45.766926 4731 
operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"c3b26ece-7b12-4cd4-befd-4d42fa5b55fc\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/openstack-galera-0" Nov 29 07:23:45 crc kubenswrapper[4731]: I1129 07:23:45.766990 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c3b26ece-7b12-4cd4-befd-4d42fa5b55fc-config-data-default\") pod \"openstack-galera-0\" (UID: \"c3b26ece-7b12-4cd4-befd-4d42fa5b55fc\") " pod="openstack/openstack-galera-0" Nov 29 07:23:45 crc kubenswrapper[4731]: I1129 07:23:45.768083 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3b26ece-7b12-4cd4-befd-4d42fa5b55fc-operator-scripts\") pod \"openstack-galera-0\" (UID: \"c3b26ece-7b12-4cd4-befd-4d42fa5b55fc\") " pod="openstack/openstack-galera-0" Nov 29 07:23:45 crc kubenswrapper[4731]: I1129 07:23:45.769910 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3b26ece-7b12-4cd4-befd-4d42fa5b55fc-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"c3b26ece-7b12-4cd4-befd-4d42fa5b55fc\") " pod="openstack/openstack-galera-0" Nov 29 07:23:45 crc kubenswrapper[4731]: I1129 07:23:45.772937 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3b26ece-7b12-4cd4-befd-4d42fa5b55fc-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"c3b26ece-7b12-4cd4-befd-4d42fa5b55fc\") " pod="openstack/openstack-galera-0" Nov 29 07:23:45 crc kubenswrapper[4731]: I1129 07:23:45.785547 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fg69z\" (UniqueName: 
\"kubernetes.io/projected/c3b26ece-7b12-4cd4-befd-4d42fa5b55fc-kube-api-access-fg69z\") pod \"openstack-galera-0\" (UID: \"c3b26ece-7b12-4cd4-befd-4d42fa5b55fc\") " pod="openstack/openstack-galera-0" Nov 29 07:23:45 crc kubenswrapper[4731]: I1129 07:23:45.820212 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"c3b26ece-7b12-4cd4-befd-4d42fa5b55fc\") " pod="openstack/openstack-galera-0" Nov 29 07:23:45 crc kubenswrapper[4731]: I1129 07:23:45.928784 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Nov 29 07:23:46 crc kubenswrapper[4731]: I1129 07:23:46.840123 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 29 07:23:46 crc kubenswrapper[4731]: I1129 07:23:46.842315 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 29 07:23:46 crc kubenswrapper[4731]: I1129 07:23:46.847926 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Nov 29 07:23:46 crc kubenswrapper[4731]: I1129 07:23:46.848661 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-fx5s5" Nov 29 07:23:46 crc kubenswrapper[4731]: I1129 07:23:46.848815 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Nov 29 07:23:46 crc kubenswrapper[4731]: I1129 07:23:46.849015 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Nov 29 07:23:46 crc kubenswrapper[4731]: I1129 07:23:46.885032 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcsh5\" (UniqueName: 
\"kubernetes.io/projected/ce9f78b3-187a-4988-a15f-fd5b81e07ab4-kube-api-access-lcsh5\") pod \"openstack-cell1-galera-0\" (UID: \"ce9f78b3-187a-4988-a15f-fd5b81e07ab4\") " pod="openstack/openstack-cell1-galera-0" Nov 29 07:23:46 crc kubenswrapper[4731]: I1129 07:23:46.885658 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ce9f78b3-187a-4988-a15f-fd5b81e07ab4-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"ce9f78b3-187a-4988-a15f-fd5b81e07ab4\") " pod="openstack/openstack-cell1-galera-0" Nov 29 07:23:46 crc kubenswrapper[4731]: I1129 07:23:46.885750 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce9f78b3-187a-4988-a15f-fd5b81e07ab4-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"ce9f78b3-187a-4988-a15f-fd5b81e07ab4\") " pod="openstack/openstack-cell1-galera-0" Nov 29 07:23:46 crc kubenswrapper[4731]: I1129 07:23:46.886129 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce9f78b3-187a-4988-a15f-fd5b81e07ab4-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"ce9f78b3-187a-4988-a15f-fd5b81e07ab4\") " pod="openstack/openstack-cell1-galera-0" Nov 29 07:23:46 crc kubenswrapper[4731]: I1129 07:23:46.886383 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-cell1-galera-0\" (UID: \"ce9f78b3-187a-4988-a15f-fd5b81e07ab4\") " pod="openstack/openstack-cell1-galera-0" Nov 29 07:23:46 crc kubenswrapper[4731]: I1129 07:23:46.886439 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" 
(UniqueName: \"kubernetes.io/configmap/ce9f78b3-187a-4988-a15f-fd5b81e07ab4-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"ce9f78b3-187a-4988-a15f-fd5b81e07ab4\") " pod="openstack/openstack-cell1-galera-0" Nov 29 07:23:46 crc kubenswrapper[4731]: I1129 07:23:46.886468 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ce9f78b3-187a-4988-a15f-fd5b81e07ab4-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"ce9f78b3-187a-4988-a15f-fd5b81e07ab4\") " pod="openstack/openstack-cell1-galera-0" Nov 29 07:23:46 crc kubenswrapper[4731]: I1129 07:23:46.887255 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 29 07:23:46 crc kubenswrapper[4731]: I1129 07:23:46.895409 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce9f78b3-187a-4988-a15f-fd5b81e07ab4-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"ce9f78b3-187a-4988-a15f-fd5b81e07ab4\") " pod="openstack/openstack-cell1-galera-0" Nov 29 07:23:46 crc kubenswrapper[4731]: I1129 07:23:46.997009 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce9f78b3-187a-4988-a15f-fd5b81e07ab4-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"ce9f78b3-187a-4988-a15f-fd5b81e07ab4\") " pod="openstack/openstack-cell1-galera-0" Nov 29 07:23:46 crc kubenswrapper[4731]: I1129 07:23:46.997129 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce9f78b3-187a-4988-a15f-fd5b81e07ab4-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"ce9f78b3-187a-4988-a15f-fd5b81e07ab4\") " pod="openstack/openstack-cell1-galera-0" Nov 29 07:23:46 crc kubenswrapper[4731]: I1129 
07:23:46.997192 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-cell1-galera-0\" (UID: \"ce9f78b3-187a-4988-a15f-fd5b81e07ab4\") " pod="openstack/openstack-cell1-galera-0" Nov 29 07:23:46 crc kubenswrapper[4731]: I1129 07:23:46.997226 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ce9f78b3-187a-4988-a15f-fd5b81e07ab4-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"ce9f78b3-187a-4988-a15f-fd5b81e07ab4\") " pod="openstack/openstack-cell1-galera-0" Nov 29 07:23:46 crc kubenswrapper[4731]: I1129 07:23:46.997250 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ce9f78b3-187a-4988-a15f-fd5b81e07ab4-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"ce9f78b3-187a-4988-a15f-fd5b81e07ab4\") " pod="openstack/openstack-cell1-galera-0" Nov 29 07:23:46 crc kubenswrapper[4731]: I1129 07:23:46.997295 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce9f78b3-187a-4988-a15f-fd5b81e07ab4-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"ce9f78b3-187a-4988-a15f-fd5b81e07ab4\") " pod="openstack/openstack-cell1-galera-0" Nov 29 07:23:46 crc kubenswrapper[4731]: I1129 07:23:46.997323 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lcsh5\" (UniqueName: \"kubernetes.io/projected/ce9f78b3-187a-4988-a15f-fd5b81e07ab4-kube-api-access-lcsh5\") pod \"openstack-cell1-galera-0\" (UID: \"ce9f78b3-187a-4988-a15f-fd5b81e07ab4\") " pod="openstack/openstack-cell1-galera-0" Nov 29 07:23:46 crc kubenswrapper[4731]: I1129 07:23:46.997354 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ce9f78b3-187a-4988-a15f-fd5b81e07ab4-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"ce9f78b3-187a-4988-a15f-fd5b81e07ab4\") " pod="openstack/openstack-cell1-galera-0" Nov 29 07:23:46 crc kubenswrapper[4731]: I1129 07:23:46.997941 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ce9f78b3-187a-4988-a15f-fd5b81e07ab4-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"ce9f78b3-187a-4988-a15f-fd5b81e07ab4\") " pod="openstack/openstack-cell1-galera-0" Nov 29 07:23:46 crc kubenswrapper[4731]: I1129 07:23:46.999136 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce9f78b3-187a-4988-a15f-fd5b81e07ab4-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"ce9f78b3-187a-4988-a15f-fd5b81e07ab4\") " pod="openstack/openstack-cell1-galera-0" Nov 29 07:23:46 crc kubenswrapper[4731]: I1129 07:23:46.999731 4731 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-cell1-galera-0\" (UID: \"ce9f78b3-187a-4988-a15f-fd5b81e07ab4\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/openstack-cell1-galera-0" Nov 29 07:23:47 crc kubenswrapper[4731]: I1129 07:23:47.000176 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ce9f78b3-187a-4988-a15f-fd5b81e07ab4-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"ce9f78b3-187a-4988-a15f-fd5b81e07ab4\") " pod="openstack/openstack-cell1-galera-0" Nov 29 07:23:47 crc kubenswrapper[4731]: I1129 07:23:47.002932 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: 
\"kubernetes.io/configmap/ce9f78b3-187a-4988-a15f-fd5b81e07ab4-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"ce9f78b3-187a-4988-a15f-fd5b81e07ab4\") " pod="openstack/openstack-cell1-galera-0" Nov 29 07:23:47 crc kubenswrapper[4731]: I1129 07:23:47.005336 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce9f78b3-187a-4988-a15f-fd5b81e07ab4-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"ce9f78b3-187a-4988-a15f-fd5b81e07ab4\") " pod="openstack/openstack-cell1-galera-0" Nov 29 07:23:47 crc kubenswrapper[4731]: I1129 07:23:47.022119 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce9f78b3-187a-4988-a15f-fd5b81e07ab4-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"ce9f78b3-187a-4988-a15f-fd5b81e07ab4\") " pod="openstack/openstack-cell1-galera-0" Nov 29 07:23:47 crc kubenswrapper[4731]: I1129 07:23:47.044656 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lcsh5\" (UniqueName: \"kubernetes.io/projected/ce9f78b3-187a-4988-a15f-fd5b81e07ab4-kube-api-access-lcsh5\") pod \"openstack-cell1-galera-0\" (UID: \"ce9f78b3-187a-4988-a15f-fd5b81e07ab4\") " pod="openstack/openstack-cell1-galera-0" Nov 29 07:23:47 crc kubenswrapper[4731]: I1129 07:23:47.049427 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-cell1-galera-0\" (UID: \"ce9f78b3-187a-4988-a15f-fd5b81e07ab4\") " pod="openstack/openstack-cell1-galera-0" Nov 29 07:23:47 crc kubenswrapper[4731]: I1129 07:23:47.146327 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Nov 29 07:23:47 crc kubenswrapper[4731]: I1129 07:23:47.155993 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Nov 29 07:23:47 crc kubenswrapper[4731]: I1129 07:23:47.163265 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Nov 29 07:23:47 crc kubenswrapper[4731]: I1129 07:23:47.195899 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Nov 29 07:23:47 crc kubenswrapper[4731]: I1129 07:23:47.198376 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-p28j8" Nov 29 07:23:47 crc kubenswrapper[4731]: I1129 07:23:47.198817 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Nov 29 07:23:47 crc kubenswrapper[4731]: I1129 07:23:47.201951 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 29 07:23:47 crc kubenswrapper[4731]: I1129 07:23:47.212791 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d220afda-dc32-49e3-9cae-b9270f077167-combined-ca-bundle\") pod \"memcached-0\" (UID: \"d220afda-dc32-49e3-9cae-b9270f077167\") " pod="openstack/memcached-0" Nov 29 07:23:47 crc kubenswrapper[4731]: I1129 07:23:47.212878 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2g4q5\" (UniqueName: \"kubernetes.io/projected/d220afda-dc32-49e3-9cae-b9270f077167-kube-api-access-2g4q5\") pod \"memcached-0\" (UID: \"d220afda-dc32-49e3-9cae-b9270f077167\") " pod="openstack/memcached-0" Nov 29 07:23:47 crc kubenswrapper[4731]: I1129 07:23:47.213914 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d220afda-dc32-49e3-9cae-b9270f077167-config-data\") pod \"memcached-0\" (UID: \"d220afda-dc32-49e3-9cae-b9270f077167\") " 
pod="openstack/memcached-0" Nov 29 07:23:47 crc kubenswrapper[4731]: I1129 07:23:47.213998 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/d220afda-dc32-49e3-9cae-b9270f077167-memcached-tls-certs\") pod \"memcached-0\" (UID: \"d220afda-dc32-49e3-9cae-b9270f077167\") " pod="openstack/memcached-0" Nov 29 07:23:47 crc kubenswrapper[4731]: I1129 07:23:47.214096 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d220afda-dc32-49e3-9cae-b9270f077167-kolla-config\") pod \"memcached-0\" (UID: \"d220afda-dc32-49e3-9cae-b9270f077167\") " pod="openstack/memcached-0" Nov 29 07:23:47 crc kubenswrapper[4731]: I1129 07:23:47.315923 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d220afda-dc32-49e3-9cae-b9270f077167-combined-ca-bundle\") pod \"memcached-0\" (UID: \"d220afda-dc32-49e3-9cae-b9270f077167\") " pod="openstack/memcached-0" Nov 29 07:23:47 crc kubenswrapper[4731]: I1129 07:23:47.315990 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2g4q5\" (UniqueName: \"kubernetes.io/projected/d220afda-dc32-49e3-9cae-b9270f077167-kube-api-access-2g4q5\") pod \"memcached-0\" (UID: \"d220afda-dc32-49e3-9cae-b9270f077167\") " pod="openstack/memcached-0" Nov 29 07:23:47 crc kubenswrapper[4731]: I1129 07:23:47.316016 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d220afda-dc32-49e3-9cae-b9270f077167-config-data\") pod \"memcached-0\" (UID: \"d220afda-dc32-49e3-9cae-b9270f077167\") " pod="openstack/memcached-0" Nov 29 07:23:47 crc kubenswrapper[4731]: I1129 07:23:47.316055 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/d220afda-dc32-49e3-9cae-b9270f077167-memcached-tls-certs\") pod \"memcached-0\" (UID: \"d220afda-dc32-49e3-9cae-b9270f077167\") " pod="openstack/memcached-0" Nov 29 07:23:47 crc kubenswrapper[4731]: I1129 07:23:47.316111 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d220afda-dc32-49e3-9cae-b9270f077167-kolla-config\") pod \"memcached-0\" (UID: \"d220afda-dc32-49e3-9cae-b9270f077167\") " pod="openstack/memcached-0" Nov 29 07:23:47 crc kubenswrapper[4731]: I1129 07:23:47.317150 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d220afda-dc32-49e3-9cae-b9270f077167-config-data\") pod \"memcached-0\" (UID: \"d220afda-dc32-49e3-9cae-b9270f077167\") " pod="openstack/memcached-0" Nov 29 07:23:47 crc kubenswrapper[4731]: I1129 07:23:47.317225 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d220afda-dc32-49e3-9cae-b9270f077167-kolla-config\") pod \"memcached-0\" (UID: \"d220afda-dc32-49e3-9cae-b9270f077167\") " pod="openstack/memcached-0" Nov 29 07:23:47 crc kubenswrapper[4731]: I1129 07:23:47.327416 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/d220afda-dc32-49e3-9cae-b9270f077167-memcached-tls-certs\") pod \"memcached-0\" (UID: \"d220afda-dc32-49e3-9cae-b9270f077167\") " pod="openstack/memcached-0" Nov 29 07:23:47 crc kubenswrapper[4731]: I1129 07:23:47.337093 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d220afda-dc32-49e3-9cae-b9270f077167-combined-ca-bundle\") pod \"memcached-0\" (UID: \"d220afda-dc32-49e3-9cae-b9270f077167\") " pod="openstack/memcached-0" Nov 29 07:23:47 crc 
kubenswrapper[4731]: I1129 07:23:47.347884 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2g4q5\" (UniqueName: \"kubernetes.io/projected/d220afda-dc32-49e3-9cae-b9270f077167-kube-api-access-2g4q5\") pod \"memcached-0\" (UID: \"d220afda-dc32-49e3-9cae-b9270f077167\") " pod="openstack/memcached-0" Nov 29 07:23:47 crc kubenswrapper[4731]: I1129 07:23:47.511616 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Nov 29 07:23:48 crc kubenswrapper[4731]: I1129 07:23:48.950076 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Nov 29 07:23:48 crc kubenswrapper[4731]: I1129 07:23:48.953948 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 29 07:23:48 crc kubenswrapper[4731]: I1129 07:23:48.960660 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-v9pqw" Nov 29 07:23:48 crc kubenswrapper[4731]: I1129 07:23:48.969731 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 29 07:23:49 crc kubenswrapper[4731]: I1129 07:23:49.054902 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tr7n\" (UniqueName: \"kubernetes.io/projected/7241db7b-fd6e-431a-b38a-6d3f3404a630-kube-api-access-7tr7n\") pod \"kube-state-metrics-0\" (UID: \"7241db7b-fd6e-431a-b38a-6d3f3404a630\") " pod="openstack/kube-state-metrics-0" Nov 29 07:23:49 crc kubenswrapper[4731]: I1129 07:23:49.156455 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tr7n\" (UniqueName: \"kubernetes.io/projected/7241db7b-fd6e-431a-b38a-6d3f3404a630-kube-api-access-7tr7n\") pod \"kube-state-metrics-0\" (UID: \"7241db7b-fd6e-431a-b38a-6d3f3404a630\") " pod="openstack/kube-state-metrics-0" Nov 29 07:23:49 crc 
kubenswrapper[4731]: I1129 07:23:49.178693 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tr7n\" (UniqueName: \"kubernetes.io/projected/7241db7b-fd6e-431a-b38a-6d3f3404a630-kube-api-access-7tr7n\") pod \"kube-state-metrics-0\" (UID: \"7241db7b-fd6e-431a-b38a-6d3f3404a630\") " pod="openstack/kube-state-metrics-0" Nov 29 07:23:49 crc kubenswrapper[4731]: I1129 07:23:49.275014 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 29 07:23:52 crc kubenswrapper[4731]: I1129 07:23:52.571785 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-hdf9m"] Nov 29 07:23:52 crc kubenswrapper[4731]: I1129 07:23:52.573068 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-hdf9m" Nov 29 07:23:52 crc kubenswrapper[4731]: I1129 07:23:52.578459 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-hw527" Nov 29 07:23:52 crc kubenswrapper[4731]: I1129 07:23:52.578790 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Nov 29 07:23:52 crc kubenswrapper[4731]: I1129 07:23:52.579012 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Nov 29 07:23:52 crc kubenswrapper[4731]: I1129 07:23:52.586307 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-hdf9m"] Nov 29 07:23:52 crc kubenswrapper[4731]: I1129 07:23:52.625926 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3e584c0b-7ce0-45b8-b6a9-60ee16752970-var-run-ovn\") pod \"ovn-controller-hdf9m\" (UID: \"3e584c0b-7ce0-45b8-b6a9-60ee16752970\") " pod="openstack/ovn-controller-hdf9m" Nov 29 07:23:52 crc kubenswrapper[4731]: I1129 07:23:52.625993 
4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e584c0b-7ce0-45b8-b6a9-60ee16752970-ovn-controller-tls-certs\") pod \"ovn-controller-hdf9m\" (UID: \"3e584c0b-7ce0-45b8-b6a9-60ee16752970\") " pod="openstack/ovn-controller-hdf9m" Nov 29 07:23:52 crc kubenswrapper[4731]: I1129 07:23:52.626028 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3e584c0b-7ce0-45b8-b6a9-60ee16752970-scripts\") pod \"ovn-controller-hdf9m\" (UID: \"3e584c0b-7ce0-45b8-b6a9-60ee16752970\") " pod="openstack/ovn-controller-hdf9m" Nov 29 07:23:52 crc kubenswrapper[4731]: I1129 07:23:52.626089 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e584c0b-7ce0-45b8-b6a9-60ee16752970-combined-ca-bundle\") pod \"ovn-controller-hdf9m\" (UID: \"3e584c0b-7ce0-45b8-b6a9-60ee16752970\") " pod="openstack/ovn-controller-hdf9m" Nov 29 07:23:52 crc kubenswrapper[4731]: I1129 07:23:52.626112 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3e584c0b-7ce0-45b8-b6a9-60ee16752970-var-log-ovn\") pod \"ovn-controller-hdf9m\" (UID: \"3e584c0b-7ce0-45b8-b6a9-60ee16752970\") " pod="openstack/ovn-controller-hdf9m" Nov 29 07:23:52 crc kubenswrapper[4731]: I1129 07:23:52.626152 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8nsw\" (UniqueName: \"kubernetes.io/projected/3e584c0b-7ce0-45b8-b6a9-60ee16752970-kube-api-access-t8nsw\") pod \"ovn-controller-hdf9m\" (UID: \"3e584c0b-7ce0-45b8-b6a9-60ee16752970\") " pod="openstack/ovn-controller-hdf9m" Nov 29 07:23:52 crc kubenswrapper[4731]: I1129 07:23:52.626194 4731 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3e584c0b-7ce0-45b8-b6a9-60ee16752970-var-run\") pod \"ovn-controller-hdf9m\" (UID: \"3e584c0b-7ce0-45b8-b6a9-60ee16752970\") " pod="openstack/ovn-controller-hdf9m" Nov 29 07:23:52 crc kubenswrapper[4731]: I1129 07:23:52.643756 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-slgbx"] Nov 29 07:23:52 crc kubenswrapper[4731]: I1129 07:23:52.645899 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-slgbx" Nov 29 07:23:52 crc kubenswrapper[4731]: I1129 07:23:52.663769 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-slgbx"] Nov 29 07:23:52 crc kubenswrapper[4731]: I1129 07:23:52.727480 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a63e01ef-be39-42d9-83e2-a4750d6eb8ba-var-run\") pod \"ovn-controller-ovs-slgbx\" (UID: \"a63e01ef-be39-42d9-83e2-a4750d6eb8ba\") " pod="openstack/ovn-controller-ovs-slgbx" Nov 29 07:23:52 crc kubenswrapper[4731]: I1129 07:23:52.727955 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a63e01ef-be39-42d9-83e2-a4750d6eb8ba-scripts\") pod \"ovn-controller-ovs-slgbx\" (UID: \"a63e01ef-be39-42d9-83e2-a4750d6eb8ba\") " pod="openstack/ovn-controller-ovs-slgbx" Nov 29 07:23:52 crc kubenswrapper[4731]: I1129 07:23:52.727988 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3e584c0b-7ce0-45b8-b6a9-60ee16752970-var-run-ovn\") pod \"ovn-controller-hdf9m\" (UID: \"3e584c0b-7ce0-45b8-b6a9-60ee16752970\") " pod="openstack/ovn-controller-hdf9m" Nov 29 07:23:52 crc kubenswrapper[4731]: I1129 
07:23:52.728016 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e584c0b-7ce0-45b8-b6a9-60ee16752970-ovn-controller-tls-certs\") pod \"ovn-controller-hdf9m\" (UID: \"3e584c0b-7ce0-45b8-b6a9-60ee16752970\") " pod="openstack/ovn-controller-hdf9m" Nov 29 07:23:52 crc kubenswrapper[4731]: I1129 07:23:52.728041 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3e584c0b-7ce0-45b8-b6a9-60ee16752970-scripts\") pod \"ovn-controller-hdf9m\" (UID: \"3e584c0b-7ce0-45b8-b6a9-60ee16752970\") " pod="openstack/ovn-controller-hdf9m" Nov 29 07:23:52 crc kubenswrapper[4731]: I1129 07:23:52.728113 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e584c0b-7ce0-45b8-b6a9-60ee16752970-combined-ca-bundle\") pod \"ovn-controller-hdf9m\" (UID: \"3e584c0b-7ce0-45b8-b6a9-60ee16752970\") " pod="openstack/ovn-controller-hdf9m" Nov 29 07:23:52 crc kubenswrapper[4731]: I1129 07:23:52.728884 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3e584c0b-7ce0-45b8-b6a9-60ee16752970-var-run-ovn\") pod \"ovn-controller-hdf9m\" (UID: \"3e584c0b-7ce0-45b8-b6a9-60ee16752970\") " pod="openstack/ovn-controller-hdf9m" Nov 29 07:23:52 crc kubenswrapper[4731]: I1129 07:23:52.730336 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3e584c0b-7ce0-45b8-b6a9-60ee16752970-scripts\") pod \"ovn-controller-hdf9m\" (UID: \"3e584c0b-7ce0-45b8-b6a9-60ee16752970\") " pod="openstack/ovn-controller-hdf9m" Nov 29 07:23:52 crc kubenswrapper[4731]: I1129 07:23:52.730437 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: 
\"kubernetes.io/host-path/3e584c0b-7ce0-45b8-b6a9-60ee16752970-var-log-ovn\") pod \"ovn-controller-hdf9m\" (UID: \"3e584c0b-7ce0-45b8-b6a9-60ee16752970\") " pod="openstack/ovn-controller-hdf9m" Nov 29 07:23:52 crc kubenswrapper[4731]: I1129 07:23:52.730484 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/a63e01ef-be39-42d9-83e2-a4750d6eb8ba-var-log\") pod \"ovn-controller-ovs-slgbx\" (UID: \"a63e01ef-be39-42d9-83e2-a4750d6eb8ba\") " pod="openstack/ovn-controller-ovs-slgbx" Nov 29 07:23:52 crc kubenswrapper[4731]: I1129 07:23:52.730737 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3e584c0b-7ce0-45b8-b6a9-60ee16752970-var-log-ovn\") pod \"ovn-controller-hdf9m\" (UID: \"3e584c0b-7ce0-45b8-b6a9-60ee16752970\") " pod="openstack/ovn-controller-hdf9m" Nov 29 07:23:52 crc kubenswrapper[4731]: I1129 07:23:52.730797 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/a63e01ef-be39-42d9-83e2-a4750d6eb8ba-var-lib\") pod \"ovn-controller-ovs-slgbx\" (UID: \"a63e01ef-be39-42d9-83e2-a4750d6eb8ba\") " pod="openstack/ovn-controller-ovs-slgbx" Nov 29 07:23:52 crc kubenswrapper[4731]: I1129 07:23:52.730854 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8nsw\" (UniqueName: \"kubernetes.io/projected/3e584c0b-7ce0-45b8-b6a9-60ee16752970-kube-api-access-t8nsw\") pod \"ovn-controller-hdf9m\" (UID: \"3e584c0b-7ce0-45b8-b6a9-60ee16752970\") " pod="openstack/ovn-controller-hdf9m" Nov 29 07:23:52 crc kubenswrapper[4731]: I1129 07:23:52.731182 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z46lq\" (UniqueName: 
\"kubernetes.io/projected/a63e01ef-be39-42d9-83e2-a4750d6eb8ba-kube-api-access-z46lq\") pod \"ovn-controller-ovs-slgbx\" (UID: \"a63e01ef-be39-42d9-83e2-a4750d6eb8ba\") " pod="openstack/ovn-controller-ovs-slgbx" Nov 29 07:23:52 crc kubenswrapper[4731]: I1129 07:23:52.731254 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/a63e01ef-be39-42d9-83e2-a4750d6eb8ba-etc-ovs\") pod \"ovn-controller-ovs-slgbx\" (UID: \"a63e01ef-be39-42d9-83e2-a4750d6eb8ba\") " pod="openstack/ovn-controller-ovs-slgbx" Nov 29 07:23:52 crc kubenswrapper[4731]: I1129 07:23:52.731291 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3e584c0b-7ce0-45b8-b6a9-60ee16752970-var-run\") pod \"ovn-controller-hdf9m\" (UID: \"3e584c0b-7ce0-45b8-b6a9-60ee16752970\") " pod="openstack/ovn-controller-hdf9m" Nov 29 07:23:52 crc kubenswrapper[4731]: I1129 07:23:52.731429 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3e584c0b-7ce0-45b8-b6a9-60ee16752970-var-run\") pod \"ovn-controller-hdf9m\" (UID: \"3e584c0b-7ce0-45b8-b6a9-60ee16752970\") " pod="openstack/ovn-controller-hdf9m" Nov 29 07:23:52 crc kubenswrapper[4731]: I1129 07:23:52.736752 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e584c0b-7ce0-45b8-b6a9-60ee16752970-combined-ca-bundle\") pod \"ovn-controller-hdf9m\" (UID: \"3e584c0b-7ce0-45b8-b6a9-60ee16752970\") " pod="openstack/ovn-controller-hdf9m" Nov 29 07:23:52 crc kubenswrapper[4731]: I1129 07:23:52.736793 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e584c0b-7ce0-45b8-b6a9-60ee16752970-ovn-controller-tls-certs\") pod \"ovn-controller-hdf9m\" (UID: 
\"3e584c0b-7ce0-45b8-b6a9-60ee16752970\") " pod="openstack/ovn-controller-hdf9m" Nov 29 07:23:52 crc kubenswrapper[4731]: I1129 07:23:52.754768 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8nsw\" (UniqueName: \"kubernetes.io/projected/3e584c0b-7ce0-45b8-b6a9-60ee16752970-kube-api-access-t8nsw\") pod \"ovn-controller-hdf9m\" (UID: \"3e584c0b-7ce0-45b8-b6a9-60ee16752970\") " pod="openstack/ovn-controller-hdf9m" Nov 29 07:23:52 crc kubenswrapper[4731]: I1129 07:23:52.834983 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a63e01ef-be39-42d9-83e2-a4750d6eb8ba-var-run\") pod \"ovn-controller-ovs-slgbx\" (UID: \"a63e01ef-be39-42d9-83e2-a4750d6eb8ba\") " pod="openstack/ovn-controller-ovs-slgbx" Nov 29 07:23:52 crc kubenswrapper[4731]: I1129 07:23:52.835123 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a63e01ef-be39-42d9-83e2-a4750d6eb8ba-scripts\") pod \"ovn-controller-ovs-slgbx\" (UID: \"a63e01ef-be39-42d9-83e2-a4750d6eb8ba\") " pod="openstack/ovn-controller-ovs-slgbx" Nov 29 07:23:52 crc kubenswrapper[4731]: I1129 07:23:52.835254 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a63e01ef-be39-42d9-83e2-a4750d6eb8ba-var-run\") pod \"ovn-controller-ovs-slgbx\" (UID: \"a63e01ef-be39-42d9-83e2-a4750d6eb8ba\") " pod="openstack/ovn-controller-ovs-slgbx" Nov 29 07:23:52 crc kubenswrapper[4731]: I1129 07:23:52.835313 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/a63e01ef-be39-42d9-83e2-a4750d6eb8ba-var-log\") pod \"ovn-controller-ovs-slgbx\" (UID: \"a63e01ef-be39-42d9-83e2-a4750d6eb8ba\") " pod="openstack/ovn-controller-ovs-slgbx" Nov 29 07:23:52 crc kubenswrapper[4731]: I1129 07:23:52.835362 4731 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/a63e01ef-be39-42d9-83e2-a4750d6eb8ba-var-lib\") pod \"ovn-controller-ovs-slgbx\" (UID: \"a63e01ef-be39-42d9-83e2-a4750d6eb8ba\") " pod="openstack/ovn-controller-ovs-slgbx" Nov 29 07:23:52 crc kubenswrapper[4731]: I1129 07:23:52.835409 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z46lq\" (UniqueName: \"kubernetes.io/projected/a63e01ef-be39-42d9-83e2-a4750d6eb8ba-kube-api-access-z46lq\") pod \"ovn-controller-ovs-slgbx\" (UID: \"a63e01ef-be39-42d9-83e2-a4750d6eb8ba\") " pod="openstack/ovn-controller-ovs-slgbx" Nov 29 07:23:52 crc kubenswrapper[4731]: I1129 07:23:52.835443 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/a63e01ef-be39-42d9-83e2-a4750d6eb8ba-etc-ovs\") pod \"ovn-controller-ovs-slgbx\" (UID: \"a63e01ef-be39-42d9-83e2-a4750d6eb8ba\") " pod="openstack/ovn-controller-ovs-slgbx" Nov 29 07:23:52 crc kubenswrapper[4731]: I1129 07:23:52.835587 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/a63e01ef-be39-42d9-83e2-a4750d6eb8ba-var-log\") pod \"ovn-controller-ovs-slgbx\" (UID: \"a63e01ef-be39-42d9-83e2-a4750d6eb8ba\") " pod="openstack/ovn-controller-ovs-slgbx" Nov 29 07:23:52 crc kubenswrapper[4731]: I1129 07:23:52.835839 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/a63e01ef-be39-42d9-83e2-a4750d6eb8ba-var-lib\") pod \"ovn-controller-ovs-slgbx\" (UID: \"a63e01ef-be39-42d9-83e2-a4750d6eb8ba\") " pod="openstack/ovn-controller-ovs-slgbx" Nov 29 07:23:52 crc kubenswrapper[4731]: I1129 07:23:52.835940 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: 
\"kubernetes.io/host-path/a63e01ef-be39-42d9-83e2-a4750d6eb8ba-etc-ovs\") pod \"ovn-controller-ovs-slgbx\" (UID: \"a63e01ef-be39-42d9-83e2-a4750d6eb8ba\") " pod="openstack/ovn-controller-ovs-slgbx" Nov 29 07:23:52 crc kubenswrapper[4731]: I1129 07:23:52.838224 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a63e01ef-be39-42d9-83e2-a4750d6eb8ba-scripts\") pod \"ovn-controller-ovs-slgbx\" (UID: \"a63e01ef-be39-42d9-83e2-a4750d6eb8ba\") " pod="openstack/ovn-controller-ovs-slgbx" Nov 29 07:23:52 crc kubenswrapper[4731]: I1129 07:23:52.856766 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z46lq\" (UniqueName: \"kubernetes.io/projected/a63e01ef-be39-42d9-83e2-a4750d6eb8ba-kube-api-access-z46lq\") pod \"ovn-controller-ovs-slgbx\" (UID: \"a63e01ef-be39-42d9-83e2-a4750d6eb8ba\") " pod="openstack/ovn-controller-ovs-slgbx" Nov 29 07:23:52 crc kubenswrapper[4731]: I1129 07:23:52.941531 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-hdf9m" Nov 29 07:23:52 crc kubenswrapper[4731]: I1129 07:23:52.965524 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-slgbx" Nov 29 07:23:53 crc kubenswrapper[4731]: I1129 07:23:53.016048 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 29 07:23:53 crc kubenswrapper[4731]: I1129 07:23:53.017941 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 29 07:23:53 crc kubenswrapper[4731]: I1129 07:23:53.022022 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Nov 29 07:23:53 crc kubenswrapper[4731]: I1129 07:23:53.022282 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Nov 29 07:23:53 crc kubenswrapper[4731]: I1129 07:23:53.022345 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-sn9cq" Nov 29 07:23:53 crc kubenswrapper[4731]: I1129 07:23:53.022733 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Nov 29 07:23:53 crc kubenswrapper[4731]: I1129 07:23:53.024346 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Nov 29 07:23:53 crc kubenswrapper[4731]: I1129 07:23:53.034803 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 29 07:23:53 crc kubenswrapper[4731]: I1129 07:23:53.141544 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/380dda58-7342-44d2-a0e1-b4ac78363de8-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"380dda58-7342-44d2-a0e1-b4ac78363de8\") " pod="openstack/ovsdbserver-nb-0" Nov 29 07:23:53 crc kubenswrapper[4731]: I1129 07:23:53.141903 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/380dda58-7342-44d2-a0e1-b4ac78363de8-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"380dda58-7342-44d2-a0e1-b4ac78363de8\") " pod="openstack/ovsdbserver-nb-0" Nov 29 07:23:53 crc kubenswrapper[4731]: I1129 07:23:53.141984 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/380dda58-7342-44d2-a0e1-b4ac78363de8-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"380dda58-7342-44d2-a0e1-b4ac78363de8\") " pod="openstack/ovsdbserver-nb-0" Nov 29 07:23:53 crc kubenswrapper[4731]: I1129 07:23:53.142032 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/380dda58-7342-44d2-a0e1-b4ac78363de8-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"380dda58-7342-44d2-a0e1-b4ac78363de8\") " pod="openstack/ovsdbserver-nb-0" Nov 29 07:23:53 crc kubenswrapper[4731]: I1129 07:23:53.142148 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-nb-0\" (UID: \"380dda58-7342-44d2-a0e1-b4ac78363de8\") " pod="openstack/ovsdbserver-nb-0" Nov 29 07:23:53 crc kubenswrapper[4731]: I1129 07:23:53.142181 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/380dda58-7342-44d2-a0e1-b4ac78363de8-config\") pod \"ovsdbserver-nb-0\" (UID: \"380dda58-7342-44d2-a0e1-b4ac78363de8\") " pod="openstack/ovsdbserver-nb-0" Nov 29 07:23:53 crc kubenswrapper[4731]: I1129 07:23:53.142205 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/380dda58-7342-44d2-a0e1-b4ac78363de8-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"380dda58-7342-44d2-a0e1-b4ac78363de8\") " pod="openstack/ovsdbserver-nb-0" Nov 29 07:23:53 crc kubenswrapper[4731]: I1129 07:23:53.142237 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82mgp\" (UniqueName: 
\"kubernetes.io/projected/380dda58-7342-44d2-a0e1-b4ac78363de8-kube-api-access-82mgp\") pod \"ovsdbserver-nb-0\" (UID: \"380dda58-7342-44d2-a0e1-b4ac78363de8\") " pod="openstack/ovsdbserver-nb-0" Nov 29 07:23:53 crc kubenswrapper[4731]: I1129 07:23:53.243869 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-nb-0\" (UID: \"380dda58-7342-44d2-a0e1-b4ac78363de8\") " pod="openstack/ovsdbserver-nb-0" Nov 29 07:23:53 crc kubenswrapper[4731]: I1129 07:23:53.243924 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/380dda58-7342-44d2-a0e1-b4ac78363de8-config\") pod \"ovsdbserver-nb-0\" (UID: \"380dda58-7342-44d2-a0e1-b4ac78363de8\") " pod="openstack/ovsdbserver-nb-0" Nov 29 07:23:53 crc kubenswrapper[4731]: I1129 07:23:53.243945 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/380dda58-7342-44d2-a0e1-b4ac78363de8-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"380dda58-7342-44d2-a0e1-b4ac78363de8\") " pod="openstack/ovsdbserver-nb-0" Nov 29 07:23:53 crc kubenswrapper[4731]: I1129 07:23:53.243981 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82mgp\" (UniqueName: \"kubernetes.io/projected/380dda58-7342-44d2-a0e1-b4ac78363de8-kube-api-access-82mgp\") pod \"ovsdbserver-nb-0\" (UID: \"380dda58-7342-44d2-a0e1-b4ac78363de8\") " pod="openstack/ovsdbserver-nb-0" Nov 29 07:23:53 crc kubenswrapper[4731]: I1129 07:23:53.244002 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/380dda58-7342-44d2-a0e1-b4ac78363de8-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"380dda58-7342-44d2-a0e1-b4ac78363de8\") " 
pod="openstack/ovsdbserver-nb-0" Nov 29 07:23:53 crc kubenswrapper[4731]: I1129 07:23:53.244061 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/380dda58-7342-44d2-a0e1-b4ac78363de8-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"380dda58-7342-44d2-a0e1-b4ac78363de8\") " pod="openstack/ovsdbserver-nb-0" Nov 29 07:23:53 crc kubenswrapper[4731]: I1129 07:23:53.244085 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/380dda58-7342-44d2-a0e1-b4ac78363de8-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"380dda58-7342-44d2-a0e1-b4ac78363de8\") " pod="openstack/ovsdbserver-nb-0" Nov 29 07:23:53 crc kubenswrapper[4731]: I1129 07:23:53.244105 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/380dda58-7342-44d2-a0e1-b4ac78363de8-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"380dda58-7342-44d2-a0e1-b4ac78363de8\") " pod="openstack/ovsdbserver-nb-0" Nov 29 07:23:53 crc kubenswrapper[4731]: I1129 07:23:53.244476 4731 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-nb-0\" (UID: \"380dda58-7342-44d2-a0e1-b4ac78363de8\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/ovsdbserver-nb-0" Nov 29 07:23:53 crc kubenswrapper[4731]: I1129 07:23:53.245123 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/380dda58-7342-44d2-a0e1-b4ac78363de8-config\") pod \"ovsdbserver-nb-0\" (UID: \"380dda58-7342-44d2-a0e1-b4ac78363de8\") " pod="openstack/ovsdbserver-nb-0" Nov 29 07:23:53 crc kubenswrapper[4731]: I1129 07:23:53.245462 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"scripts\" (UniqueName: \"kubernetes.io/configmap/380dda58-7342-44d2-a0e1-b4ac78363de8-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"380dda58-7342-44d2-a0e1-b4ac78363de8\") " pod="openstack/ovsdbserver-nb-0" Nov 29 07:23:53 crc kubenswrapper[4731]: I1129 07:23:53.245493 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/380dda58-7342-44d2-a0e1-b4ac78363de8-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"380dda58-7342-44d2-a0e1-b4ac78363de8\") " pod="openstack/ovsdbserver-nb-0" Nov 29 07:23:53 crc kubenswrapper[4731]: I1129 07:23:53.249161 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/380dda58-7342-44d2-a0e1-b4ac78363de8-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"380dda58-7342-44d2-a0e1-b4ac78363de8\") " pod="openstack/ovsdbserver-nb-0" Nov 29 07:23:53 crc kubenswrapper[4731]: I1129 07:23:53.249638 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/380dda58-7342-44d2-a0e1-b4ac78363de8-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"380dda58-7342-44d2-a0e1-b4ac78363de8\") " pod="openstack/ovsdbserver-nb-0" Nov 29 07:23:53 crc kubenswrapper[4731]: I1129 07:23:53.252169 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/380dda58-7342-44d2-a0e1-b4ac78363de8-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"380dda58-7342-44d2-a0e1-b4ac78363de8\") " pod="openstack/ovsdbserver-nb-0" Nov 29 07:23:53 crc kubenswrapper[4731]: I1129 07:23:53.269190 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82mgp\" (UniqueName: \"kubernetes.io/projected/380dda58-7342-44d2-a0e1-b4ac78363de8-kube-api-access-82mgp\") pod \"ovsdbserver-nb-0\" (UID: 
\"380dda58-7342-44d2-a0e1-b4ac78363de8\") " pod="openstack/ovsdbserver-nb-0" Nov 29 07:23:53 crc kubenswrapper[4731]: I1129 07:23:53.273804 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-nb-0\" (UID: \"380dda58-7342-44d2-a0e1-b4ac78363de8\") " pod="openstack/ovsdbserver-nb-0" Nov 29 07:23:53 crc kubenswrapper[4731]: I1129 07:23:53.346662 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 29 07:23:56 crc kubenswrapper[4731]: I1129 07:23:56.645486 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 29 07:23:56 crc kubenswrapper[4731]: I1129 07:23:56.649551 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 29 07:23:56 crc kubenswrapper[4731]: I1129 07:23:56.652693 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Nov 29 07:23:56 crc kubenswrapper[4731]: I1129 07:23:56.653356 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-xzqqh" Nov 29 07:23:56 crc kubenswrapper[4731]: I1129 07:23:56.653815 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Nov 29 07:23:56 crc kubenswrapper[4731]: I1129 07:23:56.656887 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Nov 29 07:23:56 crc kubenswrapper[4731]: I1129 07:23:56.667161 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 29 07:23:56 crc kubenswrapper[4731]: I1129 07:23:56.821247 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/f78aa165-6d13-419f-b13b-8382a111ded8-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f78aa165-6d13-419f-b13b-8382a111ded8\") " pod="openstack/ovsdbserver-sb-0" Nov 29 07:23:56 crc kubenswrapper[4731]: I1129 07:23:56.821297 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjphj\" (UniqueName: \"kubernetes.io/projected/f78aa165-6d13-419f-b13b-8382a111ded8-kube-api-access-cjphj\") pod \"ovsdbserver-sb-0\" (UID: \"f78aa165-6d13-419f-b13b-8382a111ded8\") " pod="openstack/ovsdbserver-sb-0" Nov 29 07:23:56 crc kubenswrapper[4731]: I1129 07:23:56.821327 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f78aa165-6d13-419f-b13b-8382a111ded8-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"f78aa165-6d13-419f-b13b-8382a111ded8\") " pod="openstack/ovsdbserver-sb-0" Nov 29 07:23:56 crc kubenswrapper[4731]: I1129 07:23:56.821634 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f78aa165-6d13-419f-b13b-8382a111ded8-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f78aa165-6d13-419f-b13b-8382a111ded8\") " pod="openstack/ovsdbserver-sb-0" Nov 29 07:23:56 crc kubenswrapper[4731]: I1129 07:23:56.821757 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f78aa165-6d13-419f-b13b-8382a111ded8-config\") pod \"ovsdbserver-sb-0\" (UID: \"f78aa165-6d13-419f-b13b-8382a111ded8\") " pod="openstack/ovsdbserver-sb-0" Nov 29 07:23:56 crc kubenswrapper[4731]: I1129 07:23:56.821788 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod 
\"ovsdbserver-sb-0\" (UID: \"f78aa165-6d13-419f-b13b-8382a111ded8\") " pod="openstack/ovsdbserver-sb-0" Nov 29 07:23:56 crc kubenswrapper[4731]: I1129 07:23:56.821847 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f78aa165-6d13-419f-b13b-8382a111ded8-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"f78aa165-6d13-419f-b13b-8382a111ded8\") " pod="openstack/ovsdbserver-sb-0" Nov 29 07:23:56 crc kubenswrapper[4731]: I1129 07:23:56.821919 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f78aa165-6d13-419f-b13b-8382a111ded8-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"f78aa165-6d13-419f-b13b-8382a111ded8\") " pod="openstack/ovsdbserver-sb-0" Nov 29 07:23:56 crc kubenswrapper[4731]: I1129 07:23:56.923764 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f78aa165-6d13-419f-b13b-8382a111ded8-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f78aa165-6d13-419f-b13b-8382a111ded8\") " pod="openstack/ovsdbserver-sb-0" Nov 29 07:23:56 crc kubenswrapper[4731]: I1129 07:23:56.923974 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f78aa165-6d13-419f-b13b-8382a111ded8-config\") pod \"ovsdbserver-sb-0\" (UID: \"f78aa165-6d13-419f-b13b-8382a111ded8\") " pod="openstack/ovsdbserver-sb-0" Nov 29 07:23:56 crc kubenswrapper[4731]: I1129 07:23:56.924031 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"f78aa165-6d13-419f-b13b-8382a111ded8\") " pod="openstack/ovsdbserver-sb-0" Nov 29 07:23:56 crc kubenswrapper[4731]: I1129 
07:23:56.924140 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f78aa165-6d13-419f-b13b-8382a111ded8-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"f78aa165-6d13-419f-b13b-8382a111ded8\") " pod="openstack/ovsdbserver-sb-0" Nov 29 07:23:56 crc kubenswrapper[4731]: I1129 07:23:56.924272 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f78aa165-6d13-419f-b13b-8382a111ded8-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"f78aa165-6d13-419f-b13b-8382a111ded8\") " pod="openstack/ovsdbserver-sb-0" Nov 29 07:23:56 crc kubenswrapper[4731]: I1129 07:23:56.924440 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f78aa165-6d13-419f-b13b-8382a111ded8-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f78aa165-6d13-419f-b13b-8382a111ded8\") " pod="openstack/ovsdbserver-sb-0" Nov 29 07:23:56 crc kubenswrapper[4731]: I1129 07:23:56.924470 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjphj\" (UniqueName: \"kubernetes.io/projected/f78aa165-6d13-419f-b13b-8382a111ded8-kube-api-access-cjphj\") pod \"ovsdbserver-sb-0\" (UID: \"f78aa165-6d13-419f-b13b-8382a111ded8\") " pod="openstack/ovsdbserver-sb-0" Nov 29 07:23:56 crc kubenswrapper[4731]: I1129 07:23:56.924528 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f78aa165-6d13-419f-b13b-8382a111ded8-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"f78aa165-6d13-419f-b13b-8382a111ded8\") " pod="openstack/ovsdbserver-sb-0" Nov 29 07:23:56 crc kubenswrapper[4731]: I1129 07:23:56.924945 4731 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"f78aa165-6d13-419f-b13b-8382a111ded8\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/ovsdbserver-sb-0" Nov 29 07:23:56 crc kubenswrapper[4731]: I1129 07:23:56.925036 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f78aa165-6d13-419f-b13b-8382a111ded8-config\") pod \"ovsdbserver-sb-0\" (UID: \"f78aa165-6d13-419f-b13b-8382a111ded8\") " pod="openstack/ovsdbserver-sb-0" Nov 29 07:23:56 crc kubenswrapper[4731]: I1129 07:23:56.925233 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f78aa165-6d13-419f-b13b-8382a111ded8-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"f78aa165-6d13-419f-b13b-8382a111ded8\") " pod="openstack/ovsdbserver-sb-0" Nov 29 07:23:56 crc kubenswrapper[4731]: I1129 07:23:56.925605 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f78aa165-6d13-419f-b13b-8382a111ded8-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"f78aa165-6d13-419f-b13b-8382a111ded8\") " pod="openstack/ovsdbserver-sb-0" Nov 29 07:23:56 crc kubenswrapper[4731]: I1129 07:23:56.933086 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f78aa165-6d13-419f-b13b-8382a111ded8-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f78aa165-6d13-419f-b13b-8382a111ded8\") " pod="openstack/ovsdbserver-sb-0" Nov 29 07:23:56 crc kubenswrapper[4731]: I1129 07:23:56.933204 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f78aa165-6d13-419f-b13b-8382a111ded8-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"f78aa165-6d13-419f-b13b-8382a111ded8\") " pod="openstack/ovsdbserver-sb-0" Nov 29 07:23:56 crc 
kubenswrapper[4731]: I1129 07:23:56.934775 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f78aa165-6d13-419f-b13b-8382a111ded8-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f78aa165-6d13-419f-b13b-8382a111ded8\") " pod="openstack/ovsdbserver-sb-0" Nov 29 07:23:56 crc kubenswrapper[4731]: I1129 07:23:56.951930 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjphj\" (UniqueName: \"kubernetes.io/projected/f78aa165-6d13-419f-b13b-8382a111ded8-kube-api-access-cjphj\") pod \"ovsdbserver-sb-0\" (UID: \"f78aa165-6d13-419f-b13b-8382a111ded8\") " pod="openstack/ovsdbserver-sb-0" Nov 29 07:23:56 crc kubenswrapper[4731]: I1129 07:23:56.960059 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"f78aa165-6d13-419f-b13b-8382a111ded8\") " pod="openstack/ovsdbserver-sb-0" Nov 29 07:23:56 crc kubenswrapper[4731]: I1129 07:23:56.978387 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 29 07:24:03 crc kubenswrapper[4731]: I1129 07:24:03.002819 4731 patch_prober.go:28] interesting pod/machine-config-daemon-rscr8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:24:03 crc kubenswrapper[4731]: I1129 07:24:03.004228 4731 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:24:04 crc kubenswrapper[4731]: E1129 07:24:04.410235 4731 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Nov 29 07:24:04 crc kubenswrapper[4731]: E1129 07:24:04.410971 4731 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lfj4n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-7vm8q_openstack(e788ebde-ec76-4ed3-9179-7cec4adbc16b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 07:24:04 crc kubenswrapper[4731]: E1129 07:24:04.412990 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-675f4bcbfc-7vm8q" podUID="e788ebde-ec76-4ed3-9179-7cec4adbc16b" Nov 29 07:24:04 crc kubenswrapper[4731]: E1129 07:24:04.437777 4731 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Nov 29 07:24:04 crc kubenswrapper[4731]: E1129 07:24:04.437964 4731 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-78bhg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePul
lPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-rqlhd_openstack(fed10363-f472-4b80-a0f8-ac29acd4c4ae): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 07:24:04 crc kubenswrapper[4731]: E1129 07:24:04.439122 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-rqlhd" podUID="fed10363-f472-4b80-a0f8-ac29acd4c4ae" Nov 29 07:24:04 crc kubenswrapper[4731]: E1129 07:24:04.449216 4731 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Nov 29 07:24:04 crc kubenswrapper[4731]: E1129 07:24:04.449409 4731 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cfgxr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-8zwzm_openstack(b658f01f-5c13-4bfd-932f-496533c6cec4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 07:24:04 crc kubenswrapper[4731]: E1129 07:24:04.450501 4731 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-8zwzm" podUID="b658f01f-5c13-4bfd-932f-496533c6cec4" Nov 29 07:24:04 crc kubenswrapper[4731]: E1129 07:24:04.516478 4731 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Nov 29 07:24:04 crc kubenswrapper[4731]: E1129 07:24:04.517109 4731 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tscrb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-rp9mj_openstack(92c6293a-e74b-4e0b-8384-39dd11b2057c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 07:24:04 crc kubenswrapper[4731]: E1129 07:24:04.518325 4731 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-rp9mj" podUID="92c6293a-e74b-4e0b-8384-39dd11b2057c" Nov 29 07:24:04 crc kubenswrapper[4731]: E1129 07:24:04.543547 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-8zwzm" podUID="b658f01f-5c13-4bfd-932f-496533c6cec4" Nov 29 07:24:04 crc kubenswrapper[4731]: E1129 07:24:04.552988 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-rp9mj" podUID="92c6293a-e74b-4e0b-8384-39dd11b2057c" Nov 29 07:24:05 crc kubenswrapper[4731]: I1129 07:24:05.238518 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-rqlhd" Nov 29 07:24:05 crc kubenswrapper[4731]: I1129 07:24:05.252085 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-7vm8q" Nov 29 07:24:05 crc kubenswrapper[4731]: I1129 07:24:05.342758 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fed10363-f472-4b80-a0f8-ac29acd4c4ae-config\") pod \"fed10363-f472-4b80-a0f8-ac29acd4c4ae\" (UID: \"fed10363-f472-4b80-a0f8-ac29acd4c4ae\") " Nov 29 07:24:05 crc kubenswrapper[4731]: I1129 07:24:05.342808 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fed10363-f472-4b80-a0f8-ac29acd4c4ae-dns-svc\") pod \"fed10363-f472-4b80-a0f8-ac29acd4c4ae\" (UID: \"fed10363-f472-4b80-a0f8-ac29acd4c4ae\") " Nov 29 07:24:05 crc kubenswrapper[4731]: I1129 07:24:05.342992 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-78bhg\" (UniqueName: \"kubernetes.io/projected/fed10363-f472-4b80-a0f8-ac29acd4c4ae-kube-api-access-78bhg\") pod \"fed10363-f472-4b80-a0f8-ac29acd4c4ae\" (UID: \"fed10363-f472-4b80-a0f8-ac29acd4c4ae\") " Nov 29 07:24:05 crc kubenswrapper[4731]: I1129 07:24:05.343528 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fed10363-f472-4b80-a0f8-ac29acd4c4ae-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fed10363-f472-4b80-a0f8-ac29acd4c4ae" (UID: "fed10363-f472-4b80-a0f8-ac29acd4c4ae"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:24:05 crc kubenswrapper[4731]: I1129 07:24:05.343583 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fed10363-f472-4b80-a0f8-ac29acd4c4ae-config" (OuterVolumeSpecName: "config") pod "fed10363-f472-4b80-a0f8-ac29acd4c4ae" (UID: "fed10363-f472-4b80-a0f8-ac29acd4c4ae"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:24:05 crc kubenswrapper[4731]: I1129 07:24:05.368092 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fed10363-f472-4b80-a0f8-ac29acd4c4ae-kube-api-access-78bhg" (OuterVolumeSpecName: "kube-api-access-78bhg") pod "fed10363-f472-4b80-a0f8-ac29acd4c4ae" (UID: "fed10363-f472-4b80-a0f8-ac29acd4c4ae"). InnerVolumeSpecName "kube-api-access-78bhg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:24:05 crc kubenswrapper[4731]: I1129 07:24:05.444408 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e788ebde-ec76-4ed3-9179-7cec4adbc16b-config\") pod \"e788ebde-ec76-4ed3-9179-7cec4adbc16b\" (UID: \"e788ebde-ec76-4ed3-9179-7cec4adbc16b\") " Nov 29 07:24:05 crc kubenswrapper[4731]: I1129 07:24:05.444715 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lfj4n\" (UniqueName: \"kubernetes.io/projected/e788ebde-ec76-4ed3-9179-7cec4adbc16b-kube-api-access-lfj4n\") pod \"e788ebde-ec76-4ed3-9179-7cec4adbc16b\" (UID: \"e788ebde-ec76-4ed3-9179-7cec4adbc16b\") " Nov 29 07:24:05 crc kubenswrapper[4731]: I1129 07:24:05.445172 4731 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fed10363-f472-4b80-a0f8-ac29acd4c4ae-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:24:05 crc kubenswrapper[4731]: I1129 07:24:05.445190 4731 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fed10363-f472-4b80-a0f8-ac29acd4c4ae-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 29 07:24:05 crc kubenswrapper[4731]: I1129 07:24:05.445202 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-78bhg\" (UniqueName: \"kubernetes.io/projected/fed10363-f472-4b80-a0f8-ac29acd4c4ae-kube-api-access-78bhg\") on node \"crc\" 
DevicePath \"\"" Nov 29 07:24:05 crc kubenswrapper[4731]: I1129 07:24:05.445214 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e788ebde-ec76-4ed3-9179-7cec4adbc16b-config" (OuterVolumeSpecName: "config") pod "e788ebde-ec76-4ed3-9179-7cec4adbc16b" (UID: "e788ebde-ec76-4ed3-9179-7cec4adbc16b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:24:05 crc kubenswrapper[4731]: I1129 07:24:05.448840 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e788ebde-ec76-4ed3-9179-7cec4adbc16b-kube-api-access-lfj4n" (OuterVolumeSpecName: "kube-api-access-lfj4n") pod "e788ebde-ec76-4ed3-9179-7cec4adbc16b" (UID: "e788ebde-ec76-4ed3-9179-7cec4adbc16b"). InnerVolumeSpecName "kube-api-access-lfj4n". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:24:05 crc kubenswrapper[4731]: I1129 07:24:05.495508 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-hdf9m"] Nov 29 07:24:05 crc kubenswrapper[4731]: I1129 07:24:05.518673 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 29 07:24:05 crc kubenswrapper[4731]: I1129 07:24:05.529904 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 29 07:24:05 crc kubenswrapper[4731]: I1129 07:24:05.542301 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 29 07:24:05 crc kubenswrapper[4731]: I1129 07:24:05.547477 4731 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e788ebde-ec76-4ed3-9179-7cec4adbc16b-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:24:05 crc kubenswrapper[4731]: I1129 07:24:05.547519 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lfj4n\" (UniqueName: 
\"kubernetes.io/projected/e788ebde-ec76-4ed3-9179-7cec4adbc16b-kube-api-access-lfj4n\") on node \"crc\" DevicePath \"\"" Nov 29 07:24:05 crc kubenswrapper[4731]: I1129 07:24:05.550213 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 29 07:24:05 crc kubenswrapper[4731]: I1129 07:24:05.559925 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 29 07:24:05 crc kubenswrapper[4731]: I1129 07:24:05.566878 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Nov 29 07:24:05 crc kubenswrapper[4731]: W1129 07:24:05.568521 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7241db7b_fd6e_431a_b38a_6d3f3404a630.slice/crio-b5bc2f8a248446e98868df724ccda0d65f94684f8d4a410d285d4c40aed1da1b WatchSource:0}: Error finding container b5bc2f8a248446e98868df724ccda0d65f94684f8d4a410d285d4c40aed1da1b: Status 404 returned error can't find the container with id b5bc2f8a248446e98868df724ccda0d65f94684f8d4a410d285d4c40aed1da1b Nov 29 07:24:05 crc kubenswrapper[4731]: I1129 07:24:05.571191 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-rqlhd" event={"ID":"fed10363-f472-4b80-a0f8-ac29acd4c4ae","Type":"ContainerDied","Data":"b509f6339ad586db5cadc970d4f6996be70c9bbeb3f3ca8303aa82882f5b849f"} Nov 29 07:24:05 crc kubenswrapper[4731]: I1129 07:24:05.571236 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-rqlhd" Nov 29 07:24:05 crc kubenswrapper[4731]: I1129 07:24:05.576696 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-7vm8q" event={"ID":"e788ebde-ec76-4ed3-9179-7cec4adbc16b","Type":"ContainerDied","Data":"d2304262c110dc1483ad096a4c635545e800d88862b9ca487acaac29eaacd4e4"} Nov 29 07:24:05 crc kubenswrapper[4731]: I1129 07:24:05.576703 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-7vm8q" Nov 29 07:24:05 crc kubenswrapper[4731]: I1129 07:24:05.582879 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-hdf9m" event={"ID":"3e584c0b-7ce0-45b8-b6a9-60ee16752970","Type":"ContainerStarted","Data":"2b5a6af7c64fdbb6763357c49598a1b99d1dcb52c5bfa067ff6623965c7346ba"} Nov 29 07:24:05 crc kubenswrapper[4731]: W1129 07:24:05.597170 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc3b26ece_7b12_4cd4_befd_4d42fa5b55fc.slice/crio-720ff173c67f8ea4d5517615c11ca13d5614f67875c34f55d71642ff29a10dee WatchSource:0}: Error finding container 720ff173c67f8ea4d5517615c11ca13d5614f67875c34f55d71642ff29a10dee: Status 404 returned error can't find the container with id 720ff173c67f8ea4d5517615c11ca13d5614f67875c34f55d71642ff29a10dee Nov 29 07:24:05 crc kubenswrapper[4731]: W1129 07:24:05.598892 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podce9f78b3_187a_4988_a15f_fd5b81e07ab4.slice/crio-e4d14c4b11322a9f441efa8329a92f1fbc9ffe0d70955b3af0abe5cbec8f25d9 WatchSource:0}: Error finding container e4d14c4b11322a9f441efa8329a92f1fbc9ffe0d70955b3af0abe5cbec8f25d9: Status 404 returned error can't find the container with id e4d14c4b11322a9f441efa8329a92f1fbc9ffe0d70955b3af0abe5cbec8f25d9 Nov 29 07:24:05 crc 
kubenswrapper[4731]: I1129 07:24:05.643876 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 29 07:24:05 crc kubenswrapper[4731]: W1129 07:24:05.658388 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf78aa165_6d13_419f_b13b_8382a111ded8.slice/crio-eaef4632149790c7f1195364d6bc4cc6cbec67373d5760646ad327ed6a8b66d3 WatchSource:0}: Error finding container eaef4632149790c7f1195364d6bc4cc6cbec67373d5760646ad327ed6a8b66d3: Status 404 returned error can't find the container with id eaef4632149790c7f1195364d6bc4cc6cbec67373d5760646ad327ed6a8b66d3 Nov 29 07:24:05 crc kubenswrapper[4731]: I1129 07:24:05.702763 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-rqlhd"] Nov 29 07:24:05 crc kubenswrapper[4731]: I1129 07:24:05.738109 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-rqlhd"] Nov 29 07:24:05 crc kubenswrapper[4731]: I1129 07:24:05.757967 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-7vm8q"] Nov 29 07:24:05 crc kubenswrapper[4731]: I1129 07:24:05.765815 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-7vm8q"] Nov 29 07:24:05 crc kubenswrapper[4731]: I1129 07:24:05.819026 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e788ebde-ec76-4ed3-9179-7cec4adbc16b" path="/var/lib/kubelet/pods/e788ebde-ec76-4ed3-9179-7cec4adbc16b/volumes" Nov 29 07:24:05 crc kubenswrapper[4731]: I1129 07:24:05.819616 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fed10363-f472-4b80-a0f8-ac29acd4c4ae" path="/var/lib/kubelet/pods/fed10363-f472-4b80-a0f8-ac29acd4c4ae/volumes" Nov 29 07:24:05 crc kubenswrapper[4731]: W1129 07:24:05.946389 4731 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda63e01ef_be39_42d9_83e2_a4750d6eb8ba.slice/crio-3662982a878433b968aec01ad551b840d7d0dbddeb1fd367e12d8f6b196ce17a WatchSource:0}: Error finding container 3662982a878433b968aec01ad551b840d7d0dbddeb1fd367e12d8f6b196ce17a: Status 404 returned error can't find the container with id 3662982a878433b968aec01ad551b840d7d0dbddeb1fd367e12d8f6b196ce17a Nov 29 07:24:05 crc kubenswrapper[4731]: I1129 07:24:05.949468 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-slgbx"] Nov 29 07:24:06 crc kubenswrapper[4731]: I1129 07:24:06.601169 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ff2928d9-150f-4305-a1bd-6a87ee7b40cc","Type":"ContainerStarted","Data":"d3f55ca44dad32ec6f1b7d50b6ecca8babe0c9408373014eb593770d9b3e6641"} Nov 29 07:24:06 crc kubenswrapper[4731]: I1129 07:24:06.605033 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"f78aa165-6d13-419f-b13b-8382a111ded8","Type":"ContainerStarted","Data":"eaef4632149790c7f1195364d6bc4cc6cbec67373d5760646ad327ed6a8b66d3"} Nov 29 07:24:06 crc kubenswrapper[4731]: I1129 07:24:06.608208 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"ce9f78b3-187a-4988-a15f-fd5b81e07ab4","Type":"ContainerStarted","Data":"e4d14c4b11322a9f441efa8329a92f1fbc9ffe0d70955b3af0abe5cbec8f25d9"} Nov 29 07:24:06 crc kubenswrapper[4731]: I1129 07:24:06.610130 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d7971e0f-0e23-4782-9766-4841f04ac1e7","Type":"ContainerStarted","Data":"c33a117aee2f4db73d3dee9f25e8cc2de0898802d935f30f0f6cfc1888c9e387"} Nov 29 07:24:06 crc kubenswrapper[4731]: I1129 07:24:06.615259 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-slgbx" 
event={"ID":"a63e01ef-be39-42d9-83e2-a4750d6eb8ba","Type":"ContainerStarted","Data":"3662982a878433b968aec01ad551b840d7d0dbddeb1fd367e12d8f6b196ce17a"} Nov 29 07:24:06 crc kubenswrapper[4731]: I1129 07:24:06.619185 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"c3b26ece-7b12-4cd4-befd-4d42fa5b55fc","Type":"ContainerStarted","Data":"720ff173c67f8ea4d5517615c11ca13d5614f67875c34f55d71642ff29a10dee"} Nov 29 07:24:06 crc kubenswrapper[4731]: I1129 07:24:06.621747 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"7241db7b-fd6e-431a-b38a-6d3f3404a630","Type":"ContainerStarted","Data":"b5bc2f8a248446e98868df724ccda0d65f94684f8d4a410d285d4c40aed1da1b"} Nov 29 07:24:06 crc kubenswrapper[4731]: I1129 07:24:06.630100 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"d220afda-dc32-49e3-9cae-b9270f077167","Type":"ContainerStarted","Data":"280cbe041ead2a0a908854a8e9da4da9d6be3aaf1596c28ff08cb98c0e6f8ca0"} Nov 29 07:24:06 crc kubenswrapper[4731]: I1129 07:24:06.745917 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 29 07:24:06 crc kubenswrapper[4731]: W1129 07:24:06.815930 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod380dda58_7342_44d2_a0e1_b4ac78363de8.slice/crio-99d6bc08ea63900f3bacfb1db348acbfe9006bfd02d2fc4f1a3e44396ab650aa WatchSource:0}: Error finding container 99d6bc08ea63900f3bacfb1db348acbfe9006bfd02d2fc4f1a3e44396ab650aa: Status 404 returned error can't find the container with id 99d6bc08ea63900f3bacfb1db348acbfe9006bfd02d2fc4f1a3e44396ab650aa Nov 29 07:24:07 crc kubenswrapper[4731]: I1129 07:24:07.644845 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" 
event={"ID":"380dda58-7342-44d2-a0e1-b4ac78363de8","Type":"ContainerStarted","Data":"99d6bc08ea63900f3bacfb1db348acbfe9006bfd02d2fc4f1a3e44396ab650aa"} Nov 29 07:24:15 crc kubenswrapper[4731]: I1129 07:24:15.755942 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"ce9f78b3-187a-4988-a15f-fd5b81e07ab4","Type":"ContainerStarted","Data":"358648189b517c978c2d3312c660b1818c5c8c075452df1ec6659fa0828ae562"} Nov 29 07:24:15 crc kubenswrapper[4731]: I1129 07:24:15.776903 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"c3b26ece-7b12-4cd4-befd-4d42fa5b55fc","Type":"ContainerStarted","Data":"529e0b90c0fada668dc625124eef4fc57d0022569bf7290f05734a7891b1e0f6"} Nov 29 07:24:15 crc kubenswrapper[4731]: I1129 07:24:15.784266 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"d220afda-dc32-49e3-9cae-b9270f077167","Type":"ContainerStarted","Data":"aa430f599f920a767839b1a223a45c1d3008f84fd586c10fa1ad1615b490d7ac"} Nov 29 07:24:15 crc kubenswrapper[4731]: I1129 07:24:15.784994 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Nov 29 07:24:15 crc kubenswrapper[4731]: I1129 07:24:15.799290 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"f78aa165-6d13-419f-b13b-8382a111ded8","Type":"ContainerStarted","Data":"bdd2f5d971a60443868fe46dc6aa51a9d8e3c4db42b5669cade0976b73e91742"} Nov 29 07:24:15 crc kubenswrapper[4731]: I1129 07:24:15.840303 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=19.63569631 podStartE2EDuration="28.840273802s" podCreationTimestamp="2025-11-29 07:23:47 +0000 UTC" firstStartedPulling="2025-11-29 07:24:05.583053995 +0000 UTC m=+1084.473415098" lastFinishedPulling="2025-11-29 07:24:14.787631487 +0000 UTC m=+1093.677992590" 
observedRunningTime="2025-11-29 07:24:15.829924359 +0000 UTC m=+1094.720285462" watchObservedRunningTime="2025-11-29 07:24:15.840273802 +0000 UTC m=+1094.730634905" Nov 29 07:24:16 crc kubenswrapper[4731]: I1129 07:24:16.812042 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-hdf9m" event={"ID":"3e584c0b-7ce0-45b8-b6a9-60ee16752970","Type":"ContainerStarted","Data":"ab9d1f15f17903febe541b48f326eb41ac570fa06f0c42bb2d61a6cb91ebf649"} Nov 29 07:24:16 crc kubenswrapper[4731]: I1129 07:24:16.812392 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-hdf9m" Nov 29 07:24:16 crc kubenswrapper[4731]: I1129 07:24:16.814912 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d7971e0f-0e23-4782-9766-4841f04ac1e7","Type":"ContainerStarted","Data":"f40118db8ab07db8de5595473f72aed1dea64c65ae58bf29725a18caee3c64bc"} Nov 29 07:24:16 crc kubenswrapper[4731]: I1129 07:24:16.818022 4731 generic.go:334] "Generic (PLEG): container finished" podID="a63e01ef-be39-42d9-83e2-a4750d6eb8ba" containerID="e47382512371211a391423ad434307d57d450a4df2d550042906a63d0b86548c" exitCode=0 Nov 29 07:24:16 crc kubenswrapper[4731]: I1129 07:24:16.818103 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-slgbx" event={"ID":"a63e01ef-be39-42d9-83e2-a4750d6eb8ba","Type":"ContainerDied","Data":"e47382512371211a391423ad434307d57d450a4df2d550042906a63d0b86548c"} Nov 29 07:24:16 crc kubenswrapper[4731]: I1129 07:24:16.821036 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"380dda58-7342-44d2-a0e1-b4ac78363de8","Type":"ContainerStarted","Data":"53ff9f08f113dc9b67e05237c7754111d480d48391c11b04c54656f015a20aa5"} Nov 29 07:24:16 crc kubenswrapper[4731]: I1129 07:24:16.823435 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" 
event={"ID":"7241db7b-fd6e-431a-b38a-6d3f3404a630","Type":"ContainerStarted","Data":"1bcc91bcdbeb0a72ea88a3a4f9801261f1df5f6a980ad1e1d5a2de646d0ce7fb"} Nov 29 07:24:16 crc kubenswrapper[4731]: I1129 07:24:16.835452 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-hdf9m" podStartSLOduration=15.124733302 podStartE2EDuration="24.835433937s" podCreationTimestamp="2025-11-29 07:23:52 +0000 UTC" firstStartedPulling="2025-11-29 07:24:05.506729775 +0000 UTC m=+1084.397090868" lastFinishedPulling="2025-11-29 07:24:15.2174304 +0000 UTC m=+1094.107791503" observedRunningTime="2025-11-29 07:24:16.834069517 +0000 UTC m=+1095.724430620" watchObservedRunningTime="2025-11-29 07:24:16.835433937 +0000 UTC m=+1095.725795040" Nov 29 07:24:16 crc kubenswrapper[4731]: I1129 07:24:16.878538 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=19.166552084 podStartE2EDuration="28.878520026s" podCreationTimestamp="2025-11-29 07:23:48 +0000 UTC" firstStartedPulling="2025-11-29 07:24:05.593670095 +0000 UTC m=+1084.484031188" lastFinishedPulling="2025-11-29 07:24:15.305638027 +0000 UTC m=+1094.195999130" observedRunningTime="2025-11-29 07:24:16.876889088 +0000 UTC m=+1095.767250191" watchObservedRunningTime="2025-11-29 07:24:16.878520026 +0000 UTC m=+1095.768881129" Nov 29 07:24:17 crc kubenswrapper[4731]: I1129 07:24:17.838401 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-slgbx" event={"ID":"a63e01ef-be39-42d9-83e2-a4750d6eb8ba","Type":"ContainerStarted","Data":"084e3c4581bdee35485bd5f2296e2fe60ce38415ea9b53454604253ba34c43e8"} Nov 29 07:24:17 crc kubenswrapper[4731]: I1129 07:24:17.839240 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-slgbx" 
event={"ID":"a63e01ef-be39-42d9-83e2-a4750d6eb8ba","Type":"ContainerStarted","Data":"98abceb6124478299f8c3ff19f6d949c0dd937198e505b3cca6d9bdbff075d86"} Nov 29 07:24:17 crc kubenswrapper[4731]: I1129 07:24:17.839261 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-slgbx" Nov 29 07:24:17 crc kubenswrapper[4731]: I1129 07:24:17.839277 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-slgbx" Nov 29 07:24:17 crc kubenswrapper[4731]: I1129 07:24:17.840872 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ff2928d9-150f-4305-a1bd-6a87ee7b40cc","Type":"ContainerStarted","Data":"0f1cca498c8ac89e448453e329b710b354c3bc57f22d4761166594662706c6f4"} Nov 29 07:24:17 crc kubenswrapper[4731]: I1129 07:24:17.841609 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 29 07:24:17 crc kubenswrapper[4731]: I1129 07:24:17.890459 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-slgbx" podStartSLOduration=16.623417933 podStartE2EDuration="25.890441179s" podCreationTimestamp="2025-11-29 07:23:52 +0000 UTC" firstStartedPulling="2025-11-29 07:24:05.949820657 +0000 UTC m=+1084.840181760" lastFinishedPulling="2025-11-29 07:24:15.216843903 +0000 UTC m=+1094.107205006" observedRunningTime="2025-11-29 07:24:17.861525065 +0000 UTC m=+1096.751886178" watchObservedRunningTime="2025-11-29 07:24:17.890441179 +0000 UTC m=+1096.780802282" Nov 29 07:24:19 crc kubenswrapper[4731]: I1129 07:24:19.860639 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"380dda58-7342-44d2-a0e1-b4ac78363de8","Type":"ContainerStarted","Data":"181fc39cd70871f246bdaf6de12207b8c6be2df3c77a75ce06594dcef5830bf6"} Nov 29 07:24:19 crc kubenswrapper[4731]: I1129 07:24:19.863847 4731 generic.go:334] "Generic (PLEG): 
container finished" podID="c3b26ece-7b12-4cd4-befd-4d42fa5b55fc" containerID="529e0b90c0fada668dc625124eef4fc57d0022569bf7290f05734a7891b1e0f6" exitCode=0 Nov 29 07:24:19 crc kubenswrapper[4731]: I1129 07:24:19.863956 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"c3b26ece-7b12-4cd4-befd-4d42fa5b55fc","Type":"ContainerDied","Data":"529e0b90c0fada668dc625124eef4fc57d0022569bf7290f05734a7891b1e0f6"} Nov 29 07:24:19 crc kubenswrapper[4731]: I1129 07:24:19.871604 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"f78aa165-6d13-419f-b13b-8382a111ded8","Type":"ContainerStarted","Data":"c422c14f300d0f2747c908d49aea8dc7bbb994bc49f05da36bc818005fd70cdb"} Nov 29 07:24:19 crc kubenswrapper[4731]: I1129 07:24:19.874318 4731 generic.go:334] "Generic (PLEG): container finished" podID="92c6293a-e74b-4e0b-8384-39dd11b2057c" containerID="901fac4de3cc88b4d6875531e4caf14f6fabdced2af80b5f54f0f942014ebe7a" exitCode=0 Nov 29 07:24:19 crc kubenswrapper[4731]: I1129 07:24:19.874528 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-rp9mj" event={"ID":"92c6293a-e74b-4e0b-8384-39dd11b2057c","Type":"ContainerDied","Data":"901fac4de3cc88b4d6875531e4caf14f6fabdced2af80b5f54f0f942014ebe7a"} Nov 29 07:24:19 crc kubenswrapper[4731]: I1129 07:24:19.878764 4731 generic.go:334] "Generic (PLEG): container finished" podID="ce9f78b3-187a-4988-a15f-fd5b81e07ab4" containerID="358648189b517c978c2d3312c660b1818c5c8c075452df1ec6659fa0828ae562" exitCode=0 Nov 29 07:24:19 crc kubenswrapper[4731]: I1129 07:24:19.878915 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"ce9f78b3-187a-4988-a15f-fd5b81e07ab4","Type":"ContainerDied","Data":"358648189b517c978c2d3312c660b1818c5c8c075452df1ec6659fa0828ae562"} Nov 29 07:24:19 crc kubenswrapper[4731]: I1129 07:24:19.892941 4731 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=16.759410528 podStartE2EDuration="28.892912095s" podCreationTimestamp="2025-11-29 07:23:51 +0000 UTC" firstStartedPulling="2025-11-29 07:24:06.820378952 +0000 UTC m=+1085.710740055" lastFinishedPulling="2025-11-29 07:24:18.953880519 +0000 UTC m=+1097.844241622" observedRunningTime="2025-11-29 07:24:19.885273682 +0000 UTC m=+1098.775634785" watchObservedRunningTime="2025-11-29 07:24:19.892912095 +0000 UTC m=+1098.783273198" Nov 29 07:24:20 crc kubenswrapper[4731]: I1129 07:24:20.005552 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=11.698473582 podStartE2EDuration="25.004861145s" podCreationTimestamp="2025-11-29 07:23:55 +0000 UTC" firstStartedPulling="2025-11-29 07:24:05.662923007 +0000 UTC m=+1084.553284110" lastFinishedPulling="2025-11-29 07:24:18.96931057 +0000 UTC m=+1097.859671673" observedRunningTime="2025-11-29 07:24:19.995254564 +0000 UTC m=+1098.885615687" watchObservedRunningTime="2025-11-29 07:24:20.004861145 +0000 UTC m=+1098.895222248" Nov 29 07:24:20 crc kubenswrapper[4731]: I1129 07:24:20.347797 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Nov 29 07:24:20 crc kubenswrapper[4731]: I1129 07:24:20.402350 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Nov 29 07:24:20 crc kubenswrapper[4731]: I1129 07:24:20.892186 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Nov 29 07:24:20 crc kubenswrapper[4731]: I1129 07:24:20.939928 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Nov 29 07:24:20 crc kubenswrapper[4731]: I1129 07:24:20.977843 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Nov 29 07:24:21 crc 
kubenswrapper[4731]: I1129 07:24:21.027899 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Nov 29 07:24:21 crc kubenswrapper[4731]: I1129 07:24:21.191652 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-rp9mj"] Nov 29 07:24:21 crc kubenswrapper[4731]: I1129 07:24:21.222090 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-hln8c"] Nov 29 07:24:21 crc kubenswrapper[4731]: I1129 07:24:21.223608 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-hln8c" Nov 29 07:24:21 crc kubenswrapper[4731]: W1129 07:24:21.229192 4731 reflector.go:561] object-"openstack"/"ovsdbserver-nb": failed to list *v1.ConfigMap: configmaps "ovsdbserver-nb" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Nov 29 07:24:21 crc kubenswrapper[4731]: E1129 07:24:21.229248 4731 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ovsdbserver-nb\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"ovsdbserver-nb\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 29 07:24:21 crc kubenswrapper[4731]: I1129 07:24:21.248731 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-ltcz5"] Nov 29 07:24:21 crc kubenswrapper[4731]: I1129 07:24:21.249922 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-ltcz5" Nov 29 07:24:21 crc kubenswrapper[4731]: I1129 07:24:21.254731 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Nov 29 07:24:21 crc kubenswrapper[4731]: I1129 07:24:21.269804 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0408017d-3976-46ab-b78d-4116a24c33d9-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-hln8c\" (UID: \"0408017d-3976-46ab-b78d-4116a24c33d9\") " pod="openstack/dnsmasq-dns-5bf47b49b7-hln8c" Nov 29 07:24:21 crc kubenswrapper[4731]: I1129 07:24:21.269871 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47g2z\" (UniqueName: \"kubernetes.io/projected/0408017d-3976-46ab-b78d-4116a24c33d9-kube-api-access-47g2z\") pod \"dnsmasq-dns-5bf47b49b7-hln8c\" (UID: \"0408017d-3976-46ab-b78d-4116a24c33d9\") " pod="openstack/dnsmasq-dns-5bf47b49b7-hln8c" Nov 29 07:24:21 crc kubenswrapper[4731]: I1129 07:24:21.269921 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0408017d-3976-46ab-b78d-4116a24c33d9-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-hln8c\" (UID: \"0408017d-3976-46ab-b78d-4116a24c33d9\") " pod="openstack/dnsmasq-dns-5bf47b49b7-hln8c" Nov 29 07:24:21 crc kubenswrapper[4731]: I1129 07:24:21.269949 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0408017d-3976-46ab-b78d-4116a24c33d9-config\") pod \"dnsmasq-dns-5bf47b49b7-hln8c\" (UID: \"0408017d-3976-46ab-b78d-4116a24c33d9\") " pod="openstack/dnsmasq-dns-5bf47b49b7-hln8c" Nov 29 07:24:21 crc kubenswrapper[4731]: I1129 07:24:21.279745 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/dnsmasq-dns-5bf47b49b7-hln8c"] Nov 29 07:24:21 crc kubenswrapper[4731]: I1129 07:24:21.284206 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-ltcz5"] Nov 29 07:24:21 crc kubenswrapper[4731]: I1129 07:24:21.371556 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0408017d-3976-46ab-b78d-4116a24c33d9-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-hln8c\" (UID: \"0408017d-3976-46ab-b78d-4116a24c33d9\") " pod="openstack/dnsmasq-dns-5bf47b49b7-hln8c" Nov 29 07:24:21 crc kubenswrapper[4731]: I1129 07:24:21.371674 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bs9b8\" (UniqueName: \"kubernetes.io/projected/875e6ad8-8a38-4943-8a25-47761929dfc7-kube-api-access-bs9b8\") pod \"ovn-controller-metrics-ltcz5\" (UID: \"875e6ad8-8a38-4943-8a25-47761929dfc7\") " pod="openstack/ovn-controller-metrics-ltcz5" Nov 29 07:24:21 crc kubenswrapper[4731]: I1129 07:24:21.371728 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47g2z\" (UniqueName: \"kubernetes.io/projected/0408017d-3976-46ab-b78d-4116a24c33d9-kube-api-access-47g2z\") pod \"dnsmasq-dns-5bf47b49b7-hln8c\" (UID: \"0408017d-3976-46ab-b78d-4116a24c33d9\") " pod="openstack/dnsmasq-dns-5bf47b49b7-hln8c" Nov 29 07:24:21 crc kubenswrapper[4731]: I1129 07:24:21.371770 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/875e6ad8-8a38-4943-8a25-47761929dfc7-ovn-rundir\") pod \"ovn-controller-metrics-ltcz5\" (UID: \"875e6ad8-8a38-4943-8a25-47761929dfc7\") " pod="openstack/ovn-controller-metrics-ltcz5" Nov 29 07:24:21 crc kubenswrapper[4731]: I1129 07:24:21.371817 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/0408017d-3976-46ab-b78d-4116a24c33d9-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-hln8c\" (UID: \"0408017d-3976-46ab-b78d-4116a24c33d9\") " pod="openstack/dnsmasq-dns-5bf47b49b7-hln8c" Nov 29 07:24:21 crc kubenswrapper[4731]: I1129 07:24:21.372747 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0408017d-3976-46ab-b78d-4116a24c33d9-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-hln8c\" (UID: \"0408017d-3976-46ab-b78d-4116a24c33d9\") " pod="openstack/dnsmasq-dns-5bf47b49b7-hln8c" Nov 29 07:24:21 crc kubenswrapper[4731]: I1129 07:24:21.373026 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/875e6ad8-8a38-4943-8a25-47761929dfc7-config\") pod \"ovn-controller-metrics-ltcz5\" (UID: \"875e6ad8-8a38-4943-8a25-47761929dfc7\") " pod="openstack/ovn-controller-metrics-ltcz5" Nov 29 07:24:21 crc kubenswrapper[4731]: I1129 07:24:21.373169 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0408017d-3976-46ab-b78d-4116a24c33d9-config\") pod \"dnsmasq-dns-5bf47b49b7-hln8c\" (UID: \"0408017d-3976-46ab-b78d-4116a24c33d9\") " pod="openstack/dnsmasq-dns-5bf47b49b7-hln8c" Nov 29 07:24:21 crc kubenswrapper[4731]: I1129 07:24:21.373342 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/875e6ad8-8a38-4943-8a25-47761929dfc7-combined-ca-bundle\") pod \"ovn-controller-metrics-ltcz5\" (UID: \"875e6ad8-8a38-4943-8a25-47761929dfc7\") " pod="openstack/ovn-controller-metrics-ltcz5" Nov 29 07:24:21 crc kubenswrapper[4731]: I1129 07:24:21.373409 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: 
\"kubernetes.io/host-path/875e6ad8-8a38-4943-8a25-47761929dfc7-ovs-rundir\") pod \"ovn-controller-metrics-ltcz5\" (UID: \"875e6ad8-8a38-4943-8a25-47761929dfc7\") " pod="openstack/ovn-controller-metrics-ltcz5" Nov 29 07:24:21 crc kubenswrapper[4731]: I1129 07:24:21.373578 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/875e6ad8-8a38-4943-8a25-47761929dfc7-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-ltcz5\" (UID: \"875e6ad8-8a38-4943-8a25-47761929dfc7\") " pod="openstack/ovn-controller-metrics-ltcz5" Nov 29 07:24:21 crc kubenswrapper[4731]: I1129 07:24:21.374780 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0408017d-3976-46ab-b78d-4116a24c33d9-config\") pod \"dnsmasq-dns-5bf47b49b7-hln8c\" (UID: \"0408017d-3976-46ab-b78d-4116a24c33d9\") " pod="openstack/dnsmasq-dns-5bf47b49b7-hln8c" Nov 29 07:24:21 crc kubenswrapper[4731]: I1129 07:24:21.402842 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47g2z\" (UniqueName: \"kubernetes.io/projected/0408017d-3976-46ab-b78d-4116a24c33d9-kube-api-access-47g2z\") pod \"dnsmasq-dns-5bf47b49b7-hln8c\" (UID: \"0408017d-3976-46ab-b78d-4116a24c33d9\") " pod="openstack/dnsmasq-dns-5bf47b49b7-hln8c" Nov 29 07:24:21 crc kubenswrapper[4731]: I1129 07:24:21.475343 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/875e6ad8-8a38-4943-8a25-47761929dfc7-combined-ca-bundle\") pod \"ovn-controller-metrics-ltcz5\" (UID: \"875e6ad8-8a38-4943-8a25-47761929dfc7\") " pod="openstack/ovn-controller-metrics-ltcz5" Nov 29 07:24:21 crc kubenswrapper[4731]: I1129 07:24:21.475417 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: 
\"kubernetes.io/host-path/875e6ad8-8a38-4943-8a25-47761929dfc7-ovs-rundir\") pod \"ovn-controller-metrics-ltcz5\" (UID: \"875e6ad8-8a38-4943-8a25-47761929dfc7\") " pod="openstack/ovn-controller-metrics-ltcz5" Nov 29 07:24:21 crc kubenswrapper[4731]: I1129 07:24:21.475492 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/875e6ad8-8a38-4943-8a25-47761929dfc7-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-ltcz5\" (UID: \"875e6ad8-8a38-4943-8a25-47761929dfc7\") " pod="openstack/ovn-controller-metrics-ltcz5" Nov 29 07:24:21 crc kubenswrapper[4731]: I1129 07:24:21.475552 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bs9b8\" (UniqueName: \"kubernetes.io/projected/875e6ad8-8a38-4943-8a25-47761929dfc7-kube-api-access-bs9b8\") pod \"ovn-controller-metrics-ltcz5\" (UID: \"875e6ad8-8a38-4943-8a25-47761929dfc7\") " pod="openstack/ovn-controller-metrics-ltcz5" Nov 29 07:24:21 crc kubenswrapper[4731]: I1129 07:24:21.475666 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/875e6ad8-8a38-4943-8a25-47761929dfc7-ovn-rundir\") pod \"ovn-controller-metrics-ltcz5\" (UID: \"875e6ad8-8a38-4943-8a25-47761929dfc7\") " pod="openstack/ovn-controller-metrics-ltcz5" Nov 29 07:24:21 crc kubenswrapper[4731]: I1129 07:24:21.475720 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/875e6ad8-8a38-4943-8a25-47761929dfc7-config\") pod \"ovn-controller-metrics-ltcz5\" (UID: \"875e6ad8-8a38-4943-8a25-47761929dfc7\") " pod="openstack/ovn-controller-metrics-ltcz5" Nov 29 07:24:21 crc kubenswrapper[4731]: I1129 07:24:21.476897 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/875e6ad8-8a38-4943-8a25-47761929dfc7-config\") pod \"ovn-controller-metrics-ltcz5\" (UID: \"875e6ad8-8a38-4943-8a25-47761929dfc7\") " pod="openstack/ovn-controller-metrics-ltcz5" Nov 29 07:24:21 crc kubenswrapper[4731]: I1129 07:24:21.478373 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/875e6ad8-8a38-4943-8a25-47761929dfc7-ovs-rundir\") pod \"ovn-controller-metrics-ltcz5\" (UID: \"875e6ad8-8a38-4943-8a25-47761929dfc7\") " pod="openstack/ovn-controller-metrics-ltcz5" Nov 29 07:24:21 crc kubenswrapper[4731]: I1129 07:24:21.478466 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/875e6ad8-8a38-4943-8a25-47761929dfc7-ovn-rundir\") pod \"ovn-controller-metrics-ltcz5\" (UID: \"875e6ad8-8a38-4943-8a25-47761929dfc7\") " pod="openstack/ovn-controller-metrics-ltcz5" Nov 29 07:24:21 crc kubenswrapper[4731]: I1129 07:24:21.481349 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/875e6ad8-8a38-4943-8a25-47761929dfc7-combined-ca-bundle\") pod \"ovn-controller-metrics-ltcz5\" (UID: \"875e6ad8-8a38-4943-8a25-47761929dfc7\") " pod="openstack/ovn-controller-metrics-ltcz5" Nov 29 07:24:21 crc kubenswrapper[4731]: I1129 07:24:21.481719 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/875e6ad8-8a38-4943-8a25-47761929dfc7-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-ltcz5\" (UID: \"875e6ad8-8a38-4943-8a25-47761929dfc7\") " pod="openstack/ovn-controller-metrics-ltcz5" Nov 29 07:24:21 crc kubenswrapper[4731]: I1129 07:24:21.500287 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bs9b8\" (UniqueName: \"kubernetes.io/projected/875e6ad8-8a38-4943-8a25-47761929dfc7-kube-api-access-bs9b8\") pod 
\"ovn-controller-metrics-ltcz5\" (UID: \"875e6ad8-8a38-4943-8a25-47761929dfc7\") " pod="openstack/ovn-controller-metrics-ltcz5" Nov 29 07:24:21 crc kubenswrapper[4731]: I1129 07:24:21.509644 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-8zwzm"] Nov 29 07:24:21 crc kubenswrapper[4731]: I1129 07:24:21.552545 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8554648995-t7xmj"] Nov 29 07:24:21 crc kubenswrapper[4731]: I1129 07:24:21.554263 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-t7xmj" Nov 29 07:24:21 crc kubenswrapper[4731]: I1129 07:24:21.557022 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Nov 29 07:24:21 crc kubenswrapper[4731]: I1129 07:24:21.566816 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-t7xmj"] Nov 29 07:24:21 crc kubenswrapper[4731]: I1129 07:24:21.570959 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-ltcz5" Nov 29 07:24:21 crc kubenswrapper[4731]: I1129 07:24:21.678630 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6be24ad5-a68a-41a3-8622-6b9bc69d4943-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-t7xmj\" (UID: \"6be24ad5-a68a-41a3-8622-6b9bc69d4943\") " pod="openstack/dnsmasq-dns-8554648995-t7xmj" Nov 29 07:24:21 crc kubenswrapper[4731]: I1129 07:24:21.678693 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6be24ad5-a68a-41a3-8622-6b9bc69d4943-dns-svc\") pod \"dnsmasq-dns-8554648995-t7xmj\" (UID: \"6be24ad5-a68a-41a3-8622-6b9bc69d4943\") " pod="openstack/dnsmasq-dns-8554648995-t7xmj" Nov 29 07:24:21 crc kubenswrapper[4731]: I1129 07:24:21.678748 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6be24ad5-a68a-41a3-8622-6b9bc69d4943-config\") pod \"dnsmasq-dns-8554648995-t7xmj\" (UID: \"6be24ad5-a68a-41a3-8622-6b9bc69d4943\") " pod="openstack/dnsmasq-dns-8554648995-t7xmj" Nov 29 07:24:21 crc kubenswrapper[4731]: I1129 07:24:21.678812 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phcnj\" (UniqueName: \"kubernetes.io/projected/6be24ad5-a68a-41a3-8622-6b9bc69d4943-kube-api-access-phcnj\") pod \"dnsmasq-dns-8554648995-t7xmj\" (UID: \"6be24ad5-a68a-41a3-8622-6b9bc69d4943\") " pod="openstack/dnsmasq-dns-8554648995-t7xmj" Nov 29 07:24:21 crc kubenswrapper[4731]: I1129 07:24:21.678883 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6be24ad5-a68a-41a3-8622-6b9bc69d4943-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-t7xmj\" 
(UID: \"6be24ad5-a68a-41a3-8622-6b9bc69d4943\") " pod="openstack/dnsmasq-dns-8554648995-t7xmj" Nov 29 07:24:21 crc kubenswrapper[4731]: I1129 07:24:21.780952 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phcnj\" (UniqueName: \"kubernetes.io/projected/6be24ad5-a68a-41a3-8622-6b9bc69d4943-kube-api-access-phcnj\") pod \"dnsmasq-dns-8554648995-t7xmj\" (UID: \"6be24ad5-a68a-41a3-8622-6b9bc69d4943\") " pod="openstack/dnsmasq-dns-8554648995-t7xmj" Nov 29 07:24:21 crc kubenswrapper[4731]: I1129 07:24:21.781462 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6be24ad5-a68a-41a3-8622-6b9bc69d4943-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-t7xmj\" (UID: \"6be24ad5-a68a-41a3-8622-6b9bc69d4943\") " pod="openstack/dnsmasq-dns-8554648995-t7xmj" Nov 29 07:24:21 crc kubenswrapper[4731]: I1129 07:24:21.781529 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6be24ad5-a68a-41a3-8622-6b9bc69d4943-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-t7xmj\" (UID: \"6be24ad5-a68a-41a3-8622-6b9bc69d4943\") " pod="openstack/dnsmasq-dns-8554648995-t7xmj" Nov 29 07:24:21 crc kubenswrapper[4731]: I1129 07:24:21.781558 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6be24ad5-a68a-41a3-8622-6b9bc69d4943-dns-svc\") pod \"dnsmasq-dns-8554648995-t7xmj\" (UID: \"6be24ad5-a68a-41a3-8622-6b9bc69d4943\") " pod="openstack/dnsmasq-dns-8554648995-t7xmj" Nov 29 07:24:21 crc kubenswrapper[4731]: I1129 07:24:21.781620 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6be24ad5-a68a-41a3-8622-6b9bc69d4943-config\") pod \"dnsmasq-dns-8554648995-t7xmj\" (UID: \"6be24ad5-a68a-41a3-8622-6b9bc69d4943\") " 
pod="openstack/dnsmasq-dns-8554648995-t7xmj" Nov 29 07:24:21 crc kubenswrapper[4731]: I1129 07:24:21.782991 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6be24ad5-a68a-41a3-8622-6b9bc69d4943-config\") pod \"dnsmasq-dns-8554648995-t7xmj\" (UID: \"6be24ad5-a68a-41a3-8622-6b9bc69d4943\") " pod="openstack/dnsmasq-dns-8554648995-t7xmj" Nov 29 07:24:21 crc kubenswrapper[4731]: I1129 07:24:21.783018 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6be24ad5-a68a-41a3-8622-6b9bc69d4943-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-t7xmj\" (UID: \"6be24ad5-a68a-41a3-8622-6b9bc69d4943\") " pod="openstack/dnsmasq-dns-8554648995-t7xmj" Nov 29 07:24:21 crc kubenswrapper[4731]: I1129 07:24:21.783066 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6be24ad5-a68a-41a3-8622-6b9bc69d4943-dns-svc\") pod \"dnsmasq-dns-8554648995-t7xmj\" (UID: \"6be24ad5-a68a-41a3-8622-6b9bc69d4943\") " pod="openstack/dnsmasq-dns-8554648995-t7xmj" Nov 29 07:24:21 crc kubenswrapper[4731]: I1129 07:24:21.803146 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phcnj\" (UniqueName: \"kubernetes.io/projected/6be24ad5-a68a-41a3-8622-6b9bc69d4943-kube-api-access-phcnj\") pod \"dnsmasq-dns-8554648995-t7xmj\" (UID: \"6be24ad5-a68a-41a3-8622-6b9bc69d4943\") " pod="openstack/dnsmasq-dns-8554648995-t7xmj" Nov 29 07:24:21 crc kubenswrapper[4731]: I1129 07:24:21.859353 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-ltcz5"] Nov 29 07:24:21 crc kubenswrapper[4731]: W1129 07:24:21.863375 4731 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod875e6ad8_8a38_4943_8a25_47761929dfc7.slice/crio-74b7cd4a7b2181e7878941d17753a440be0f4375ad74320c911dfc0910beb6f9 WatchSource:0}: Error finding container 74b7cd4a7b2181e7878941d17753a440be0f4375ad74320c911dfc0910beb6f9: Status 404 returned error can't find the container with id 74b7cd4a7b2181e7878941d17753a440be0f4375ad74320c911dfc0910beb6f9 Nov 29 07:24:21 crc kubenswrapper[4731]: I1129 07:24:21.901285 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-ltcz5" event={"ID":"875e6ad8-8a38-4943-8a25-47761929dfc7","Type":"ContainerStarted","Data":"74b7cd4a7b2181e7878941d17753a440be0f4375ad74320c911dfc0910beb6f9"} Nov 29 07:24:21 crc kubenswrapper[4731]: I1129 07:24:21.901516 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Nov 29 07:24:21 crc kubenswrapper[4731]: I1129 07:24:21.955210 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Nov 29 07:24:22 crc kubenswrapper[4731]: I1129 07:24:22.115968 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Nov 29 07:24:22 crc kubenswrapper[4731]: I1129 07:24:22.118221 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0"
Nov 29 07:24:22 crc kubenswrapper[4731]: I1129 07:24:22.122181 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts"
Nov 29 07:24:22 crc kubenswrapper[4731]: I1129 07:24:22.122339 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-sn7wd"
Nov 29 07:24:22 crc kubenswrapper[4731]: I1129 07:24:22.122616 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs"
Nov 29 07:24:22 crc kubenswrapper[4731]: I1129 07:24:22.125378 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config"
Nov 29 07:24:22 crc kubenswrapper[4731]: I1129 07:24:22.139198 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Nov 29 07:24:22 crc kubenswrapper[4731]: I1129 07:24:22.195192 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/850b98c3-0079-4cae-a69a-1c0ee903ba53-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"850b98c3-0079-4cae-a69a-1c0ee903ba53\") " pod="openstack/ovn-northd-0"
Nov 29 07:24:22 crc kubenswrapper[4731]: I1129 07:24:22.195252 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/850b98c3-0079-4cae-a69a-1c0ee903ba53-scripts\") pod \"ovn-northd-0\" (UID: \"850b98c3-0079-4cae-a69a-1c0ee903ba53\") " pod="openstack/ovn-northd-0"
Nov 29 07:24:22 crc kubenswrapper[4731]: I1129 07:24:22.195295 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j95cz\" (UniqueName: \"kubernetes.io/projected/850b98c3-0079-4cae-a69a-1c0ee903ba53-kube-api-access-j95cz\") pod \"ovn-northd-0\" (UID: \"850b98c3-0079-4cae-a69a-1c0ee903ba53\") " pod="openstack/ovn-northd-0"
Nov 29 07:24:22 crc kubenswrapper[4731]: I1129 07:24:22.195399 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/850b98c3-0079-4cae-a69a-1c0ee903ba53-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"850b98c3-0079-4cae-a69a-1c0ee903ba53\") " pod="openstack/ovn-northd-0"
Nov 29 07:24:22 crc kubenswrapper[4731]: I1129 07:24:22.195519 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/850b98c3-0079-4cae-a69a-1c0ee903ba53-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"850b98c3-0079-4cae-a69a-1c0ee903ba53\") " pod="openstack/ovn-northd-0"
Nov 29 07:24:22 crc kubenswrapper[4731]: I1129 07:24:22.195553 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/850b98c3-0079-4cae-a69a-1c0ee903ba53-config\") pod \"ovn-northd-0\" (UID: \"850b98c3-0079-4cae-a69a-1c0ee903ba53\") " pod="openstack/ovn-northd-0"
Nov 29 07:24:22 crc kubenswrapper[4731]: I1129 07:24:22.195683 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/850b98c3-0079-4cae-a69a-1c0ee903ba53-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"850b98c3-0079-4cae-a69a-1c0ee903ba53\") " pod="openstack/ovn-northd-0"
Nov 29 07:24:22 crc kubenswrapper[4731]: I1129 07:24:22.297734 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/850b98c3-0079-4cae-a69a-1c0ee903ba53-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"850b98c3-0079-4cae-a69a-1c0ee903ba53\") " pod="openstack/ovn-northd-0"
Nov 29 07:24:22 crc kubenswrapper[4731]: I1129 07:24:22.297834 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/850b98c3-0079-4cae-a69a-1c0ee903ba53-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"850b98c3-0079-4cae-a69a-1c0ee903ba53\") " pod="openstack/ovn-northd-0"
Nov 29 07:24:22 crc kubenswrapper[4731]: I1129 07:24:22.297863 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/850b98c3-0079-4cae-a69a-1c0ee903ba53-config\") pod \"ovn-northd-0\" (UID: \"850b98c3-0079-4cae-a69a-1c0ee903ba53\") " pod="openstack/ovn-northd-0"
Nov 29 07:24:22 crc kubenswrapper[4731]: I1129 07:24:22.297918 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/850b98c3-0079-4cae-a69a-1c0ee903ba53-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"850b98c3-0079-4cae-a69a-1c0ee903ba53\") " pod="openstack/ovn-northd-0"
Nov 29 07:24:22 crc kubenswrapper[4731]: I1129 07:24:22.298027 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/850b98c3-0079-4cae-a69a-1c0ee903ba53-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"850b98c3-0079-4cae-a69a-1c0ee903ba53\") " pod="openstack/ovn-northd-0"
Nov 29 07:24:22 crc kubenswrapper[4731]: I1129 07:24:22.298067 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/850b98c3-0079-4cae-a69a-1c0ee903ba53-scripts\") pod \"ovn-northd-0\" (UID: \"850b98c3-0079-4cae-a69a-1c0ee903ba53\") " pod="openstack/ovn-northd-0"
Nov 29 07:24:22 crc kubenswrapper[4731]: I1129 07:24:22.298104 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j95cz\" (UniqueName: \"kubernetes.io/projected/850b98c3-0079-4cae-a69a-1c0ee903ba53-kube-api-access-j95cz\") pod \"ovn-northd-0\" (UID: \"850b98c3-0079-4cae-a69a-1c0ee903ba53\") " pod="openstack/ovn-northd-0"
Nov 29 07:24:22 crc kubenswrapper[4731]: I1129 07:24:22.298914 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/850b98c3-0079-4cae-a69a-1c0ee903ba53-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"850b98c3-0079-4cae-a69a-1c0ee903ba53\") " pod="openstack/ovn-northd-0"
Nov 29 07:24:22 crc kubenswrapper[4731]: I1129 07:24:22.299443 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/850b98c3-0079-4cae-a69a-1c0ee903ba53-config\") pod \"ovn-northd-0\" (UID: \"850b98c3-0079-4cae-a69a-1c0ee903ba53\") " pod="openstack/ovn-northd-0"
Nov 29 07:24:22 crc kubenswrapper[4731]: I1129 07:24:22.301653 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/850b98c3-0079-4cae-a69a-1c0ee903ba53-scripts\") pod \"ovn-northd-0\" (UID: \"850b98c3-0079-4cae-a69a-1c0ee903ba53\") " pod="openstack/ovn-northd-0"
Nov 29 07:24:22 crc kubenswrapper[4731]: I1129 07:24:22.303671 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/850b98c3-0079-4cae-a69a-1c0ee903ba53-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"850b98c3-0079-4cae-a69a-1c0ee903ba53\") " pod="openstack/ovn-northd-0"
Nov 29 07:24:22 crc kubenswrapper[4731]: I1129 07:24:22.304027 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/850b98c3-0079-4cae-a69a-1c0ee903ba53-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"850b98c3-0079-4cae-a69a-1c0ee903ba53\") " pod="openstack/ovn-northd-0"
Nov 29 07:24:22 crc kubenswrapper[4731]: I1129 07:24:22.304069 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/850b98c3-0079-4cae-a69a-1c0ee903ba53-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"850b98c3-0079-4cae-a69a-1c0ee903ba53\") " pod="openstack/ovn-northd-0"
Nov 29 07:24:22 crc kubenswrapper[4731]: I1129 07:24:22.317128 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j95cz\" (UniqueName: \"kubernetes.io/projected/850b98c3-0079-4cae-a69a-1c0ee903ba53-kube-api-access-j95cz\") pod \"ovn-northd-0\" (UID: \"850b98c3-0079-4cae-a69a-1c0ee903ba53\") " pod="openstack/ovn-northd-0"
Nov 29 07:24:22 crc kubenswrapper[4731]: E1129 07:24:22.371958 4731 configmap.go:193] Couldn't get configMap openstack/ovsdbserver-nb: failed to sync configmap cache: timed out waiting for the condition
Nov 29 07:24:22 crc kubenswrapper[4731]: E1129 07:24:22.372047 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0408017d-3976-46ab-b78d-4116a24c33d9-ovsdbserver-nb podName:0408017d-3976-46ab-b78d-4116a24c33d9 nodeName:}" failed. No retries permitted until 2025-11-29 07:24:22.872025951 +0000 UTC m=+1101.762387064 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "ovsdbserver-nb" (UniqueName: "kubernetes.io/configmap/0408017d-3976-46ab-b78d-4116a24c33d9-ovsdbserver-nb") pod "dnsmasq-dns-5bf47b49b7-hln8c" (UID: "0408017d-3976-46ab-b78d-4116a24c33d9") : failed to sync configmap cache: timed out waiting for the condition
Nov 29 07:24:22 crc kubenswrapper[4731]: I1129 07:24:22.438376 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Nov 29 07:24:22 crc kubenswrapper[4731]: I1129 07:24:22.513840 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0"
Nov 29 07:24:22 crc kubenswrapper[4731]: I1129 07:24:22.672688 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb"
Nov 29 07:24:22 crc kubenswrapper[4731]: I1129 07:24:22.682522 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6be24ad5-a68a-41a3-8622-6b9bc69d4943-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-t7xmj\" (UID: \"6be24ad5-a68a-41a3-8622-6b9bc69d4943\") " pod="openstack/dnsmasq-dns-8554648995-t7xmj"
Nov 29 07:24:22 crc kubenswrapper[4731]: I1129 07:24:22.775021 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-t7xmj"
Nov 29 07:24:22 crc kubenswrapper[4731]: I1129 07:24:22.911725 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0408017d-3976-46ab-b78d-4116a24c33d9-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-hln8c\" (UID: \"0408017d-3976-46ab-b78d-4116a24c33d9\") " pod="openstack/dnsmasq-dns-5bf47b49b7-hln8c"
Nov 29 07:24:22 crc kubenswrapper[4731]: I1129 07:24:22.915229 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0408017d-3976-46ab-b78d-4116a24c33d9-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-hln8c\" (UID: \"0408017d-3976-46ab-b78d-4116a24c33d9\") " pod="openstack/dnsmasq-dns-5bf47b49b7-hln8c"
Nov 29 07:24:22 crc kubenswrapper[4731]: W1129 07:24:22.941588 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod850b98c3_0079_4cae_a69a_1c0ee903ba53.slice/crio-314624dda0cb93712a4aef4060af62059d4e886f1a3e3bcce18242d00e471778 WatchSource:0}: Error finding container 314624dda0cb93712a4aef4060af62059d4e886f1a3e3bcce18242d00e471778: Status 404 returned error can't find the container with id 314624dda0cb93712a4aef4060af62059d4e886f1a3e3bcce18242d00e471778
Nov 29 07:24:22 crc kubenswrapper[4731]: I1129 07:24:22.944794 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Nov 29 07:24:23 crc kubenswrapper[4731]: I1129 07:24:23.045200 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-hln8c"
Nov 29 07:24:23 crc kubenswrapper[4731]: I1129 07:24:23.248328 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-t7xmj"]
Nov 29 07:24:23 crc kubenswrapper[4731]: I1129 07:24:23.298812 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-hln8c"]
Nov 29 07:24:23 crc kubenswrapper[4731]: W1129 07:24:23.304424 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0408017d_3976_46ab_b78d_4116a24c33d9.slice/crio-60556cf32b76f189d9cded06230ecbfc2e8895885bf87d091c896017a3715c1d WatchSource:0}: Error finding container 60556cf32b76f189d9cded06230ecbfc2e8895885bf87d091c896017a3715c1d: Status 404 returned error can't find the container with id 60556cf32b76f189d9cded06230ecbfc2e8895885bf87d091c896017a3715c1d
Nov 29 07:24:23 crc kubenswrapper[4731]: I1129 07:24:23.929871 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"850b98c3-0079-4cae-a69a-1c0ee903ba53","Type":"ContainerStarted","Data":"314624dda0cb93712a4aef4060af62059d4e886f1a3e3bcce18242d00e471778"}
Nov 29 07:24:23 crc kubenswrapper[4731]: I1129 07:24:23.933856 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-t7xmj" event={"ID":"6be24ad5-a68a-41a3-8622-6b9bc69d4943","Type":"ContainerStarted","Data":"4ecaa6423d0ef58ffe1926366a56bca026f7dd021e3822c94853c32394133b08"}
Nov 29 07:24:23 crc kubenswrapper[4731]: I1129 07:24:23.935327 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-hln8c" event={"ID":"0408017d-3976-46ab-b78d-4116a24c33d9","Type":"ContainerStarted","Data":"60556cf32b76f189d9cded06230ecbfc2e8895885bf87d091c896017a3715c1d"}
Nov 29 07:24:24 crc kubenswrapper[4731]: I1129 07:24:24.948325 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"ce9f78b3-187a-4988-a15f-fd5b81e07ab4","Type":"ContainerStarted","Data":"ac591f7d27a4f438c2278a9879dd1e911461cbb8204362345728ad1bffb89a99"}
Nov 29 07:24:24 crc kubenswrapper[4731]: I1129 07:24:24.952876 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"c3b26ece-7b12-4cd4-befd-4d42fa5b55fc","Type":"ContainerStarted","Data":"1152611d8cc7ab704dc5d4884b0da7e7a993b4ca5059e461e6c739ebe7711bec"}
Nov 29 07:24:24 crc kubenswrapper[4731]: I1129 07:24:24.980774 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=30.798322267 podStartE2EDuration="39.980748482s" podCreationTimestamp="2025-11-29 07:23:45 +0000 UTC" firstStartedPulling="2025-11-29 07:24:05.604556873 +0000 UTC m=+1084.494917976" lastFinishedPulling="2025-11-29 07:24:14.786983088 +0000 UTC m=+1093.677344191" observedRunningTime="2025-11-29 07:24:24.978355092 +0000 UTC m=+1103.868716195" watchObservedRunningTime="2025-11-29 07:24:24.980748482 +0000 UTC m=+1103.871109585"
Nov 29 07:24:25 crc kubenswrapper[4731]: I1129 07:24:25.008185 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=31.393663468 podStartE2EDuration="41.008159453s" podCreationTimestamp="2025-11-29 07:23:44 +0000 UTC" firstStartedPulling="2025-11-29 07:24:05.601505884 +0000 UTC m=+1084.491866987" lastFinishedPulling="2025-11-29 07:24:15.216001859 +0000 UTC m=+1094.106362972" observedRunningTime="2025-11-29 07:24:25.000290783 +0000 UTC m=+1103.890651896" watchObservedRunningTime="2025-11-29 07:24:25.008159453 +0000 UTC m=+1103.898520556"
Nov 29 07:24:25 crc kubenswrapper[4731]: I1129 07:24:25.930073 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0"
Nov 29 07:24:25 crc kubenswrapper[4731]: I1129 07:24:25.930633 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0"
Nov 29 07:24:26 crc kubenswrapper[4731]: I1129 07:24:26.974037 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-rp9mj" podUID="92c6293a-e74b-4e0b-8384-39dd11b2057c" containerName="dnsmasq-dns" containerID="cri-o://034354b0da264b21b697727e7d3da15e4f724edc273a4e6990488e5aad332a33" gracePeriod=10
Nov 29 07:24:26 crc kubenswrapper[4731]: I1129 07:24:26.974063 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-rp9mj" event={"ID":"92c6293a-e74b-4e0b-8384-39dd11b2057c","Type":"ContainerStarted","Data":"034354b0da264b21b697727e7d3da15e4f724edc273a4e6990488e5aad332a33"}
Nov 29 07:24:26 crc kubenswrapper[4731]: I1129 07:24:26.974659 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-rp9mj"
Nov 29 07:24:26 crc kubenswrapper[4731]: I1129 07:24:26.984066 4731 generic.go:334] "Generic (PLEG): container finished" podID="6be24ad5-a68a-41a3-8622-6b9bc69d4943" containerID="d2427ceb8ecf7c1a5539267040bd6fbbbcb78aedfc22efcecd34edaf0cded315" exitCode=0
Nov 29 07:24:26 crc kubenswrapper[4731]: I1129 07:24:26.984183 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-t7xmj" event={"ID":"6be24ad5-a68a-41a3-8622-6b9bc69d4943","Type":"ContainerDied","Data":"d2427ceb8ecf7c1a5539267040bd6fbbbcb78aedfc22efcecd34edaf0cded315"}
Nov 29 07:24:26 crc kubenswrapper[4731]: I1129 07:24:26.988520 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-ltcz5" event={"ID":"875e6ad8-8a38-4943-8a25-47761929dfc7","Type":"ContainerStarted","Data":"0cc0209a0d39dd5f32f82d72ef3a035aa90e3a929f16634c1f0783ee03301fc2"}
Nov 29 07:24:27 crc kubenswrapper[4731]: I1129 07:24:27.002453 4731 generic.go:334] "Generic (PLEG): container finished" podID="0408017d-3976-46ab-b78d-4116a24c33d9" containerID="48af8b622508730929cbf71e820536e37aa7c6a84deb5384766a3206720b0a05" exitCode=0
Nov 29 07:24:27 crc kubenswrapper[4731]: I1129 07:24:27.004025 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-hln8c" event={"ID":"0408017d-3976-46ab-b78d-4116a24c33d9","Type":"ContainerDied","Data":"48af8b622508730929cbf71e820536e37aa7c6a84deb5384766a3206720b0a05"}
Nov 29 07:24:27 crc kubenswrapper[4731]: I1129 07:24:27.007432 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-rp9mj" podStartSLOduration=9.875927318 podStartE2EDuration="45.007407454s" podCreationTimestamp="2025-11-29 07:23:42 +0000 UTC" firstStartedPulling="2025-11-29 07:23:43.819479168 +0000 UTC m=+1062.709840271" lastFinishedPulling="2025-11-29 07:24:18.950959304 +0000 UTC m=+1097.841320407" observedRunningTime="2025-11-29 07:24:26.999586516 +0000 UTC m=+1105.889947629" watchObservedRunningTime="2025-11-29 07:24:27.007407454 +0000 UTC m=+1105.897768557"
Nov 29 07:24:27 crc kubenswrapper[4731]: I1129 07:24:27.037616 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-ltcz5" podStartSLOduration=6.037536224 podStartE2EDuration="6.037536224s" podCreationTimestamp="2025-11-29 07:24:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:24:27.01890518 +0000 UTC m=+1105.909266283" watchObservedRunningTime="2025-11-29 07:24:27.037536224 +0000 UTC m=+1105.927897347"
Nov 29 07:24:27 crc kubenswrapper[4731]: I1129 07:24:27.203278 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Nov 29 07:24:27 crc kubenswrapper[4731]: I1129 07:24:27.203331 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0"
Nov 29 07:24:27 crc kubenswrapper[4731]: I1129 07:24:27.688548 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-rp9mj"
Nov 29 07:24:27 crc kubenswrapper[4731]: I1129 07:24:27.822475 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/92c6293a-e74b-4e0b-8384-39dd11b2057c-dns-svc\") pod \"92c6293a-e74b-4e0b-8384-39dd11b2057c\" (UID: \"92c6293a-e74b-4e0b-8384-39dd11b2057c\") "
Nov 29 07:24:27 crc kubenswrapper[4731]: I1129 07:24:27.822693 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92c6293a-e74b-4e0b-8384-39dd11b2057c-config\") pod \"92c6293a-e74b-4e0b-8384-39dd11b2057c\" (UID: \"92c6293a-e74b-4e0b-8384-39dd11b2057c\") "
Nov 29 07:24:27 crc kubenswrapper[4731]: I1129 07:24:27.822737 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tscrb\" (UniqueName: \"kubernetes.io/projected/92c6293a-e74b-4e0b-8384-39dd11b2057c-kube-api-access-tscrb\") pod \"92c6293a-e74b-4e0b-8384-39dd11b2057c\" (UID: \"92c6293a-e74b-4e0b-8384-39dd11b2057c\") "
Nov 29 07:24:27 crc kubenswrapper[4731]: I1129 07:24:27.838516 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92c6293a-e74b-4e0b-8384-39dd11b2057c-kube-api-access-tscrb" (OuterVolumeSpecName: "kube-api-access-tscrb") pod "92c6293a-e74b-4e0b-8384-39dd11b2057c" (UID: "92c6293a-e74b-4e0b-8384-39dd11b2057c"). InnerVolumeSpecName "kube-api-access-tscrb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 29 07:24:27 crc kubenswrapper[4731]: I1129 07:24:27.905943 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92c6293a-e74b-4e0b-8384-39dd11b2057c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "92c6293a-e74b-4e0b-8384-39dd11b2057c" (UID: "92c6293a-e74b-4e0b-8384-39dd11b2057c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 29 07:24:27 crc kubenswrapper[4731]: I1129 07:24:27.926418 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tscrb\" (UniqueName: \"kubernetes.io/projected/92c6293a-e74b-4e0b-8384-39dd11b2057c-kube-api-access-tscrb\") on node \"crc\" DevicePath \"\""
Nov 29 07:24:27 crc kubenswrapper[4731]: I1129 07:24:27.926457 4731 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/92c6293a-e74b-4e0b-8384-39dd11b2057c-dns-svc\") on node \"crc\" DevicePath \"\""
Nov 29 07:24:27 crc kubenswrapper[4731]: I1129 07:24:27.940899 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92c6293a-e74b-4e0b-8384-39dd11b2057c-config" (OuterVolumeSpecName: "config") pod "92c6293a-e74b-4e0b-8384-39dd11b2057c" (UID: "92c6293a-e74b-4e0b-8384-39dd11b2057c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 29 07:24:28 crc kubenswrapper[4731]: I1129 07:24:28.028618 4731 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92c6293a-e74b-4e0b-8384-39dd11b2057c-config\") on node \"crc\" DevicePath \"\""
Nov 29 07:24:28 crc kubenswrapper[4731]: I1129 07:24:28.030792 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-t7xmj" event={"ID":"6be24ad5-a68a-41a3-8622-6b9bc69d4943","Type":"ContainerStarted","Data":"a1f7519aa5fffd1e311b44a94c840d3bd3832965c5d81114fd134864f149131b"}
Nov 29 07:24:28 crc kubenswrapper[4731]: I1129 07:24:28.030862 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8554648995-t7xmj"
Nov 29 07:24:28 crc kubenswrapper[4731]: I1129 07:24:28.040545 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-hln8c" event={"ID":"0408017d-3976-46ab-b78d-4116a24c33d9","Type":"ContainerStarted","Data":"6f41043abc4c0caa2e77844ef784242a43dd636983a6c865a600fac9dc4ff248"}
Nov 29 07:24:28 crc kubenswrapper[4731]: I1129 07:24:28.042838 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5bf47b49b7-hln8c"
Nov 29 07:24:28 crc kubenswrapper[4731]: I1129 07:24:28.045093 4731 generic.go:334] "Generic (PLEG): container finished" podID="b658f01f-5c13-4bfd-932f-496533c6cec4" containerID="a1f9f273c6dd6bebefb873572524732e92412d7d9cdf9270a28f6d6dee824c81" exitCode=0
Nov 29 07:24:28 crc kubenswrapper[4731]: I1129 07:24:28.045258 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-8zwzm" event={"ID":"b658f01f-5c13-4bfd-932f-496533c6cec4","Type":"ContainerDied","Data":"a1f9f273c6dd6bebefb873572524732e92412d7d9cdf9270a28f6d6dee824c81"}
Nov 29 07:24:28 crc kubenswrapper[4731]: I1129 07:24:28.053496 4731 generic.go:334] "Generic (PLEG): container finished" podID="92c6293a-e74b-4e0b-8384-39dd11b2057c" containerID="034354b0da264b21b697727e7d3da15e4f724edc273a4e6990488e5aad332a33" exitCode=0
Nov 29 07:24:28 crc kubenswrapper[4731]: I1129 07:24:28.055292 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-rp9mj"
Nov 29 07:24:28 crc kubenswrapper[4731]: I1129 07:24:28.055975 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-rp9mj" event={"ID":"92c6293a-e74b-4e0b-8384-39dd11b2057c","Type":"ContainerDied","Data":"034354b0da264b21b697727e7d3da15e4f724edc273a4e6990488e5aad332a33"}
Nov 29 07:24:28 crc kubenswrapper[4731]: I1129 07:24:28.056038 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-rp9mj" event={"ID":"92c6293a-e74b-4e0b-8384-39dd11b2057c","Type":"ContainerDied","Data":"86bc619eefd9cd6f4482f4b822d01f0528a95df86575971054b78870fe142731"}
Nov 29 07:24:28 crc kubenswrapper[4731]: I1129 07:24:28.056063 4731 scope.go:117] "RemoveContainer" containerID="034354b0da264b21b697727e7d3da15e4f724edc273a4e6990488e5aad332a33"
Nov 29 07:24:28 crc kubenswrapper[4731]: I1129 07:24:28.071821 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8554648995-t7xmj" podStartSLOduration=7.07178778 podStartE2EDuration="7.07178778s" podCreationTimestamp="2025-11-29 07:24:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:24:28.06491379 +0000 UTC m=+1106.955274923" watchObservedRunningTime="2025-11-29 07:24:28.07178778 +0000 UTC m=+1106.962148883"
Nov 29 07:24:28 crc kubenswrapper[4731]: I1129 07:24:28.103680 4731 scope.go:117] "RemoveContainer" containerID="901fac4de3cc88b4d6875531e4caf14f6fabdced2af80b5f54f0f942014ebe7a"
Nov 29 07:24:28 crc kubenswrapper[4731]: I1129 07:24:28.138963 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5bf47b49b7-hln8c" podStartSLOduration=7.138924911 podStartE2EDuration="7.138924911s" podCreationTimestamp="2025-11-29 07:24:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:24:28.115864508 +0000 UTC m=+1107.006225611" watchObservedRunningTime="2025-11-29 07:24:28.138924911 +0000 UTC m=+1107.029286014"
Nov 29 07:24:28 crc kubenswrapper[4731]: I1129 07:24:28.143236 4731 scope.go:117] "RemoveContainer" containerID="034354b0da264b21b697727e7d3da15e4f724edc273a4e6990488e5aad332a33"
Nov 29 07:24:28 crc kubenswrapper[4731]: E1129 07:24:28.153337 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"034354b0da264b21b697727e7d3da15e4f724edc273a4e6990488e5aad332a33\": container with ID starting with 034354b0da264b21b697727e7d3da15e4f724edc273a4e6990488e5aad332a33 not found: ID does not exist" containerID="034354b0da264b21b697727e7d3da15e4f724edc273a4e6990488e5aad332a33"
Nov 29 07:24:28 crc kubenswrapper[4731]: I1129 07:24:28.153413 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"034354b0da264b21b697727e7d3da15e4f724edc273a4e6990488e5aad332a33"} err="failed to get container status \"034354b0da264b21b697727e7d3da15e4f724edc273a4e6990488e5aad332a33\": rpc error: code = NotFound desc = could not find container \"034354b0da264b21b697727e7d3da15e4f724edc273a4e6990488e5aad332a33\": container with ID starting with 034354b0da264b21b697727e7d3da15e4f724edc273a4e6990488e5aad332a33 not found: ID does not exist"
Nov 29 07:24:28 crc kubenswrapper[4731]: I1129 07:24:28.153446 4731 scope.go:117] "RemoveContainer" containerID="901fac4de3cc88b4d6875531e4caf14f6fabdced2af80b5f54f0f942014ebe7a"
Nov 29 07:24:28 crc kubenswrapper[4731]: E1129 07:24:28.154044 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"901fac4de3cc88b4d6875531e4caf14f6fabdced2af80b5f54f0f942014ebe7a\": container with ID starting with 901fac4de3cc88b4d6875531e4caf14f6fabdced2af80b5f54f0f942014ebe7a not found: ID does not exist" containerID="901fac4de3cc88b4d6875531e4caf14f6fabdced2af80b5f54f0f942014ebe7a"
Nov 29 07:24:28 crc kubenswrapper[4731]: I1129 07:24:28.154074 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"901fac4de3cc88b4d6875531e4caf14f6fabdced2af80b5f54f0f942014ebe7a"} err="failed to get container status \"901fac4de3cc88b4d6875531e4caf14f6fabdced2af80b5f54f0f942014ebe7a\": rpc error: code = NotFound desc = could not find container \"901fac4de3cc88b4d6875531e4caf14f6fabdced2af80b5f54f0f942014ebe7a\": container with ID starting with 901fac4de3cc88b4d6875531e4caf14f6fabdced2af80b5f54f0f942014ebe7a not found: ID does not exist"
Nov 29 07:24:28 crc kubenswrapper[4731]: I1129 07:24:28.162057 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-rp9mj"]
Nov 29 07:24:28 crc kubenswrapper[4731]: I1129 07:24:28.169829 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-rp9mj"]
Nov 29 07:24:28 crc kubenswrapper[4731]: I1129 07:24:28.411753 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-8zwzm"
Nov 29 07:24:28 crc kubenswrapper[4731]: I1129 07:24:28.537905 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b658f01f-5c13-4bfd-932f-496533c6cec4-config\") pod \"b658f01f-5c13-4bfd-932f-496533c6cec4\" (UID: \"b658f01f-5c13-4bfd-932f-496533c6cec4\") "
Nov 29 07:24:28 crc kubenswrapper[4731]: I1129 07:24:28.538548 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfgxr\" (UniqueName: \"kubernetes.io/projected/b658f01f-5c13-4bfd-932f-496533c6cec4-kube-api-access-cfgxr\") pod \"b658f01f-5c13-4bfd-932f-496533c6cec4\" (UID: \"b658f01f-5c13-4bfd-932f-496533c6cec4\") "
Nov 29 07:24:28 crc kubenswrapper[4731]: I1129 07:24:28.538752 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b658f01f-5c13-4bfd-932f-496533c6cec4-dns-svc\") pod \"b658f01f-5c13-4bfd-932f-496533c6cec4\" (UID: \"b658f01f-5c13-4bfd-932f-496533c6cec4\") "
Nov 29 07:24:28 crc kubenswrapper[4731]: I1129 07:24:28.544611 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b658f01f-5c13-4bfd-932f-496533c6cec4-kube-api-access-cfgxr" (OuterVolumeSpecName: "kube-api-access-cfgxr") pod "b658f01f-5c13-4bfd-932f-496533c6cec4" (UID: "b658f01f-5c13-4bfd-932f-496533c6cec4"). InnerVolumeSpecName "kube-api-access-cfgxr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 29 07:24:28 crc kubenswrapper[4731]: I1129 07:24:28.561310 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b658f01f-5c13-4bfd-932f-496533c6cec4-config" (OuterVolumeSpecName: "config") pod "b658f01f-5c13-4bfd-932f-496533c6cec4" (UID: "b658f01f-5c13-4bfd-932f-496533c6cec4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 29 07:24:28 crc kubenswrapper[4731]: I1129 07:24:28.563630 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b658f01f-5c13-4bfd-932f-496533c6cec4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b658f01f-5c13-4bfd-932f-496533c6cec4" (UID: "b658f01f-5c13-4bfd-932f-496533c6cec4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 29 07:24:28 crc kubenswrapper[4731]: I1129 07:24:28.640952 4731 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b658f01f-5c13-4bfd-932f-496533c6cec4-dns-svc\") on node \"crc\" DevicePath \"\""
Nov 29 07:24:28 crc kubenswrapper[4731]: I1129 07:24:28.641240 4731 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b658f01f-5c13-4bfd-932f-496533c6cec4-config\") on node \"crc\" DevicePath \"\""
Nov 29 07:24:28 crc kubenswrapper[4731]: I1129 07:24:28.641311 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfgxr\" (UniqueName: \"kubernetes.io/projected/b658f01f-5c13-4bfd-932f-496533c6cec4-kube-api-access-cfgxr\") on node \"crc\" DevicePath \"\""
Nov 29 07:24:29 crc kubenswrapper[4731]: I1129 07:24:29.062751 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-8zwzm"
Nov 29 07:24:29 crc kubenswrapper[4731]: I1129 07:24:29.062800 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-8zwzm" event={"ID":"b658f01f-5c13-4bfd-932f-496533c6cec4","Type":"ContainerDied","Data":"26ca1bd037acb90509fb3a273dd92c3f256e85f9f860a3fa9e052000d5ba51c2"}
Nov 29 07:24:29 crc kubenswrapper[4731]: I1129 07:24:29.062900 4731 scope.go:117] "RemoveContainer" containerID="a1f9f273c6dd6bebefb873572524732e92412d7d9cdf9270a28f6d6dee824c81"
Nov 29 07:24:29 crc kubenswrapper[4731]: I1129 07:24:29.070731 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"850b98c3-0079-4cae-a69a-1c0ee903ba53","Type":"ContainerStarted","Data":"e246d4e520381d83166c5625298fb51235a27a89b6175be219b6899fa4cc7eee"}
Nov 29 07:24:29 crc kubenswrapper[4731]: I1129 07:24:29.071099 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"850b98c3-0079-4cae-a69a-1c0ee903ba53","Type":"ContainerStarted","Data":"b223e72cd626cec82a4cc53b0cf30f5de4bc9a438a26c1c2c5aee2b8449ff44f"}
Nov 29 07:24:29 crc kubenswrapper[4731]: I1129 07:24:29.071741 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0"
Nov 29 07:24:29 crc kubenswrapper[4731]: I1129 07:24:29.098356 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.300572937 podStartE2EDuration="7.098338253s" podCreationTimestamp="2025-11-29 07:24:22 +0000 UTC" firstStartedPulling="2025-11-29 07:24:22.945012056 +0000 UTC m=+1101.835373159" lastFinishedPulling="2025-11-29 07:24:27.742777372 +0000 UTC m=+1106.633138475" observedRunningTime="2025-11-29 07:24:29.089164435 +0000 UTC m=+1107.979525538" watchObservedRunningTime="2025-11-29 07:24:29.098338253 +0000 UTC m=+1107.988699356"
Nov 29 07:24:29 crc kubenswrapper[4731]: I1129 07:24:29.162968 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-8zwzm"]
Nov 29 07:24:29 crc kubenswrapper[4731]: I1129 07:24:29.176782 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-8zwzm"]
Nov 29 07:24:29 crc kubenswrapper[4731]: I1129 07:24:29.279041 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Nov 29 07:24:29 crc kubenswrapper[4731]: I1129 07:24:29.429596 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-hln8c"]
Nov 29 07:24:29 crc kubenswrapper[4731]: I1129 07:24:29.469274 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-r66tt"]
Nov 29 07:24:29 crc kubenswrapper[4731]: E1129 07:24:29.469599 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92c6293a-e74b-4e0b-8384-39dd11b2057c" containerName="dnsmasq-dns"
Nov 29 07:24:29 crc kubenswrapper[4731]: I1129 07:24:29.469633 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="92c6293a-e74b-4e0b-8384-39dd11b2057c" containerName="dnsmasq-dns"
Nov 29 07:24:29 crc kubenswrapper[4731]: E1129 07:24:29.469658 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b658f01f-5c13-4bfd-932f-496533c6cec4" containerName="init"
Nov 29 07:24:29 crc kubenswrapper[4731]: I1129 07:24:29.469665 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="b658f01f-5c13-4bfd-932f-496533c6cec4" containerName="init"
Nov 29 07:24:29 crc kubenswrapper[4731]: E1129 07:24:29.469684 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92c6293a-e74b-4e0b-8384-39dd11b2057c" containerName="init"
Nov 29 07:24:29 crc kubenswrapper[4731]: I1129 07:24:29.469690 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="92c6293a-e74b-4e0b-8384-39dd11b2057c" containerName="init"
Nov 29 07:24:29 crc kubenswrapper[4731]: I1129 07:24:29.469844 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="92c6293a-e74b-4e0b-8384-39dd11b2057c" containerName="dnsmasq-dns"
Nov 29 07:24:29 crc kubenswrapper[4731]: I1129 07:24:29.469866 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="b658f01f-5c13-4bfd-932f-496533c6cec4" containerName="init"
Nov 29 07:24:29 crc kubenswrapper[4731]: I1129 07:24:29.470683 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-r66tt"
Nov 29 07:24:29 crc kubenswrapper[4731]: I1129 07:24:29.481479 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-r66tt"]
Nov 29 07:24:29 crc kubenswrapper[4731]: I1129 07:24:29.562482 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4716973e-a6ae-4baf-bb88-5436489c5451-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-r66tt\" (UID: \"4716973e-a6ae-4baf-bb88-5436489c5451\") " pod="openstack/dnsmasq-dns-b8fbc5445-r66tt"
Nov 29 07:24:29 crc kubenswrapper[4731]: I1129 07:24:29.562621 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4716973e-a6ae-4baf-bb88-5436489c5451-config\") pod \"dnsmasq-dns-b8fbc5445-r66tt\" (UID: \"4716973e-a6ae-4baf-bb88-5436489c5451\") " pod="openstack/dnsmasq-dns-b8fbc5445-r66tt"
Nov 29 07:24:29 crc kubenswrapper[4731]: I1129 07:24:29.562697 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4716973e-a6ae-4baf-bb88-5436489c5451-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-r66tt\" (UID: \"4716973e-a6ae-4baf-bb88-5436489c5451\") " pod="openstack/dnsmasq-dns-b8fbc5445-r66tt"
Nov 29 07:24:29 crc kubenswrapper[4731]: I1129 07:24:29.562748 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started
for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4716973e-a6ae-4baf-bb88-5436489c5451-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-r66tt\" (UID: \"4716973e-a6ae-4baf-bb88-5436489c5451\") " pod="openstack/dnsmasq-dns-b8fbc5445-r66tt" Nov 29 07:24:29 crc kubenswrapper[4731]: I1129 07:24:29.562766 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9d7n\" (UniqueName: \"kubernetes.io/projected/4716973e-a6ae-4baf-bb88-5436489c5451-kube-api-access-d9d7n\") pod \"dnsmasq-dns-b8fbc5445-r66tt\" (UID: \"4716973e-a6ae-4baf-bb88-5436489c5451\") " pod="openstack/dnsmasq-dns-b8fbc5445-r66tt" Nov 29 07:24:29 crc kubenswrapper[4731]: I1129 07:24:29.649011 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Nov 29 07:24:29 crc kubenswrapper[4731]: I1129 07:24:29.664700 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4716973e-a6ae-4baf-bb88-5436489c5451-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-r66tt\" (UID: \"4716973e-a6ae-4baf-bb88-5436489c5451\") " pod="openstack/dnsmasq-dns-b8fbc5445-r66tt" Nov 29 07:24:29 crc kubenswrapper[4731]: I1129 07:24:29.664792 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4716973e-a6ae-4baf-bb88-5436489c5451-config\") pod \"dnsmasq-dns-b8fbc5445-r66tt\" (UID: \"4716973e-a6ae-4baf-bb88-5436489c5451\") " pod="openstack/dnsmasq-dns-b8fbc5445-r66tt" Nov 29 07:24:29 crc kubenswrapper[4731]: I1129 07:24:29.664872 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4716973e-a6ae-4baf-bb88-5436489c5451-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-r66tt\" (UID: \"4716973e-a6ae-4baf-bb88-5436489c5451\") " pod="openstack/dnsmasq-dns-b8fbc5445-r66tt" Nov 29 07:24:29 
crc kubenswrapper[4731]: I1129 07:24:29.664929 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4716973e-a6ae-4baf-bb88-5436489c5451-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-r66tt\" (UID: \"4716973e-a6ae-4baf-bb88-5436489c5451\") " pod="openstack/dnsmasq-dns-b8fbc5445-r66tt" Nov 29 07:24:29 crc kubenswrapper[4731]: I1129 07:24:29.664958 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9d7n\" (UniqueName: \"kubernetes.io/projected/4716973e-a6ae-4baf-bb88-5436489c5451-kube-api-access-d9d7n\") pod \"dnsmasq-dns-b8fbc5445-r66tt\" (UID: \"4716973e-a6ae-4baf-bb88-5436489c5451\") " pod="openstack/dnsmasq-dns-b8fbc5445-r66tt" Nov 29 07:24:29 crc kubenswrapper[4731]: I1129 07:24:29.666015 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4716973e-a6ae-4baf-bb88-5436489c5451-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-r66tt\" (UID: \"4716973e-a6ae-4baf-bb88-5436489c5451\") " pod="openstack/dnsmasq-dns-b8fbc5445-r66tt" Nov 29 07:24:29 crc kubenswrapper[4731]: I1129 07:24:29.666172 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4716973e-a6ae-4baf-bb88-5436489c5451-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-r66tt\" (UID: \"4716973e-a6ae-4baf-bb88-5436489c5451\") " pod="openstack/dnsmasq-dns-b8fbc5445-r66tt" Nov 29 07:24:29 crc kubenswrapper[4731]: I1129 07:24:29.666316 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4716973e-a6ae-4baf-bb88-5436489c5451-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-r66tt\" (UID: \"4716973e-a6ae-4baf-bb88-5436489c5451\") " pod="openstack/dnsmasq-dns-b8fbc5445-r66tt" Nov 29 07:24:29 crc kubenswrapper[4731]: I1129 07:24:29.668876 4731 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4716973e-a6ae-4baf-bb88-5436489c5451-config\") pod \"dnsmasq-dns-b8fbc5445-r66tt\" (UID: \"4716973e-a6ae-4baf-bb88-5436489c5451\") " pod="openstack/dnsmasq-dns-b8fbc5445-r66tt" Nov 29 07:24:29 crc kubenswrapper[4731]: I1129 07:24:29.689467 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9d7n\" (UniqueName: \"kubernetes.io/projected/4716973e-a6ae-4baf-bb88-5436489c5451-kube-api-access-d9d7n\") pod \"dnsmasq-dns-b8fbc5445-r66tt\" (UID: \"4716973e-a6ae-4baf-bb88-5436489c5451\") " pod="openstack/dnsmasq-dns-b8fbc5445-r66tt" Nov 29 07:24:29 crc kubenswrapper[4731]: I1129 07:24:29.756615 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Nov 29 07:24:29 crc kubenswrapper[4731]: I1129 07:24:29.794766 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-r66tt" Nov 29 07:24:29 crc kubenswrapper[4731]: I1129 07:24:29.820598 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92c6293a-e74b-4e0b-8384-39dd11b2057c" path="/var/lib/kubelet/pods/92c6293a-e74b-4e0b-8384-39dd11b2057c/volumes" Nov 29 07:24:29 crc kubenswrapper[4731]: I1129 07:24:29.821255 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b658f01f-5c13-4bfd-932f-496533c6cec4" path="/var/lib/kubelet/pods/b658f01f-5c13-4bfd-932f-496533c6cec4/volumes" Nov 29 07:24:30 crc kubenswrapper[4731]: I1129 07:24:30.081672 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5bf47b49b7-hln8c" podUID="0408017d-3976-46ab-b78d-4116a24c33d9" containerName="dnsmasq-dns" containerID="cri-o://6f41043abc4c0caa2e77844ef784242a43dd636983a6c865a600fac9dc4ff248" gracePeriod=10 Nov 29 07:24:30 crc kubenswrapper[4731]: I1129 07:24:30.266580 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/dnsmasq-dns-b8fbc5445-r66tt"] Nov 29 07:24:30 crc kubenswrapper[4731]: W1129 07:24:30.300173 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4716973e_a6ae_4baf_bb88_5436489c5451.slice/crio-6a150fe7a3025d826430f403b714dd71c126ea975556bc6fbc4a56fe1a902aef WatchSource:0}: Error finding container 6a150fe7a3025d826430f403b714dd71c126ea975556bc6fbc4a56fe1a902aef: Status 404 returned error can't find the container with id 6a150fe7a3025d826430f403b714dd71c126ea975556bc6fbc4a56fe1a902aef Nov 29 07:24:30 crc kubenswrapper[4731]: I1129 07:24:30.507769 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Nov 29 07:24:30 crc kubenswrapper[4731]: I1129 07:24:30.515115 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Nov 29 07:24:30 crc kubenswrapper[4731]: I1129 07:24:30.517371 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Nov 29 07:24:30 crc kubenswrapper[4731]: I1129 07:24:30.518204 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-fx9ph" Nov 29 07:24:30 crc kubenswrapper[4731]: I1129 07:24:30.518485 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Nov 29 07:24:30 crc kubenswrapper[4731]: I1129 07:24:30.518663 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Nov 29 07:24:30 crc kubenswrapper[4731]: I1129 07:24:30.534407 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Nov 29 07:24:30 crc kubenswrapper[4731]: I1129 07:24:30.540125 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-hln8c" Nov 29 07:24:30 crc kubenswrapper[4731]: I1129 07:24:30.594171 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0408017d-3976-46ab-b78d-4116a24c33d9-config\") pod \"0408017d-3976-46ab-b78d-4116a24c33d9\" (UID: \"0408017d-3976-46ab-b78d-4116a24c33d9\") " Nov 29 07:24:30 crc kubenswrapper[4731]: I1129 07:24:30.594238 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0408017d-3976-46ab-b78d-4116a24c33d9-ovsdbserver-nb\") pod \"0408017d-3976-46ab-b78d-4116a24c33d9\" (UID: \"0408017d-3976-46ab-b78d-4116a24c33d9\") " Nov 29 07:24:30 crc kubenswrapper[4731]: I1129 07:24:30.594419 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0408017d-3976-46ab-b78d-4116a24c33d9-dns-svc\") pod \"0408017d-3976-46ab-b78d-4116a24c33d9\" (UID: \"0408017d-3976-46ab-b78d-4116a24c33d9\") " Nov 29 07:24:30 crc kubenswrapper[4731]: I1129 07:24:30.594519 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-47g2z\" (UniqueName: \"kubernetes.io/projected/0408017d-3976-46ab-b78d-4116a24c33d9-kube-api-access-47g2z\") pod \"0408017d-3976-46ab-b78d-4116a24c33d9\" (UID: \"0408017d-3976-46ab-b78d-4116a24c33d9\") " Nov 29 07:24:30 crc kubenswrapper[4731]: I1129 07:24:30.594878 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"swift-storage-0\" (UID: \"739c0608-5471-42a6-b062-4355cd1894a0\") " pod="openstack/swift-storage-0" Nov 29 07:24:30 crc kubenswrapper[4731]: I1129 07:24:30.594948 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" 
(UniqueName: \"kubernetes.io/empty-dir/739c0608-5471-42a6-b062-4355cd1894a0-lock\") pod \"swift-storage-0\" (UID: \"739c0608-5471-42a6-b062-4355cd1894a0\") " pod="openstack/swift-storage-0" Nov 29 07:24:30 crc kubenswrapper[4731]: I1129 07:24:30.594994 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/739c0608-5471-42a6-b062-4355cd1894a0-etc-swift\") pod \"swift-storage-0\" (UID: \"739c0608-5471-42a6-b062-4355cd1894a0\") " pod="openstack/swift-storage-0" Nov 29 07:24:30 crc kubenswrapper[4731]: I1129 07:24:30.595054 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmsbw\" (UniqueName: \"kubernetes.io/projected/739c0608-5471-42a6-b062-4355cd1894a0-kube-api-access-pmsbw\") pod \"swift-storage-0\" (UID: \"739c0608-5471-42a6-b062-4355cd1894a0\") " pod="openstack/swift-storage-0" Nov 29 07:24:30 crc kubenswrapper[4731]: I1129 07:24:30.595149 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/739c0608-5471-42a6-b062-4355cd1894a0-cache\") pod \"swift-storage-0\" (UID: \"739c0608-5471-42a6-b062-4355cd1894a0\") " pod="openstack/swift-storage-0" Nov 29 07:24:30 crc kubenswrapper[4731]: I1129 07:24:30.601120 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0408017d-3976-46ab-b78d-4116a24c33d9-kube-api-access-47g2z" (OuterVolumeSpecName: "kube-api-access-47g2z") pod "0408017d-3976-46ab-b78d-4116a24c33d9" (UID: "0408017d-3976-46ab-b78d-4116a24c33d9"). InnerVolumeSpecName "kube-api-access-47g2z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:24:30 crc kubenswrapper[4731]: I1129 07:24:30.665047 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0408017d-3976-46ab-b78d-4116a24c33d9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0408017d-3976-46ab-b78d-4116a24c33d9" (UID: "0408017d-3976-46ab-b78d-4116a24c33d9"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:24:30 crc kubenswrapper[4731]: I1129 07:24:30.668585 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0408017d-3976-46ab-b78d-4116a24c33d9-config" (OuterVolumeSpecName: "config") pod "0408017d-3976-46ab-b78d-4116a24c33d9" (UID: "0408017d-3976-46ab-b78d-4116a24c33d9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:24:30 crc kubenswrapper[4731]: I1129 07:24:30.673065 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0408017d-3976-46ab-b78d-4116a24c33d9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0408017d-3976-46ab-b78d-4116a24c33d9" (UID: "0408017d-3976-46ab-b78d-4116a24c33d9"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:24:30 crc kubenswrapper[4731]: I1129 07:24:30.697449 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/739c0608-5471-42a6-b062-4355cd1894a0-cache\") pod \"swift-storage-0\" (UID: \"739c0608-5471-42a6-b062-4355cd1894a0\") " pod="openstack/swift-storage-0" Nov 29 07:24:30 crc kubenswrapper[4731]: I1129 07:24:30.697633 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"swift-storage-0\" (UID: \"739c0608-5471-42a6-b062-4355cd1894a0\") " pod="openstack/swift-storage-0" Nov 29 07:24:30 crc kubenswrapper[4731]: I1129 07:24:30.697670 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/739c0608-5471-42a6-b062-4355cd1894a0-lock\") pod \"swift-storage-0\" (UID: \"739c0608-5471-42a6-b062-4355cd1894a0\") " pod="openstack/swift-storage-0" Nov 29 07:24:30 crc kubenswrapper[4731]: I1129 07:24:30.697695 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/739c0608-5471-42a6-b062-4355cd1894a0-etc-swift\") pod \"swift-storage-0\" (UID: \"739c0608-5471-42a6-b062-4355cd1894a0\") " pod="openstack/swift-storage-0" Nov 29 07:24:30 crc kubenswrapper[4731]: I1129 07:24:30.697738 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmsbw\" (UniqueName: \"kubernetes.io/projected/739c0608-5471-42a6-b062-4355cd1894a0-kube-api-access-pmsbw\") pod \"swift-storage-0\" (UID: \"739c0608-5471-42a6-b062-4355cd1894a0\") " pod="openstack/swift-storage-0" Nov 29 07:24:30 crc kubenswrapper[4731]: I1129 07:24:30.697831 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-47g2z\" (UniqueName: 
\"kubernetes.io/projected/0408017d-3976-46ab-b78d-4116a24c33d9-kube-api-access-47g2z\") on node \"crc\" DevicePath \"\"" Nov 29 07:24:30 crc kubenswrapper[4731]: I1129 07:24:30.697847 4731 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0408017d-3976-46ab-b78d-4116a24c33d9-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:24:30 crc kubenswrapper[4731]: I1129 07:24:30.697856 4731 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0408017d-3976-46ab-b78d-4116a24c33d9-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 29 07:24:30 crc kubenswrapper[4731]: I1129 07:24:30.697866 4731 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0408017d-3976-46ab-b78d-4116a24c33d9-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 29 07:24:30 crc kubenswrapper[4731]: E1129 07:24:30.697962 4731 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 29 07:24:30 crc kubenswrapper[4731]: E1129 07:24:30.698006 4731 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 29 07:24:30 crc kubenswrapper[4731]: E1129 07:24:30.698075 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/739c0608-5471-42a6-b062-4355cd1894a0-etc-swift podName:739c0608-5471-42a6-b062-4355cd1894a0 nodeName:}" failed. No retries permitted until 2025-11-29 07:24:31.198055055 +0000 UTC m=+1110.088416158 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/739c0608-5471-42a6-b062-4355cd1894a0-etc-swift") pod "swift-storage-0" (UID: "739c0608-5471-42a6-b062-4355cd1894a0") : configmap "swift-ring-files" not found Nov 29 07:24:30 crc kubenswrapper[4731]: I1129 07:24:30.698209 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/739c0608-5471-42a6-b062-4355cd1894a0-lock\") pod \"swift-storage-0\" (UID: \"739c0608-5471-42a6-b062-4355cd1894a0\") " pod="openstack/swift-storage-0" Nov 29 07:24:30 crc kubenswrapper[4731]: I1129 07:24:30.698277 4731 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"swift-storage-0\" (UID: \"739c0608-5471-42a6-b062-4355cd1894a0\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/swift-storage-0" Nov 29 07:24:30 crc kubenswrapper[4731]: I1129 07:24:30.698823 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/739c0608-5471-42a6-b062-4355cd1894a0-cache\") pod \"swift-storage-0\" (UID: \"739c0608-5471-42a6-b062-4355cd1894a0\") " pod="openstack/swift-storage-0" Nov 29 07:24:30 crc kubenswrapper[4731]: I1129 07:24:30.721540 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmsbw\" (UniqueName: \"kubernetes.io/projected/739c0608-5471-42a6-b062-4355cd1894a0-kube-api-access-pmsbw\") pod \"swift-storage-0\" (UID: \"739c0608-5471-42a6-b062-4355cd1894a0\") " pod="openstack/swift-storage-0" Nov 29 07:24:30 crc kubenswrapper[4731]: I1129 07:24:30.728770 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"swift-storage-0\" (UID: \"739c0608-5471-42a6-b062-4355cd1894a0\") " 
pod="openstack/swift-storage-0" Nov 29 07:24:31 crc kubenswrapper[4731]: I1129 07:24:31.094528 4731 generic.go:334] "Generic (PLEG): container finished" podID="0408017d-3976-46ab-b78d-4116a24c33d9" containerID="6f41043abc4c0caa2e77844ef784242a43dd636983a6c865a600fac9dc4ff248" exitCode=0 Nov 29 07:24:31 crc kubenswrapper[4731]: I1129 07:24:31.094630 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-hln8c" event={"ID":"0408017d-3976-46ab-b78d-4116a24c33d9","Type":"ContainerDied","Data":"6f41043abc4c0caa2e77844ef784242a43dd636983a6c865a600fac9dc4ff248"} Nov 29 07:24:31 crc kubenswrapper[4731]: I1129 07:24:31.094680 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-hln8c" Nov 29 07:24:31 crc kubenswrapper[4731]: I1129 07:24:31.094729 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-hln8c" event={"ID":"0408017d-3976-46ab-b78d-4116a24c33d9","Type":"ContainerDied","Data":"60556cf32b76f189d9cded06230ecbfc2e8895885bf87d091c896017a3715c1d"} Nov 29 07:24:31 crc kubenswrapper[4731]: I1129 07:24:31.094763 4731 scope.go:117] "RemoveContainer" containerID="6f41043abc4c0caa2e77844ef784242a43dd636983a6c865a600fac9dc4ff248" Nov 29 07:24:31 crc kubenswrapper[4731]: I1129 07:24:31.096955 4731 generic.go:334] "Generic (PLEG): container finished" podID="4716973e-a6ae-4baf-bb88-5436489c5451" containerID="56c28385753d1299ecf570bbcc74b81d67f913e69533a9aa253cf45f3aed2895" exitCode=0 Nov 29 07:24:31 crc kubenswrapper[4731]: I1129 07:24:31.097003 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-r66tt" event={"ID":"4716973e-a6ae-4baf-bb88-5436489c5451","Type":"ContainerDied","Data":"56c28385753d1299ecf570bbcc74b81d67f913e69533a9aa253cf45f3aed2895"} Nov 29 07:24:31 crc kubenswrapper[4731]: I1129 07:24:31.097032 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-b8fbc5445-r66tt" event={"ID":"4716973e-a6ae-4baf-bb88-5436489c5451","Type":"ContainerStarted","Data":"6a150fe7a3025d826430f403b714dd71c126ea975556bc6fbc4a56fe1a902aef"} Nov 29 07:24:31 crc kubenswrapper[4731]: I1129 07:24:31.146274 4731 scope.go:117] "RemoveContainer" containerID="48af8b622508730929cbf71e820536e37aa7c6a84deb5384766a3206720b0a05" Nov 29 07:24:31 crc kubenswrapper[4731]: I1129 07:24:31.155891 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-hln8c"] Nov 29 07:24:31 crc kubenswrapper[4731]: I1129 07:24:31.161776 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-hln8c"] Nov 29 07:24:31 crc kubenswrapper[4731]: I1129 07:24:31.177068 4731 scope.go:117] "RemoveContainer" containerID="6f41043abc4c0caa2e77844ef784242a43dd636983a6c865a600fac9dc4ff248" Nov 29 07:24:31 crc kubenswrapper[4731]: E1129 07:24:31.177648 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f41043abc4c0caa2e77844ef784242a43dd636983a6c865a600fac9dc4ff248\": container with ID starting with 6f41043abc4c0caa2e77844ef784242a43dd636983a6c865a600fac9dc4ff248 not found: ID does not exist" containerID="6f41043abc4c0caa2e77844ef784242a43dd636983a6c865a600fac9dc4ff248" Nov 29 07:24:31 crc kubenswrapper[4731]: I1129 07:24:31.177696 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f41043abc4c0caa2e77844ef784242a43dd636983a6c865a600fac9dc4ff248"} err="failed to get container status \"6f41043abc4c0caa2e77844ef784242a43dd636983a6c865a600fac9dc4ff248\": rpc error: code = NotFound desc = could not find container \"6f41043abc4c0caa2e77844ef784242a43dd636983a6c865a600fac9dc4ff248\": container with ID starting with 6f41043abc4c0caa2e77844ef784242a43dd636983a6c865a600fac9dc4ff248 not found: ID does not exist" Nov 29 07:24:31 crc kubenswrapper[4731]: I1129 
07:24:31.177729 4731 scope.go:117] "RemoveContainer" containerID="48af8b622508730929cbf71e820536e37aa7c6a84deb5384766a3206720b0a05" Nov 29 07:24:31 crc kubenswrapper[4731]: E1129 07:24:31.178089 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48af8b622508730929cbf71e820536e37aa7c6a84deb5384766a3206720b0a05\": container with ID starting with 48af8b622508730929cbf71e820536e37aa7c6a84deb5384766a3206720b0a05 not found: ID does not exist" containerID="48af8b622508730929cbf71e820536e37aa7c6a84deb5384766a3206720b0a05" Nov 29 07:24:31 crc kubenswrapper[4731]: I1129 07:24:31.178119 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48af8b622508730929cbf71e820536e37aa7c6a84deb5384766a3206720b0a05"} err="failed to get container status \"48af8b622508730929cbf71e820536e37aa7c6a84deb5384766a3206720b0a05\": rpc error: code = NotFound desc = could not find container \"48af8b622508730929cbf71e820536e37aa7c6a84deb5384766a3206720b0a05\": container with ID starting with 48af8b622508730929cbf71e820536e37aa7c6a84deb5384766a3206720b0a05 not found: ID does not exist" Nov 29 07:24:31 crc kubenswrapper[4731]: I1129 07:24:31.208766 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/739c0608-5471-42a6-b062-4355cd1894a0-etc-swift\") pod \"swift-storage-0\" (UID: \"739c0608-5471-42a6-b062-4355cd1894a0\") " pod="openstack/swift-storage-0" Nov 29 07:24:31 crc kubenswrapper[4731]: E1129 07:24:31.209390 4731 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 29 07:24:31 crc kubenswrapper[4731]: E1129 07:24:31.209406 4731 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 29 07:24:31 crc kubenswrapper[4731]: E1129 07:24:31.209447 4731 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/739c0608-5471-42a6-b062-4355cd1894a0-etc-swift podName:739c0608-5471-42a6-b062-4355cd1894a0 nodeName:}" failed. No retries permitted until 2025-11-29 07:24:32.209430861 +0000 UTC m=+1111.099791964 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/739c0608-5471-42a6-b062-4355cd1894a0-etc-swift") pod "swift-storage-0" (UID: "739c0608-5471-42a6-b062-4355cd1894a0") : configmap "swift-ring-files" not found Nov 29 07:24:31 crc kubenswrapper[4731]: I1129 07:24:31.818396 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0408017d-3976-46ab-b78d-4116a24c33d9" path="/var/lib/kubelet/pods/0408017d-3976-46ab-b78d-4116a24c33d9/volumes" Nov 29 07:24:32 crc kubenswrapper[4731]: I1129 07:24:32.063065 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Nov 29 07:24:32 crc kubenswrapper[4731]: I1129 07:24:32.113628 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-r66tt" event={"ID":"4716973e-a6ae-4baf-bb88-5436489c5451","Type":"ContainerStarted","Data":"28c697f4aec0a9f9c8e62a1117fd7812fd31deb9dbfa342f47843f946a9be410"} Nov 29 07:24:32 crc kubenswrapper[4731]: I1129 07:24:32.115606 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-r66tt" Nov 29 07:24:32 crc kubenswrapper[4731]: I1129 07:24:32.144872 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b8fbc5445-r66tt" podStartSLOduration=3.14485343 podStartE2EDuration="3.14485343s" podCreationTimestamp="2025-11-29 07:24:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:24:32.143360637 +0000 UTC m=+1111.033721760" watchObservedRunningTime="2025-11-29 
07:24:32.14485343 +0000 UTC m=+1111.035214533" Nov 29 07:24:32 crc kubenswrapper[4731]: I1129 07:24:32.193250 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Nov 29 07:24:32 crc kubenswrapper[4731]: I1129 07:24:32.231982 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/739c0608-5471-42a6-b062-4355cd1894a0-etc-swift\") pod \"swift-storage-0\" (UID: \"739c0608-5471-42a6-b062-4355cd1894a0\") " pod="openstack/swift-storage-0" Nov 29 07:24:32 crc kubenswrapper[4731]: E1129 07:24:32.232499 4731 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 29 07:24:32 crc kubenswrapper[4731]: E1129 07:24:32.232537 4731 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 29 07:24:32 crc kubenswrapper[4731]: E1129 07:24:32.232602 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/739c0608-5471-42a6-b062-4355cd1894a0-etc-swift podName:739c0608-5471-42a6-b062-4355cd1894a0 nodeName:}" failed. No retries permitted until 2025-11-29 07:24:34.232582493 +0000 UTC m=+1113.122943596 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/739c0608-5471-42a6-b062-4355cd1894a0-etc-swift") pod "swift-storage-0" (UID: "739c0608-5471-42a6-b062-4355cd1894a0") : configmap "swift-ring-files" not found Nov 29 07:24:32 crc kubenswrapper[4731]: I1129 07:24:32.776752 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8554648995-t7xmj" Nov 29 07:24:32 crc kubenswrapper[4731]: I1129 07:24:32.822379 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-mw9f7"] Nov 29 07:24:32 crc kubenswrapper[4731]: E1129 07:24:32.822967 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0408017d-3976-46ab-b78d-4116a24c33d9" containerName="dnsmasq-dns" Nov 29 07:24:32 crc kubenswrapper[4731]: I1129 07:24:32.822988 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="0408017d-3976-46ab-b78d-4116a24c33d9" containerName="dnsmasq-dns" Nov 29 07:24:32 crc kubenswrapper[4731]: E1129 07:24:32.823024 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0408017d-3976-46ab-b78d-4116a24c33d9" containerName="init" Nov 29 07:24:32 crc kubenswrapper[4731]: I1129 07:24:32.823030 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="0408017d-3976-46ab-b78d-4116a24c33d9" containerName="init" Nov 29 07:24:32 crc kubenswrapper[4731]: I1129 07:24:32.823225 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="0408017d-3976-46ab-b78d-4116a24c33d9" containerName="dnsmasq-dns" Nov 29 07:24:32 crc kubenswrapper[4731]: I1129 07:24:32.824065 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-mw9f7" Nov 29 07:24:32 crc kubenswrapper[4731]: I1129 07:24:32.832521 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-573c-account-create-update-85bxc"] Nov 29 07:24:32 crc kubenswrapper[4731]: I1129 07:24:32.834268 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-573c-account-create-update-85bxc" Nov 29 07:24:32 crc kubenswrapper[4731]: I1129 07:24:32.838434 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Nov 29 07:24:32 crc kubenswrapper[4731]: I1129 07:24:32.843584 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-mw9f7"] Nov 29 07:24:32 crc kubenswrapper[4731]: I1129 07:24:32.843919 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pbc7\" (UniqueName: \"kubernetes.io/projected/3f4ef41c-1edd-4739-8e3e-d6ec21e2923a-kube-api-access-7pbc7\") pod \"glance-db-create-mw9f7\" (UID: \"3f4ef41c-1edd-4739-8e3e-d6ec21e2923a\") " pod="openstack/glance-db-create-mw9f7" Nov 29 07:24:32 crc kubenswrapper[4731]: I1129 07:24:32.844747 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f4ef41c-1edd-4739-8e3e-d6ec21e2923a-operator-scripts\") pod \"glance-db-create-mw9f7\" (UID: \"3f4ef41c-1edd-4739-8e3e-d6ec21e2923a\") " pod="openstack/glance-db-create-mw9f7" Nov 29 07:24:32 crc kubenswrapper[4731]: I1129 07:24:32.852552 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-573c-account-create-update-85bxc"] Nov 29 07:24:32 crc kubenswrapper[4731]: I1129 07:24:32.947057 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7pbc7\" (UniqueName: 
\"kubernetes.io/projected/3f4ef41c-1edd-4739-8e3e-d6ec21e2923a-kube-api-access-7pbc7\") pod \"glance-db-create-mw9f7\" (UID: \"3f4ef41c-1edd-4739-8e3e-d6ec21e2923a\") " pod="openstack/glance-db-create-mw9f7" Nov 29 07:24:32 crc kubenswrapper[4731]: I1129 07:24:32.947175 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52b7b8c0-4be6-4417-8834-313b5ca3ff69-operator-scripts\") pod \"glance-573c-account-create-update-85bxc\" (UID: \"52b7b8c0-4be6-4417-8834-313b5ca3ff69\") " pod="openstack/glance-573c-account-create-update-85bxc" Nov 29 07:24:32 crc kubenswrapper[4731]: I1129 07:24:32.947215 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7cnb\" (UniqueName: \"kubernetes.io/projected/52b7b8c0-4be6-4417-8834-313b5ca3ff69-kube-api-access-p7cnb\") pod \"glance-573c-account-create-update-85bxc\" (UID: \"52b7b8c0-4be6-4417-8834-313b5ca3ff69\") " pod="openstack/glance-573c-account-create-update-85bxc" Nov 29 07:24:32 crc kubenswrapper[4731]: I1129 07:24:32.947547 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f4ef41c-1edd-4739-8e3e-d6ec21e2923a-operator-scripts\") pod \"glance-db-create-mw9f7\" (UID: \"3f4ef41c-1edd-4739-8e3e-d6ec21e2923a\") " pod="openstack/glance-db-create-mw9f7" Nov 29 07:24:32 crc kubenswrapper[4731]: I1129 07:24:32.948287 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f4ef41c-1edd-4739-8e3e-d6ec21e2923a-operator-scripts\") pod \"glance-db-create-mw9f7\" (UID: \"3f4ef41c-1edd-4739-8e3e-d6ec21e2923a\") " pod="openstack/glance-db-create-mw9f7" Nov 29 07:24:32 crc kubenswrapper[4731]: I1129 07:24:32.972831 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-7pbc7\" (UniqueName: \"kubernetes.io/projected/3f4ef41c-1edd-4739-8e3e-d6ec21e2923a-kube-api-access-7pbc7\") pod \"glance-db-create-mw9f7\" (UID: \"3f4ef41c-1edd-4739-8e3e-d6ec21e2923a\") " pod="openstack/glance-db-create-mw9f7" Nov 29 07:24:33 crc kubenswrapper[4731]: I1129 07:24:33.002966 4731 patch_prober.go:28] interesting pod/machine-config-daemon-rscr8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:24:33 crc kubenswrapper[4731]: I1129 07:24:33.003331 4731 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:24:33 crc kubenswrapper[4731]: I1129 07:24:33.050327 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52b7b8c0-4be6-4417-8834-313b5ca3ff69-operator-scripts\") pod \"glance-573c-account-create-update-85bxc\" (UID: \"52b7b8c0-4be6-4417-8834-313b5ca3ff69\") " pod="openstack/glance-573c-account-create-update-85bxc" Nov 29 07:24:33 crc kubenswrapper[4731]: I1129 07:24:33.050829 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7cnb\" (UniqueName: \"kubernetes.io/projected/52b7b8c0-4be6-4417-8834-313b5ca3ff69-kube-api-access-p7cnb\") pod \"glance-573c-account-create-update-85bxc\" (UID: \"52b7b8c0-4be6-4417-8834-313b5ca3ff69\") " pod="openstack/glance-573c-account-create-update-85bxc" Nov 29 07:24:33 crc kubenswrapper[4731]: I1129 07:24:33.051229 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/52b7b8c0-4be6-4417-8834-313b5ca3ff69-operator-scripts\") pod \"glance-573c-account-create-update-85bxc\" (UID: \"52b7b8c0-4be6-4417-8834-313b5ca3ff69\") " pod="openstack/glance-573c-account-create-update-85bxc" Nov 29 07:24:33 crc kubenswrapper[4731]: I1129 07:24:33.069860 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7cnb\" (UniqueName: \"kubernetes.io/projected/52b7b8c0-4be6-4417-8834-313b5ca3ff69-kube-api-access-p7cnb\") pod \"glance-573c-account-create-update-85bxc\" (UID: \"52b7b8c0-4be6-4417-8834-313b5ca3ff69\") " pod="openstack/glance-573c-account-create-update-85bxc" Nov 29 07:24:33 crc kubenswrapper[4731]: I1129 07:24:33.157527 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-mw9f7" Nov 29 07:24:33 crc kubenswrapper[4731]: I1129 07:24:33.175873 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-573c-account-create-update-85bxc" Nov 29 07:24:33 crc kubenswrapper[4731]: I1129 07:24:33.693677 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-mw9f7"] Nov 29 07:24:33 crc kubenswrapper[4731]: W1129 07:24:33.695312 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod52b7b8c0_4be6_4417_8834_313b5ca3ff69.slice/crio-2403b35830620ae80050561fb42f45590b3cdfb624ddab0cabdd27dced9863e2 WatchSource:0}: Error finding container 2403b35830620ae80050561fb42f45590b3cdfb624ddab0cabdd27dced9863e2: Status 404 returned error can't find the container with id 2403b35830620ae80050561fb42f45590b3cdfb624ddab0cabdd27dced9863e2 Nov 29 07:24:33 crc kubenswrapper[4731]: I1129 07:24:33.699340 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-573c-account-create-update-85bxc"] Nov 29 07:24:34 crc kubenswrapper[4731]: I1129 07:24:34.136180 4731 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-mw9f7" event={"ID":"3f4ef41c-1edd-4739-8e3e-d6ec21e2923a","Type":"ContainerStarted","Data":"3468f997d660ea5df6accff4a33b6e89ff448ee14f89f28c05e26e032fcc4d9f"} Nov 29 07:24:34 crc kubenswrapper[4731]: I1129 07:24:34.136680 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-mw9f7" event={"ID":"3f4ef41c-1edd-4739-8e3e-d6ec21e2923a","Type":"ContainerStarted","Data":"6bd71aabbef97355086919092c7595758f5b93051517c3f8303f0e144da76442"} Nov 29 07:24:34 crc kubenswrapper[4731]: I1129 07:24:34.138518 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-573c-account-create-update-85bxc" event={"ID":"52b7b8c0-4be6-4417-8834-313b5ca3ff69","Type":"ContainerStarted","Data":"deb019eeb6ddd2972ff3e90715778ff0b00343c1833c5eb61c401f12bbe0b1dc"} Nov 29 07:24:34 crc kubenswrapper[4731]: I1129 07:24:34.138553 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-573c-account-create-update-85bxc" event={"ID":"52b7b8c0-4be6-4417-8834-313b5ca3ff69","Type":"ContainerStarted","Data":"2403b35830620ae80050561fb42f45590b3cdfb624ddab0cabdd27dced9863e2"} Nov 29 07:24:34 crc kubenswrapper[4731]: I1129 07:24:34.167708 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-mw9f7" podStartSLOduration=2.167679131 podStartE2EDuration="2.167679131s" podCreationTimestamp="2025-11-29 07:24:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:24:34.163683194 +0000 UTC m=+1113.054044307" watchObservedRunningTime="2025-11-29 07:24:34.167679131 +0000 UTC m=+1113.058040224" Nov 29 07:24:34 crc kubenswrapper[4731]: I1129 07:24:34.192008 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-573c-account-create-update-85bxc" podStartSLOduration=2.19197892 
podStartE2EDuration="2.19197892s" podCreationTimestamp="2025-11-29 07:24:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:24:34.182186094 +0000 UTC m=+1113.072547217" watchObservedRunningTime="2025-11-29 07:24:34.19197892 +0000 UTC m=+1113.082340023" Nov 29 07:24:34 crc kubenswrapper[4731]: I1129 07:24:34.301037 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/739c0608-5471-42a6-b062-4355cd1894a0-etc-swift\") pod \"swift-storage-0\" (UID: \"739c0608-5471-42a6-b062-4355cd1894a0\") " pod="openstack/swift-storage-0" Nov 29 07:24:34 crc kubenswrapper[4731]: E1129 07:24:34.301289 4731 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 29 07:24:34 crc kubenswrapper[4731]: E1129 07:24:34.301329 4731 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 29 07:24:34 crc kubenswrapper[4731]: E1129 07:24:34.301407 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/739c0608-5471-42a6-b062-4355cd1894a0-etc-swift podName:739c0608-5471-42a6-b062-4355cd1894a0 nodeName:}" failed. No retries permitted until 2025-11-29 07:24:38.301382146 +0000 UTC m=+1117.191743249 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/739c0608-5471-42a6-b062-4355cd1894a0-etc-swift") pod "swift-storage-0" (UID: "739c0608-5471-42a6-b062-4355cd1894a0") : configmap "swift-ring-files" not found Nov 29 07:24:34 crc kubenswrapper[4731]: I1129 07:24:34.473244 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-w9lrv"] Nov 29 07:24:34 crc kubenswrapper[4731]: I1129 07:24:34.474482 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-w9lrv" Nov 29 07:24:34 crc kubenswrapper[4731]: I1129 07:24:34.477602 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Nov 29 07:24:34 crc kubenswrapper[4731]: I1129 07:24:34.482341 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Nov 29 07:24:34 crc kubenswrapper[4731]: I1129 07:24:34.482893 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Nov 29 07:24:34 crc kubenswrapper[4731]: I1129 07:24:34.491811 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-w9lrv"] Nov 29 07:24:34 crc kubenswrapper[4731]: I1129 07:24:34.606825 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/38241274-4656-4558-a456-29d74208d47d-swiftconf\") pod \"swift-ring-rebalance-w9lrv\" (UID: \"38241274-4656-4558-a456-29d74208d47d\") " pod="openstack/swift-ring-rebalance-w9lrv" Nov 29 07:24:34 crc kubenswrapper[4731]: I1129 07:24:34.606890 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tkkm\" (UniqueName: \"kubernetes.io/projected/38241274-4656-4558-a456-29d74208d47d-kube-api-access-8tkkm\") pod \"swift-ring-rebalance-w9lrv\" (UID: \"38241274-4656-4558-a456-29d74208d47d\") " pod="openstack/swift-ring-rebalance-w9lrv" Nov 29 07:24:34 crc kubenswrapper[4731]: I1129 07:24:34.606936 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/38241274-4656-4558-a456-29d74208d47d-etc-swift\") pod \"swift-ring-rebalance-w9lrv\" (UID: \"38241274-4656-4558-a456-29d74208d47d\") " pod="openstack/swift-ring-rebalance-w9lrv" Nov 29 07:24:34 crc kubenswrapper[4731]: I1129 07:24:34.607081 
4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/38241274-4656-4558-a456-29d74208d47d-dispersionconf\") pod \"swift-ring-rebalance-w9lrv\" (UID: \"38241274-4656-4558-a456-29d74208d47d\") " pod="openstack/swift-ring-rebalance-w9lrv" Nov 29 07:24:34 crc kubenswrapper[4731]: I1129 07:24:34.607265 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/38241274-4656-4558-a456-29d74208d47d-scripts\") pod \"swift-ring-rebalance-w9lrv\" (UID: \"38241274-4656-4558-a456-29d74208d47d\") " pod="openstack/swift-ring-rebalance-w9lrv" Nov 29 07:24:34 crc kubenswrapper[4731]: I1129 07:24:34.607384 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/38241274-4656-4558-a456-29d74208d47d-ring-data-devices\") pod \"swift-ring-rebalance-w9lrv\" (UID: \"38241274-4656-4558-a456-29d74208d47d\") " pod="openstack/swift-ring-rebalance-w9lrv" Nov 29 07:24:34 crc kubenswrapper[4731]: I1129 07:24:34.607435 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38241274-4656-4558-a456-29d74208d47d-combined-ca-bundle\") pod \"swift-ring-rebalance-w9lrv\" (UID: \"38241274-4656-4558-a456-29d74208d47d\") " pod="openstack/swift-ring-rebalance-w9lrv" Nov 29 07:24:34 crc kubenswrapper[4731]: I1129 07:24:34.709376 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/38241274-4656-4558-a456-29d74208d47d-swiftconf\") pod \"swift-ring-rebalance-w9lrv\" (UID: \"38241274-4656-4558-a456-29d74208d47d\") " pod="openstack/swift-ring-rebalance-w9lrv" Nov 29 07:24:34 crc kubenswrapper[4731]: I1129 07:24:34.709462 4731 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tkkm\" (UniqueName: \"kubernetes.io/projected/38241274-4656-4558-a456-29d74208d47d-kube-api-access-8tkkm\") pod \"swift-ring-rebalance-w9lrv\" (UID: \"38241274-4656-4558-a456-29d74208d47d\") " pod="openstack/swift-ring-rebalance-w9lrv" Nov 29 07:24:34 crc kubenswrapper[4731]: I1129 07:24:34.709555 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/38241274-4656-4558-a456-29d74208d47d-etc-swift\") pod \"swift-ring-rebalance-w9lrv\" (UID: \"38241274-4656-4558-a456-29d74208d47d\") " pod="openstack/swift-ring-rebalance-w9lrv" Nov 29 07:24:34 crc kubenswrapper[4731]: I1129 07:24:34.709669 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/38241274-4656-4558-a456-29d74208d47d-dispersionconf\") pod \"swift-ring-rebalance-w9lrv\" (UID: \"38241274-4656-4558-a456-29d74208d47d\") " pod="openstack/swift-ring-rebalance-w9lrv" Nov 29 07:24:34 crc kubenswrapper[4731]: I1129 07:24:34.709729 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/38241274-4656-4558-a456-29d74208d47d-scripts\") pod \"swift-ring-rebalance-w9lrv\" (UID: \"38241274-4656-4558-a456-29d74208d47d\") " pod="openstack/swift-ring-rebalance-w9lrv" Nov 29 07:24:34 crc kubenswrapper[4731]: I1129 07:24:34.710315 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/38241274-4656-4558-a456-29d74208d47d-ring-data-devices\") pod \"swift-ring-rebalance-w9lrv\" (UID: \"38241274-4656-4558-a456-29d74208d47d\") " pod="openstack/swift-ring-rebalance-w9lrv" Nov 29 07:24:34 crc kubenswrapper[4731]: I1129 07:24:34.710406 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38241274-4656-4558-a456-29d74208d47d-combined-ca-bundle\") pod \"swift-ring-rebalance-w9lrv\" (UID: \"38241274-4656-4558-a456-29d74208d47d\") " pod="openstack/swift-ring-rebalance-w9lrv" Nov 29 07:24:34 crc kubenswrapper[4731]: I1129 07:24:34.711271 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/38241274-4656-4558-a456-29d74208d47d-scripts\") pod \"swift-ring-rebalance-w9lrv\" (UID: \"38241274-4656-4558-a456-29d74208d47d\") " pod="openstack/swift-ring-rebalance-w9lrv" Nov 29 07:24:34 crc kubenswrapper[4731]: I1129 07:24:34.711274 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/38241274-4656-4558-a456-29d74208d47d-etc-swift\") pod \"swift-ring-rebalance-w9lrv\" (UID: \"38241274-4656-4558-a456-29d74208d47d\") " pod="openstack/swift-ring-rebalance-w9lrv" Nov 29 07:24:34 crc kubenswrapper[4731]: I1129 07:24:34.712192 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/38241274-4656-4558-a456-29d74208d47d-ring-data-devices\") pod \"swift-ring-rebalance-w9lrv\" (UID: \"38241274-4656-4558-a456-29d74208d47d\") " pod="openstack/swift-ring-rebalance-w9lrv" Nov 29 07:24:34 crc kubenswrapper[4731]: I1129 07:24:34.718326 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38241274-4656-4558-a456-29d74208d47d-combined-ca-bundle\") pod \"swift-ring-rebalance-w9lrv\" (UID: \"38241274-4656-4558-a456-29d74208d47d\") " pod="openstack/swift-ring-rebalance-w9lrv" Nov 29 07:24:34 crc kubenswrapper[4731]: I1129 07:24:34.719471 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/38241274-4656-4558-a456-29d74208d47d-dispersionconf\") pod 
\"swift-ring-rebalance-w9lrv\" (UID: \"38241274-4656-4558-a456-29d74208d47d\") " pod="openstack/swift-ring-rebalance-w9lrv" Nov 29 07:24:34 crc kubenswrapper[4731]: I1129 07:24:34.723555 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/38241274-4656-4558-a456-29d74208d47d-swiftconf\") pod \"swift-ring-rebalance-w9lrv\" (UID: \"38241274-4656-4558-a456-29d74208d47d\") " pod="openstack/swift-ring-rebalance-w9lrv" Nov 29 07:24:34 crc kubenswrapper[4731]: I1129 07:24:34.729008 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tkkm\" (UniqueName: \"kubernetes.io/projected/38241274-4656-4558-a456-29d74208d47d-kube-api-access-8tkkm\") pod \"swift-ring-rebalance-w9lrv\" (UID: \"38241274-4656-4558-a456-29d74208d47d\") " pod="openstack/swift-ring-rebalance-w9lrv" Nov 29 07:24:34 crc kubenswrapper[4731]: I1129 07:24:34.796015 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-w9lrv" Nov 29 07:24:35 crc kubenswrapper[4731]: I1129 07:24:35.151056 4731 generic.go:334] "Generic (PLEG): container finished" podID="52b7b8c0-4be6-4417-8834-313b5ca3ff69" containerID="deb019eeb6ddd2972ff3e90715778ff0b00343c1833c5eb61c401f12bbe0b1dc" exitCode=0 Nov 29 07:24:35 crc kubenswrapper[4731]: I1129 07:24:35.151925 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-573c-account-create-update-85bxc" event={"ID":"52b7b8c0-4be6-4417-8834-313b5ca3ff69","Type":"ContainerDied","Data":"deb019eeb6ddd2972ff3e90715778ff0b00343c1833c5eb61c401f12bbe0b1dc"} Nov 29 07:24:35 crc kubenswrapper[4731]: I1129 07:24:35.154739 4731 generic.go:334] "Generic (PLEG): container finished" podID="3f4ef41c-1edd-4739-8e3e-d6ec21e2923a" containerID="3468f997d660ea5df6accff4a33b6e89ff448ee14f89f28c05e26e032fcc4d9f" exitCode=0 Nov 29 07:24:35 crc kubenswrapper[4731]: I1129 07:24:35.154799 4731 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/glance-db-create-mw9f7" event={"ID":"3f4ef41c-1edd-4739-8e3e-d6ec21e2923a","Type":"ContainerDied","Data":"3468f997d660ea5df6accff4a33b6e89ff448ee14f89f28c05e26e032fcc4d9f"} Nov 29 07:24:35 crc kubenswrapper[4731]: I1129 07:24:35.293304 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-w9lrv"] Nov 29 07:24:35 crc kubenswrapper[4731]: W1129 07:24:35.304857 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod38241274_4656_4558_a456_29d74208d47d.slice/crio-10ac3b520e6f66f5823d0090c86bf55579f785ecdc596f2f345a6bbd82007c50 WatchSource:0}: Error finding container 10ac3b520e6f66f5823d0090c86bf55579f785ecdc596f2f345a6bbd82007c50: Status 404 returned error can't find the container with id 10ac3b520e6f66f5823d0090c86bf55579f785ecdc596f2f345a6bbd82007c50 Nov 29 07:24:36 crc kubenswrapper[4731]: I1129 07:24:36.174232 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-w9lrv" event={"ID":"38241274-4656-4558-a456-29d74208d47d","Type":"ContainerStarted","Data":"10ac3b520e6f66f5823d0090c86bf55579f785ecdc596f2f345a6bbd82007c50"} Nov 29 07:24:36 crc kubenswrapper[4731]: I1129 07:24:36.647179 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-573c-account-create-update-85bxc" Nov 29 07:24:36 crc kubenswrapper[4731]: I1129 07:24:36.673043 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-mw9f7" Nov 29 07:24:36 crc kubenswrapper[4731]: I1129 07:24:36.752771 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p7cnb\" (UniqueName: \"kubernetes.io/projected/52b7b8c0-4be6-4417-8834-313b5ca3ff69-kube-api-access-p7cnb\") pod \"52b7b8c0-4be6-4417-8834-313b5ca3ff69\" (UID: \"52b7b8c0-4be6-4417-8834-313b5ca3ff69\") " Nov 29 07:24:36 crc kubenswrapper[4731]: I1129 07:24:36.752852 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7pbc7\" (UniqueName: \"kubernetes.io/projected/3f4ef41c-1edd-4739-8e3e-d6ec21e2923a-kube-api-access-7pbc7\") pod \"3f4ef41c-1edd-4739-8e3e-d6ec21e2923a\" (UID: \"3f4ef41c-1edd-4739-8e3e-d6ec21e2923a\") " Nov 29 07:24:36 crc kubenswrapper[4731]: I1129 07:24:36.752938 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f4ef41c-1edd-4739-8e3e-d6ec21e2923a-operator-scripts\") pod \"3f4ef41c-1edd-4739-8e3e-d6ec21e2923a\" (UID: \"3f4ef41c-1edd-4739-8e3e-d6ec21e2923a\") " Nov 29 07:24:36 crc kubenswrapper[4731]: I1129 07:24:36.753121 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52b7b8c0-4be6-4417-8834-313b5ca3ff69-operator-scripts\") pod \"52b7b8c0-4be6-4417-8834-313b5ca3ff69\" (UID: \"52b7b8c0-4be6-4417-8834-313b5ca3ff69\") " Nov 29 07:24:36 crc kubenswrapper[4731]: I1129 07:24:36.754432 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f4ef41c-1edd-4739-8e3e-d6ec21e2923a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3f4ef41c-1edd-4739-8e3e-d6ec21e2923a" (UID: "3f4ef41c-1edd-4739-8e3e-d6ec21e2923a"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:24:36 crc kubenswrapper[4731]: I1129 07:24:36.754980 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52b7b8c0-4be6-4417-8834-313b5ca3ff69-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "52b7b8c0-4be6-4417-8834-313b5ca3ff69" (UID: "52b7b8c0-4be6-4417-8834-313b5ca3ff69"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:24:36 crc kubenswrapper[4731]: I1129 07:24:36.762142 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52b7b8c0-4be6-4417-8834-313b5ca3ff69-kube-api-access-p7cnb" (OuterVolumeSpecName: "kube-api-access-p7cnb") pod "52b7b8c0-4be6-4417-8834-313b5ca3ff69" (UID: "52b7b8c0-4be6-4417-8834-313b5ca3ff69"). InnerVolumeSpecName "kube-api-access-p7cnb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:24:36 crc kubenswrapper[4731]: I1129 07:24:36.778546 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f4ef41c-1edd-4739-8e3e-d6ec21e2923a-kube-api-access-7pbc7" (OuterVolumeSpecName: "kube-api-access-7pbc7") pod "3f4ef41c-1edd-4739-8e3e-d6ec21e2923a" (UID: "3f4ef41c-1edd-4739-8e3e-d6ec21e2923a"). InnerVolumeSpecName "kube-api-access-7pbc7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:24:36 crc kubenswrapper[4731]: I1129 07:24:36.855865 4731 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52b7b8c0-4be6-4417-8834-313b5ca3ff69-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:24:36 crc kubenswrapper[4731]: I1129 07:24:36.855910 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p7cnb\" (UniqueName: \"kubernetes.io/projected/52b7b8c0-4be6-4417-8834-313b5ca3ff69-kube-api-access-p7cnb\") on node \"crc\" DevicePath \"\"" Nov 29 07:24:36 crc kubenswrapper[4731]: I1129 07:24:36.855925 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7pbc7\" (UniqueName: \"kubernetes.io/projected/3f4ef41c-1edd-4739-8e3e-d6ec21e2923a-kube-api-access-7pbc7\") on node \"crc\" DevicePath \"\"" Nov 29 07:24:36 crc kubenswrapper[4731]: I1129 07:24:36.855939 4731 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f4ef41c-1edd-4739-8e3e-d6ec21e2923a-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:24:37 crc kubenswrapper[4731]: I1129 07:24:37.039746 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-7xszw"] Nov 29 07:24:37 crc kubenswrapper[4731]: E1129 07:24:37.040684 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52b7b8c0-4be6-4417-8834-313b5ca3ff69" containerName="mariadb-account-create-update" Nov 29 07:24:37 crc kubenswrapper[4731]: I1129 07:24:37.040714 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="52b7b8c0-4be6-4417-8834-313b5ca3ff69" containerName="mariadb-account-create-update" Nov 29 07:24:37 crc kubenswrapper[4731]: E1129 07:24:37.040745 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f4ef41c-1edd-4739-8e3e-d6ec21e2923a" containerName="mariadb-database-create" Nov 29 07:24:37 crc kubenswrapper[4731]: 
I1129 07:24:37.040756 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f4ef41c-1edd-4739-8e3e-d6ec21e2923a" containerName="mariadb-database-create" Nov 29 07:24:37 crc kubenswrapper[4731]: I1129 07:24:37.041028 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f4ef41c-1edd-4739-8e3e-d6ec21e2923a" containerName="mariadb-database-create" Nov 29 07:24:37 crc kubenswrapper[4731]: I1129 07:24:37.041054 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="52b7b8c0-4be6-4417-8834-313b5ca3ff69" containerName="mariadb-account-create-update" Nov 29 07:24:37 crc kubenswrapper[4731]: I1129 07:24:37.042201 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-7xszw" Nov 29 07:24:37 crc kubenswrapper[4731]: I1129 07:24:37.051086 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-7xszw"] Nov 29 07:24:37 crc kubenswrapper[4731]: I1129 07:24:37.163335 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a7897925-7f61-47ea-b746-185a41fb854d-operator-scripts\") pod \"keystone-db-create-7xszw\" (UID: \"a7897925-7f61-47ea-b746-185a41fb854d\") " pod="openstack/keystone-db-create-7xszw" Nov 29 07:24:37 crc kubenswrapper[4731]: I1129 07:24:37.163408 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzn82\" (UniqueName: \"kubernetes.io/projected/a7897925-7f61-47ea-b746-185a41fb854d-kube-api-access-bzn82\") pod \"keystone-db-create-7xszw\" (UID: \"a7897925-7f61-47ea-b746-185a41fb854d\") " pod="openstack/keystone-db-create-7xszw" Nov 29 07:24:37 crc kubenswrapper[4731]: I1129 07:24:37.172465 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-8f6e-account-create-update-nl2hh"] Nov 29 07:24:37 crc kubenswrapper[4731]: I1129 07:24:37.173916 4731 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-8f6e-account-create-update-nl2hh" Nov 29 07:24:37 crc kubenswrapper[4731]: I1129 07:24:37.176702 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Nov 29 07:24:37 crc kubenswrapper[4731]: I1129 07:24:37.214371 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-mw9f7" Nov 29 07:24:37 crc kubenswrapper[4731]: I1129 07:24:37.224819 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-8f6e-account-create-update-nl2hh"] Nov 29 07:24:37 crc kubenswrapper[4731]: I1129 07:24:37.224911 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-mw9f7" event={"ID":"3f4ef41c-1edd-4739-8e3e-d6ec21e2923a","Type":"ContainerDied","Data":"6bd71aabbef97355086919092c7595758f5b93051517c3f8303f0e144da76442"} Nov 29 07:24:37 crc kubenswrapper[4731]: I1129 07:24:37.224955 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6bd71aabbef97355086919092c7595758f5b93051517c3f8303f0e144da76442" Nov 29 07:24:37 crc kubenswrapper[4731]: I1129 07:24:37.228208 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-573c-account-create-update-85bxc" event={"ID":"52b7b8c0-4be6-4417-8834-313b5ca3ff69","Type":"ContainerDied","Data":"2403b35830620ae80050561fb42f45590b3cdfb624ddab0cabdd27dced9863e2"} Nov 29 07:24:37 crc kubenswrapper[4731]: I1129 07:24:37.228260 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2403b35830620ae80050561fb42f45590b3cdfb624ddab0cabdd27dced9863e2" Nov 29 07:24:37 crc kubenswrapper[4731]: I1129 07:24:37.228381 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-573c-account-create-update-85bxc" Nov 29 07:24:37 crc kubenswrapper[4731]: I1129 07:24:37.265534 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qlt2\" (UniqueName: \"kubernetes.io/projected/9acf0722-ab53-422c-8835-64a8615ad4e6-kube-api-access-9qlt2\") pod \"keystone-8f6e-account-create-update-nl2hh\" (UID: \"9acf0722-ab53-422c-8835-64a8615ad4e6\") " pod="openstack/keystone-8f6e-account-create-update-nl2hh" Nov 29 07:24:37 crc kubenswrapper[4731]: I1129 07:24:37.265779 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a7897925-7f61-47ea-b746-185a41fb854d-operator-scripts\") pod \"keystone-db-create-7xszw\" (UID: \"a7897925-7f61-47ea-b746-185a41fb854d\") " pod="openstack/keystone-db-create-7xszw" Nov 29 07:24:37 crc kubenswrapper[4731]: I1129 07:24:37.265816 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzn82\" (UniqueName: \"kubernetes.io/projected/a7897925-7f61-47ea-b746-185a41fb854d-kube-api-access-bzn82\") pod \"keystone-db-create-7xszw\" (UID: \"a7897925-7f61-47ea-b746-185a41fb854d\") " pod="openstack/keystone-db-create-7xszw" Nov 29 07:24:37 crc kubenswrapper[4731]: I1129 07:24:37.265907 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9acf0722-ab53-422c-8835-64a8615ad4e6-operator-scripts\") pod \"keystone-8f6e-account-create-update-nl2hh\" (UID: \"9acf0722-ab53-422c-8835-64a8615ad4e6\") " pod="openstack/keystone-8f6e-account-create-update-nl2hh" Nov 29 07:24:37 crc kubenswrapper[4731]: I1129 07:24:37.266995 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a7897925-7f61-47ea-b746-185a41fb854d-operator-scripts\") 
pod \"keystone-db-create-7xszw\" (UID: \"a7897925-7f61-47ea-b746-185a41fb854d\") " pod="openstack/keystone-db-create-7xszw" Nov 29 07:24:37 crc kubenswrapper[4731]: I1129 07:24:37.285128 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzn82\" (UniqueName: \"kubernetes.io/projected/a7897925-7f61-47ea-b746-185a41fb854d-kube-api-access-bzn82\") pod \"keystone-db-create-7xszw\" (UID: \"a7897925-7f61-47ea-b746-185a41fb854d\") " pod="openstack/keystone-db-create-7xszw" Nov 29 07:24:37 crc kubenswrapper[4731]: I1129 07:24:37.333304 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-x2fsv"] Nov 29 07:24:37 crc kubenswrapper[4731]: I1129 07:24:37.335144 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-x2fsv" Nov 29 07:24:37 crc kubenswrapper[4731]: I1129 07:24:37.351704 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-x2fsv"] Nov 29 07:24:37 crc kubenswrapper[4731]: I1129 07:24:37.375145 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-7xszw" Nov 29 07:24:37 crc kubenswrapper[4731]: I1129 07:24:37.382719 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qlt2\" (UniqueName: \"kubernetes.io/projected/9acf0722-ab53-422c-8835-64a8615ad4e6-kube-api-access-9qlt2\") pod \"keystone-8f6e-account-create-update-nl2hh\" (UID: \"9acf0722-ab53-422c-8835-64a8615ad4e6\") " pod="openstack/keystone-8f6e-account-create-update-nl2hh" Nov 29 07:24:37 crc kubenswrapper[4731]: I1129 07:24:37.382936 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnsdr\" (UniqueName: \"kubernetes.io/projected/c6701aa7-6736-40e6-aaf2-195fcf43c455-kube-api-access-vnsdr\") pod \"placement-db-create-x2fsv\" (UID: \"c6701aa7-6736-40e6-aaf2-195fcf43c455\") " pod="openstack/placement-db-create-x2fsv" Nov 29 07:24:37 crc kubenswrapper[4731]: I1129 07:24:37.382989 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6701aa7-6736-40e6-aaf2-195fcf43c455-operator-scripts\") pod \"placement-db-create-x2fsv\" (UID: \"c6701aa7-6736-40e6-aaf2-195fcf43c455\") " pod="openstack/placement-db-create-x2fsv" Nov 29 07:24:37 crc kubenswrapper[4731]: I1129 07:24:37.383086 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9acf0722-ab53-422c-8835-64a8615ad4e6-operator-scripts\") pod \"keystone-8f6e-account-create-update-nl2hh\" (UID: \"9acf0722-ab53-422c-8835-64a8615ad4e6\") " pod="openstack/keystone-8f6e-account-create-update-nl2hh" Nov 29 07:24:37 crc kubenswrapper[4731]: I1129 07:24:37.386031 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9acf0722-ab53-422c-8835-64a8615ad4e6-operator-scripts\") pod 
\"keystone-8f6e-account-create-update-nl2hh\" (UID: \"9acf0722-ab53-422c-8835-64a8615ad4e6\") " pod="openstack/keystone-8f6e-account-create-update-nl2hh" Nov 29 07:24:37 crc kubenswrapper[4731]: I1129 07:24:37.401680 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qlt2\" (UniqueName: \"kubernetes.io/projected/9acf0722-ab53-422c-8835-64a8615ad4e6-kube-api-access-9qlt2\") pod \"keystone-8f6e-account-create-update-nl2hh\" (UID: \"9acf0722-ab53-422c-8835-64a8615ad4e6\") " pod="openstack/keystone-8f6e-account-create-update-nl2hh" Nov 29 07:24:37 crc kubenswrapper[4731]: I1129 07:24:37.469415 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-c061-account-create-update-6d6qj"] Nov 29 07:24:37 crc kubenswrapper[4731]: I1129 07:24:37.470684 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-c061-account-create-update-6d6qj" Nov 29 07:24:37 crc kubenswrapper[4731]: I1129 07:24:37.473431 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Nov 29 07:24:37 crc kubenswrapper[4731]: I1129 07:24:37.476454 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-c061-account-create-update-6d6qj"] Nov 29 07:24:37 crc kubenswrapper[4731]: I1129 07:24:37.484496 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnsdr\" (UniqueName: \"kubernetes.io/projected/c6701aa7-6736-40e6-aaf2-195fcf43c455-kube-api-access-vnsdr\") pod \"placement-db-create-x2fsv\" (UID: \"c6701aa7-6736-40e6-aaf2-195fcf43c455\") " pod="openstack/placement-db-create-x2fsv" Nov 29 07:24:37 crc kubenswrapper[4731]: I1129 07:24:37.484546 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6701aa7-6736-40e6-aaf2-195fcf43c455-operator-scripts\") pod \"placement-db-create-x2fsv\" (UID: 
\"c6701aa7-6736-40e6-aaf2-195fcf43c455\") " pod="openstack/placement-db-create-x2fsv" Nov 29 07:24:37 crc kubenswrapper[4731]: I1129 07:24:37.485395 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6701aa7-6736-40e6-aaf2-195fcf43c455-operator-scripts\") pod \"placement-db-create-x2fsv\" (UID: \"c6701aa7-6736-40e6-aaf2-195fcf43c455\") " pod="openstack/placement-db-create-x2fsv" Nov 29 07:24:37 crc kubenswrapper[4731]: I1129 07:24:37.507035 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnsdr\" (UniqueName: \"kubernetes.io/projected/c6701aa7-6736-40e6-aaf2-195fcf43c455-kube-api-access-vnsdr\") pod \"placement-db-create-x2fsv\" (UID: \"c6701aa7-6736-40e6-aaf2-195fcf43c455\") " pod="openstack/placement-db-create-x2fsv" Nov 29 07:24:37 crc kubenswrapper[4731]: I1129 07:24:37.507422 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-8f6e-account-create-update-nl2hh" Nov 29 07:24:37 crc kubenswrapper[4731]: I1129 07:24:37.585984 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0453cbcb-48ae-47ee-9a97-e9b4ab7da604-operator-scripts\") pod \"placement-c061-account-create-update-6d6qj\" (UID: \"0453cbcb-48ae-47ee-9a97-e9b4ab7da604\") " pod="openstack/placement-c061-account-create-update-6d6qj" Nov 29 07:24:37 crc kubenswrapper[4731]: I1129 07:24:37.586055 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ph6ff\" (UniqueName: \"kubernetes.io/projected/0453cbcb-48ae-47ee-9a97-e9b4ab7da604-kube-api-access-ph6ff\") pod \"placement-c061-account-create-update-6d6qj\" (UID: \"0453cbcb-48ae-47ee-9a97-e9b4ab7da604\") " pod="openstack/placement-c061-account-create-update-6d6qj" Nov 29 07:24:37 crc kubenswrapper[4731]: I1129 
07:24:37.678933 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-x2fsv" Nov 29 07:24:37 crc kubenswrapper[4731]: I1129 07:24:37.688358 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0453cbcb-48ae-47ee-9a97-e9b4ab7da604-operator-scripts\") pod \"placement-c061-account-create-update-6d6qj\" (UID: \"0453cbcb-48ae-47ee-9a97-e9b4ab7da604\") " pod="openstack/placement-c061-account-create-update-6d6qj" Nov 29 07:24:37 crc kubenswrapper[4731]: I1129 07:24:37.688438 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ph6ff\" (UniqueName: \"kubernetes.io/projected/0453cbcb-48ae-47ee-9a97-e9b4ab7da604-kube-api-access-ph6ff\") pod \"placement-c061-account-create-update-6d6qj\" (UID: \"0453cbcb-48ae-47ee-9a97-e9b4ab7da604\") " pod="openstack/placement-c061-account-create-update-6d6qj" Nov 29 07:24:37 crc kubenswrapper[4731]: I1129 07:24:37.689712 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0453cbcb-48ae-47ee-9a97-e9b4ab7da604-operator-scripts\") pod \"placement-c061-account-create-update-6d6qj\" (UID: \"0453cbcb-48ae-47ee-9a97-e9b4ab7da604\") " pod="openstack/placement-c061-account-create-update-6d6qj" Nov 29 07:24:37 crc kubenswrapper[4731]: I1129 07:24:37.708765 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ph6ff\" (UniqueName: \"kubernetes.io/projected/0453cbcb-48ae-47ee-9a97-e9b4ab7da604-kube-api-access-ph6ff\") pod \"placement-c061-account-create-update-6d6qj\" (UID: \"0453cbcb-48ae-47ee-9a97-e9b4ab7da604\") " pod="openstack/placement-c061-account-create-update-6d6qj" Nov 29 07:24:37 crc kubenswrapper[4731]: I1129 07:24:37.792787 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-c061-account-create-update-6d6qj" Nov 29 07:24:37 crc kubenswrapper[4731]: I1129 07:24:37.966862 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-8w7f8"] Nov 29 07:24:37 crc kubenswrapper[4731]: I1129 07:24:37.969824 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-8w7f8" Nov 29 07:24:37 crc kubenswrapper[4731]: I1129 07:24:37.974497 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-n6x8c" Nov 29 07:24:37 crc kubenswrapper[4731]: I1129 07:24:37.974977 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Nov 29 07:24:37 crc kubenswrapper[4731]: I1129 07:24:37.981595 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-8w7f8"] Nov 29 07:24:38 crc kubenswrapper[4731]: I1129 07:24:38.094631 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6986d025-7080-457e-b2ce-88d8ae965c70-config-data\") pod \"glance-db-sync-8w7f8\" (UID: \"6986d025-7080-457e-b2ce-88d8ae965c70\") " pod="openstack/glance-db-sync-8w7f8" Nov 29 07:24:38 crc kubenswrapper[4731]: I1129 07:24:38.094675 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkb5x\" (UniqueName: \"kubernetes.io/projected/6986d025-7080-457e-b2ce-88d8ae965c70-kube-api-access-kkb5x\") pod \"glance-db-sync-8w7f8\" (UID: \"6986d025-7080-457e-b2ce-88d8ae965c70\") " pod="openstack/glance-db-sync-8w7f8" Nov 29 07:24:38 crc kubenswrapper[4731]: I1129 07:24:38.094719 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6986d025-7080-457e-b2ce-88d8ae965c70-combined-ca-bundle\") pod 
\"glance-db-sync-8w7f8\" (UID: \"6986d025-7080-457e-b2ce-88d8ae965c70\") " pod="openstack/glance-db-sync-8w7f8" Nov 29 07:24:38 crc kubenswrapper[4731]: I1129 07:24:38.095023 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6986d025-7080-457e-b2ce-88d8ae965c70-db-sync-config-data\") pod \"glance-db-sync-8w7f8\" (UID: \"6986d025-7080-457e-b2ce-88d8ae965c70\") " pod="openstack/glance-db-sync-8w7f8" Nov 29 07:24:38 crc kubenswrapper[4731]: I1129 07:24:38.197111 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6986d025-7080-457e-b2ce-88d8ae965c70-config-data\") pod \"glance-db-sync-8w7f8\" (UID: \"6986d025-7080-457e-b2ce-88d8ae965c70\") " pod="openstack/glance-db-sync-8w7f8" Nov 29 07:24:38 crc kubenswrapper[4731]: I1129 07:24:38.197428 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkb5x\" (UniqueName: \"kubernetes.io/projected/6986d025-7080-457e-b2ce-88d8ae965c70-kube-api-access-kkb5x\") pod \"glance-db-sync-8w7f8\" (UID: \"6986d025-7080-457e-b2ce-88d8ae965c70\") " pod="openstack/glance-db-sync-8w7f8" Nov 29 07:24:38 crc kubenswrapper[4731]: I1129 07:24:38.197680 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6986d025-7080-457e-b2ce-88d8ae965c70-combined-ca-bundle\") pod \"glance-db-sync-8w7f8\" (UID: \"6986d025-7080-457e-b2ce-88d8ae965c70\") " pod="openstack/glance-db-sync-8w7f8" Nov 29 07:24:38 crc kubenswrapper[4731]: I1129 07:24:38.197970 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6986d025-7080-457e-b2ce-88d8ae965c70-db-sync-config-data\") pod \"glance-db-sync-8w7f8\" (UID: \"6986d025-7080-457e-b2ce-88d8ae965c70\") " 
pod="openstack/glance-db-sync-8w7f8" Nov 29 07:24:38 crc kubenswrapper[4731]: I1129 07:24:38.202175 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6986d025-7080-457e-b2ce-88d8ae965c70-db-sync-config-data\") pod \"glance-db-sync-8w7f8\" (UID: \"6986d025-7080-457e-b2ce-88d8ae965c70\") " pod="openstack/glance-db-sync-8w7f8" Nov 29 07:24:38 crc kubenswrapper[4731]: I1129 07:24:38.202724 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6986d025-7080-457e-b2ce-88d8ae965c70-config-data\") pod \"glance-db-sync-8w7f8\" (UID: \"6986d025-7080-457e-b2ce-88d8ae965c70\") " pod="openstack/glance-db-sync-8w7f8" Nov 29 07:24:38 crc kubenswrapper[4731]: I1129 07:24:38.218980 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6986d025-7080-457e-b2ce-88d8ae965c70-combined-ca-bundle\") pod \"glance-db-sync-8w7f8\" (UID: \"6986d025-7080-457e-b2ce-88d8ae965c70\") " pod="openstack/glance-db-sync-8w7f8" Nov 29 07:24:38 crc kubenswrapper[4731]: I1129 07:24:38.219049 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkb5x\" (UniqueName: \"kubernetes.io/projected/6986d025-7080-457e-b2ce-88d8ae965c70-kube-api-access-kkb5x\") pod \"glance-db-sync-8w7f8\" (UID: \"6986d025-7080-457e-b2ce-88d8ae965c70\") " pod="openstack/glance-db-sync-8w7f8" Nov 29 07:24:38 crc kubenswrapper[4731]: I1129 07:24:38.313357 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-8w7f8" Nov 29 07:24:38 crc kubenswrapper[4731]: I1129 07:24:38.401211 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/739c0608-5471-42a6-b062-4355cd1894a0-etc-swift\") pod \"swift-storage-0\" (UID: \"739c0608-5471-42a6-b062-4355cd1894a0\") " pod="openstack/swift-storage-0" Nov 29 07:24:38 crc kubenswrapper[4731]: E1129 07:24:38.401504 4731 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 29 07:24:38 crc kubenswrapper[4731]: E1129 07:24:38.401524 4731 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 29 07:24:38 crc kubenswrapper[4731]: E1129 07:24:38.401590 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/739c0608-5471-42a6-b062-4355cd1894a0-etc-swift podName:739c0608-5471-42a6-b062-4355cd1894a0 nodeName:}" failed. No retries permitted until 2025-11-29 07:24:46.401555987 +0000 UTC m=+1125.291917090 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/739c0608-5471-42a6-b062-4355cd1894a0-etc-swift") pod "swift-storage-0" (UID: "739c0608-5471-42a6-b062-4355cd1894a0") : configmap "swift-ring-files" not found Nov 29 07:24:39 crc kubenswrapper[4731]: I1129 07:24:39.497881 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-7xszw"] Nov 29 07:24:39 crc kubenswrapper[4731]: W1129 07:24:39.502390 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda7897925_7f61_47ea_b746_185a41fb854d.slice/crio-dad5bb84cf9e664cada1b8474aa6b1f0e2e85a5fbc1312f380f751fb0feb725b WatchSource:0}: Error finding container dad5bb84cf9e664cada1b8474aa6b1f0e2e85a5fbc1312f380f751fb0feb725b: Status 404 returned error can't find the container with id dad5bb84cf9e664cada1b8474aa6b1f0e2e85a5fbc1312f380f751fb0feb725b Nov 29 07:24:39 crc kubenswrapper[4731]: I1129 07:24:39.595103 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-x2fsv"] Nov 29 07:24:39 crc kubenswrapper[4731]: W1129 07:24:39.604377 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0453cbcb_48ae_47ee_9a97_e9b4ab7da604.slice/crio-ab8cc5a2b74fb45f918c9746fee04c9a4b4fa8a120bc794d1bdffb9997666810 WatchSource:0}: Error finding container ab8cc5a2b74fb45f918c9746fee04c9a4b4fa8a120bc794d1bdffb9997666810: Status 404 returned error can't find the container with id ab8cc5a2b74fb45f918c9746fee04c9a4b4fa8a120bc794d1bdffb9997666810 Nov 29 07:24:39 crc kubenswrapper[4731]: I1129 07:24:39.613705 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-c061-account-create-update-6d6qj"] Nov 29 07:24:39 crc kubenswrapper[4731]: I1129 07:24:39.623782 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/keystone-8f6e-account-create-update-nl2hh"] Nov 29 07:24:39 crc kubenswrapper[4731]: I1129 07:24:39.796905 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b8fbc5445-r66tt" Nov 29 07:24:39 crc kubenswrapper[4731]: I1129 07:24:39.871454 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-t7xmj"] Nov 29 07:24:39 crc kubenswrapper[4731]: I1129 07:24:39.871712 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8554648995-t7xmj" podUID="6be24ad5-a68a-41a3-8622-6b9bc69d4943" containerName="dnsmasq-dns" containerID="cri-o://a1f7519aa5fffd1e311b44a94c840d3bd3832965c5d81114fd134864f149131b" gracePeriod=10 Nov 29 07:24:39 crc kubenswrapper[4731]: I1129 07:24:39.928206 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-8w7f8"] Nov 29 07:24:39 crc kubenswrapper[4731]: W1129 07:24:39.947465 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6986d025_7080_457e_b2ce_88d8ae965c70.slice/crio-c76f8dcb4c2ebf7335717db1dcc32daea147cbf7d7483d2c6d1788dea7c0c6c1 WatchSource:0}: Error finding container c76f8dcb4c2ebf7335717db1dcc32daea147cbf7d7483d2c6d1788dea7c0c6c1: Status 404 returned error can't find the container with id c76f8dcb4c2ebf7335717db1dcc32daea147cbf7d7483d2c6d1788dea7c0c6c1 Nov 29 07:24:40 crc kubenswrapper[4731]: I1129 07:24:40.260336 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-8w7f8" event={"ID":"6986d025-7080-457e-b2ce-88d8ae965c70","Type":"ContainerStarted","Data":"c76f8dcb4c2ebf7335717db1dcc32daea147cbf7d7483d2c6d1788dea7c0c6c1"} Nov 29 07:24:40 crc kubenswrapper[4731]: I1129 07:24:40.263312 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-w9lrv" 
event={"ID":"38241274-4656-4558-a456-29d74208d47d","Type":"ContainerStarted","Data":"92b0fd459e63132fed618d85200ffeff7c5f359e730f5841f6819240d33f0468"} Nov 29 07:24:40 crc kubenswrapper[4731]: I1129 07:24:40.269907 4731 generic.go:334] "Generic (PLEG): container finished" podID="0453cbcb-48ae-47ee-9a97-e9b4ab7da604" containerID="794cfd49569c4f1c58ed728cf18001d4a59cb2d7d42adc0e4ff2645d03b41421" exitCode=0 Nov 29 07:24:40 crc kubenswrapper[4731]: I1129 07:24:40.270009 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-c061-account-create-update-6d6qj" event={"ID":"0453cbcb-48ae-47ee-9a97-e9b4ab7da604","Type":"ContainerDied","Data":"794cfd49569c4f1c58ed728cf18001d4a59cb2d7d42adc0e4ff2645d03b41421"} Nov 29 07:24:40 crc kubenswrapper[4731]: I1129 07:24:40.270037 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-c061-account-create-update-6d6qj" event={"ID":"0453cbcb-48ae-47ee-9a97-e9b4ab7da604","Type":"ContainerStarted","Data":"ab8cc5a2b74fb45f918c9746fee04c9a4b4fa8a120bc794d1bdffb9997666810"} Nov 29 07:24:40 crc kubenswrapper[4731]: I1129 07:24:40.271745 4731 generic.go:334] "Generic (PLEG): container finished" podID="a7897925-7f61-47ea-b746-185a41fb854d" containerID="099b944ac64398ce6681bb304100e17570bba32c94c5db5f6727a5589a88b1b5" exitCode=0 Nov 29 07:24:40 crc kubenswrapper[4731]: I1129 07:24:40.271800 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-7xszw" event={"ID":"a7897925-7f61-47ea-b746-185a41fb854d","Type":"ContainerDied","Data":"099b944ac64398ce6681bb304100e17570bba32c94c5db5f6727a5589a88b1b5"} Nov 29 07:24:40 crc kubenswrapper[4731]: I1129 07:24:40.271825 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-7xszw" event={"ID":"a7897925-7f61-47ea-b746-185a41fb854d","Type":"ContainerStarted","Data":"dad5bb84cf9e664cada1b8474aa6b1f0e2e85a5fbc1312f380f751fb0feb725b"} Nov 29 07:24:40 crc kubenswrapper[4731]: I1129 
07:24:40.272764 4731 generic.go:334] "Generic (PLEG): container finished" podID="9acf0722-ab53-422c-8835-64a8615ad4e6" containerID="36c2927af8a4f5c9819180a3824e1ff07f85d80a54ea662674351f2aef39604b" exitCode=0 Nov 29 07:24:40 crc kubenswrapper[4731]: I1129 07:24:40.272799 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-8f6e-account-create-update-nl2hh" event={"ID":"9acf0722-ab53-422c-8835-64a8615ad4e6","Type":"ContainerDied","Data":"36c2927af8a4f5c9819180a3824e1ff07f85d80a54ea662674351f2aef39604b"} Nov 29 07:24:40 crc kubenswrapper[4731]: I1129 07:24:40.272813 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-8f6e-account-create-update-nl2hh" event={"ID":"9acf0722-ab53-422c-8835-64a8615ad4e6","Type":"ContainerStarted","Data":"e6c91f3e43928bab8289619b1f71c3012a842d28b2f50b5315971fc60221f94b"} Nov 29 07:24:40 crc kubenswrapper[4731]: I1129 07:24:40.274867 4731 generic.go:334] "Generic (PLEG): container finished" podID="6be24ad5-a68a-41a3-8622-6b9bc69d4943" containerID="a1f7519aa5fffd1e311b44a94c840d3bd3832965c5d81114fd134864f149131b" exitCode=0 Nov 29 07:24:40 crc kubenswrapper[4731]: I1129 07:24:40.274925 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-t7xmj" event={"ID":"6be24ad5-a68a-41a3-8622-6b9bc69d4943","Type":"ContainerDied","Data":"a1f7519aa5fffd1e311b44a94c840d3bd3832965c5d81114fd134864f149131b"} Nov 29 07:24:40 crc kubenswrapper[4731]: I1129 07:24:40.275944 4731 generic.go:334] "Generic (PLEG): container finished" podID="c6701aa7-6736-40e6-aaf2-195fcf43c455" containerID="e9e1c34e8a156b051cbc319b181be92b2048c71c8e97be6d705bb890b58a4f00" exitCode=0 Nov 29 07:24:40 crc kubenswrapper[4731]: I1129 07:24:40.275974 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-x2fsv" event={"ID":"c6701aa7-6736-40e6-aaf2-195fcf43c455","Type":"ContainerDied","Data":"e9e1c34e8a156b051cbc319b181be92b2048c71c8e97be6d705bb890b58a4f00"} Nov 
29 07:24:40 crc kubenswrapper[4731]: I1129 07:24:40.275992 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-x2fsv" event={"ID":"c6701aa7-6736-40e6-aaf2-195fcf43c455","Type":"ContainerStarted","Data":"64e51c8360ff4bdb331c901af045f6e3296618ed0d77ad08ece074a41b556d46"} Nov 29 07:24:40 crc kubenswrapper[4731]: I1129 07:24:40.292808 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-w9lrv" podStartSLOduration=2.586503046 podStartE2EDuration="6.292788363s" podCreationTimestamp="2025-11-29 07:24:34 +0000 UTC" firstStartedPulling="2025-11-29 07:24:35.307690786 +0000 UTC m=+1114.198051889" lastFinishedPulling="2025-11-29 07:24:39.013976083 +0000 UTC m=+1117.904337206" observedRunningTime="2025-11-29 07:24:40.288268591 +0000 UTC m=+1119.178629694" watchObservedRunningTime="2025-11-29 07:24:40.292788363 +0000 UTC m=+1119.183149466" Nov 29 07:24:40 crc kubenswrapper[4731]: I1129 07:24:40.407710 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-t7xmj" Nov 29 07:24:40 crc kubenswrapper[4731]: I1129 07:24:40.457307 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6be24ad5-a68a-41a3-8622-6b9bc69d4943-ovsdbserver-nb\") pod \"6be24ad5-a68a-41a3-8622-6b9bc69d4943\" (UID: \"6be24ad5-a68a-41a3-8622-6b9bc69d4943\") " Nov 29 07:24:40 crc kubenswrapper[4731]: I1129 07:24:40.457493 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6be24ad5-a68a-41a3-8622-6b9bc69d4943-config\") pod \"6be24ad5-a68a-41a3-8622-6b9bc69d4943\" (UID: \"6be24ad5-a68a-41a3-8622-6b9bc69d4943\") " Nov 29 07:24:40 crc kubenswrapper[4731]: I1129 07:24:40.457583 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6be24ad5-a68a-41a3-8622-6b9bc69d4943-ovsdbserver-sb\") pod \"6be24ad5-a68a-41a3-8622-6b9bc69d4943\" (UID: \"6be24ad5-a68a-41a3-8622-6b9bc69d4943\") " Nov 29 07:24:40 crc kubenswrapper[4731]: I1129 07:24:40.457622 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-phcnj\" (UniqueName: \"kubernetes.io/projected/6be24ad5-a68a-41a3-8622-6b9bc69d4943-kube-api-access-phcnj\") pod \"6be24ad5-a68a-41a3-8622-6b9bc69d4943\" (UID: \"6be24ad5-a68a-41a3-8622-6b9bc69d4943\") " Nov 29 07:24:40 crc kubenswrapper[4731]: I1129 07:24:40.457717 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6be24ad5-a68a-41a3-8622-6b9bc69d4943-dns-svc\") pod \"6be24ad5-a68a-41a3-8622-6b9bc69d4943\" (UID: \"6be24ad5-a68a-41a3-8622-6b9bc69d4943\") " Nov 29 07:24:40 crc kubenswrapper[4731]: I1129 07:24:40.465375 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/6be24ad5-a68a-41a3-8622-6b9bc69d4943-kube-api-access-phcnj" (OuterVolumeSpecName: "kube-api-access-phcnj") pod "6be24ad5-a68a-41a3-8622-6b9bc69d4943" (UID: "6be24ad5-a68a-41a3-8622-6b9bc69d4943"). InnerVolumeSpecName "kube-api-access-phcnj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:24:40 crc kubenswrapper[4731]: I1129 07:24:40.505848 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6be24ad5-a68a-41a3-8622-6b9bc69d4943-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6be24ad5-a68a-41a3-8622-6b9bc69d4943" (UID: "6be24ad5-a68a-41a3-8622-6b9bc69d4943"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:24:40 crc kubenswrapper[4731]: I1129 07:24:40.506446 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6be24ad5-a68a-41a3-8622-6b9bc69d4943-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6be24ad5-a68a-41a3-8622-6b9bc69d4943" (UID: "6be24ad5-a68a-41a3-8622-6b9bc69d4943"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:24:40 crc kubenswrapper[4731]: I1129 07:24:40.508240 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6be24ad5-a68a-41a3-8622-6b9bc69d4943-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6be24ad5-a68a-41a3-8622-6b9bc69d4943" (UID: "6be24ad5-a68a-41a3-8622-6b9bc69d4943"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:24:40 crc kubenswrapper[4731]: I1129 07:24:40.509459 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6be24ad5-a68a-41a3-8622-6b9bc69d4943-config" (OuterVolumeSpecName: "config") pod "6be24ad5-a68a-41a3-8622-6b9bc69d4943" (UID: "6be24ad5-a68a-41a3-8622-6b9bc69d4943"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:24:40 crc kubenswrapper[4731]: I1129 07:24:40.560154 4731 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6be24ad5-a68a-41a3-8622-6b9bc69d4943-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 29 07:24:40 crc kubenswrapper[4731]: I1129 07:24:40.560558 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-phcnj\" (UniqueName: \"kubernetes.io/projected/6be24ad5-a68a-41a3-8622-6b9bc69d4943-kube-api-access-phcnj\") on node \"crc\" DevicePath \"\"" Nov 29 07:24:40 crc kubenswrapper[4731]: I1129 07:24:40.560700 4731 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6be24ad5-a68a-41a3-8622-6b9bc69d4943-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 29 07:24:40 crc kubenswrapper[4731]: I1129 07:24:40.560781 4731 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6be24ad5-a68a-41a3-8622-6b9bc69d4943-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 29 07:24:40 crc kubenswrapper[4731]: I1129 07:24:40.560901 4731 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6be24ad5-a68a-41a3-8622-6b9bc69d4943-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:24:41 crc kubenswrapper[4731]: I1129 07:24:41.295479 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-t7xmj" event={"ID":"6be24ad5-a68a-41a3-8622-6b9bc69d4943","Type":"ContainerDied","Data":"4ecaa6423d0ef58ffe1926366a56bca026f7dd021e3822c94853c32394133b08"} Nov 29 07:24:41 crc kubenswrapper[4731]: I1129 07:24:41.295581 4731 scope.go:117] "RemoveContainer" containerID="a1f7519aa5fffd1e311b44a94c840d3bd3832965c5d81114fd134864f149131b" Nov 29 07:24:41 crc kubenswrapper[4731]: I1129 07:24:41.295733 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-t7xmj" Nov 29 07:24:41 crc kubenswrapper[4731]: I1129 07:24:41.347449 4731 scope.go:117] "RemoveContainer" containerID="d2427ceb8ecf7c1a5539267040bd6fbbbcb78aedfc22efcecd34edaf0cded315" Nov 29 07:24:41 crc kubenswrapper[4731]: I1129 07:24:41.359684 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-t7xmj"] Nov 29 07:24:41 crc kubenswrapper[4731]: I1129 07:24:41.372213 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8554648995-t7xmj"] Nov 29 07:24:41 crc kubenswrapper[4731]: I1129 07:24:41.766232 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-7xszw" Nov 29 07:24:41 crc kubenswrapper[4731]: I1129 07:24:41.830209 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6be24ad5-a68a-41a3-8622-6b9bc69d4943" path="/var/lib/kubelet/pods/6be24ad5-a68a-41a3-8622-6b9bc69d4943/volumes" Nov 29 07:24:41 crc kubenswrapper[4731]: I1129 07:24:41.893664 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a7897925-7f61-47ea-b746-185a41fb854d-operator-scripts\") pod \"a7897925-7f61-47ea-b746-185a41fb854d\" (UID: \"a7897925-7f61-47ea-b746-185a41fb854d\") " Nov 29 07:24:41 crc kubenswrapper[4731]: I1129 07:24:41.893769 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bzn82\" (UniqueName: \"kubernetes.io/projected/a7897925-7f61-47ea-b746-185a41fb854d-kube-api-access-bzn82\") pod \"a7897925-7f61-47ea-b746-185a41fb854d\" (UID: \"a7897925-7f61-47ea-b746-185a41fb854d\") " Nov 29 07:24:41 crc kubenswrapper[4731]: I1129 07:24:41.894185 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7897925-7f61-47ea-b746-185a41fb854d-operator-scripts" (OuterVolumeSpecName: 
"operator-scripts") pod "a7897925-7f61-47ea-b746-185a41fb854d" (UID: "a7897925-7f61-47ea-b746-185a41fb854d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:24:41 crc kubenswrapper[4731]: I1129 07:24:41.901763 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7897925-7f61-47ea-b746-185a41fb854d-kube-api-access-bzn82" (OuterVolumeSpecName: "kube-api-access-bzn82") pod "a7897925-7f61-47ea-b746-185a41fb854d" (UID: "a7897925-7f61-47ea-b746-185a41fb854d"). InnerVolumeSpecName "kube-api-access-bzn82". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:24:41 crc kubenswrapper[4731]: I1129 07:24:41.996245 4731 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a7897925-7f61-47ea-b746-185a41fb854d-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:24:41 crc kubenswrapper[4731]: I1129 07:24:41.996702 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bzn82\" (UniqueName: \"kubernetes.io/projected/a7897925-7f61-47ea-b746-185a41fb854d-kube-api-access-bzn82\") on node \"crc\" DevicePath \"\"" Nov 29 07:24:42 crc kubenswrapper[4731]: I1129 07:24:42.000405 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-8f6e-account-create-update-nl2hh" Nov 29 07:24:42 crc kubenswrapper[4731]: I1129 07:24:42.006853 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-x2fsv" Nov 29 07:24:42 crc kubenswrapper[4731]: I1129 07:24:42.013926 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-c061-account-create-update-6d6qj" Nov 29 07:24:42 crc kubenswrapper[4731]: I1129 07:24:42.097853 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6701aa7-6736-40e6-aaf2-195fcf43c455-operator-scripts\") pod \"c6701aa7-6736-40e6-aaf2-195fcf43c455\" (UID: \"c6701aa7-6736-40e6-aaf2-195fcf43c455\") " Nov 29 07:24:42 crc kubenswrapper[4731]: I1129 07:24:42.097907 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ph6ff\" (UniqueName: \"kubernetes.io/projected/0453cbcb-48ae-47ee-9a97-e9b4ab7da604-kube-api-access-ph6ff\") pod \"0453cbcb-48ae-47ee-9a97-e9b4ab7da604\" (UID: \"0453cbcb-48ae-47ee-9a97-e9b4ab7da604\") " Nov 29 07:24:42 crc kubenswrapper[4731]: I1129 07:24:42.098024 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9acf0722-ab53-422c-8835-64a8615ad4e6-operator-scripts\") pod \"9acf0722-ab53-422c-8835-64a8615ad4e6\" (UID: \"9acf0722-ab53-422c-8835-64a8615ad4e6\") " Nov 29 07:24:42 crc kubenswrapper[4731]: I1129 07:24:42.098086 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vnsdr\" (UniqueName: \"kubernetes.io/projected/c6701aa7-6736-40e6-aaf2-195fcf43c455-kube-api-access-vnsdr\") pod \"c6701aa7-6736-40e6-aaf2-195fcf43c455\" (UID: \"c6701aa7-6736-40e6-aaf2-195fcf43c455\") " Nov 29 07:24:42 crc kubenswrapper[4731]: I1129 07:24:42.098162 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qlt2\" (UniqueName: \"kubernetes.io/projected/9acf0722-ab53-422c-8835-64a8615ad4e6-kube-api-access-9qlt2\") pod \"9acf0722-ab53-422c-8835-64a8615ad4e6\" (UID: \"9acf0722-ab53-422c-8835-64a8615ad4e6\") " Nov 29 07:24:42 crc kubenswrapper[4731]: I1129 07:24:42.098194 4731 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0453cbcb-48ae-47ee-9a97-e9b4ab7da604-operator-scripts\") pod \"0453cbcb-48ae-47ee-9a97-e9b4ab7da604\" (UID: \"0453cbcb-48ae-47ee-9a97-e9b4ab7da604\") " Nov 29 07:24:42 crc kubenswrapper[4731]: I1129 07:24:42.099241 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9acf0722-ab53-422c-8835-64a8615ad4e6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9acf0722-ab53-422c-8835-64a8615ad4e6" (UID: "9acf0722-ab53-422c-8835-64a8615ad4e6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:24:42 crc kubenswrapper[4731]: I1129 07:24:42.099273 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0453cbcb-48ae-47ee-9a97-e9b4ab7da604-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0453cbcb-48ae-47ee-9a97-e9b4ab7da604" (UID: "0453cbcb-48ae-47ee-9a97-e9b4ab7da604"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:24:42 crc kubenswrapper[4731]: I1129 07:24:42.099642 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6701aa7-6736-40e6-aaf2-195fcf43c455-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c6701aa7-6736-40e6-aaf2-195fcf43c455" (UID: "c6701aa7-6736-40e6-aaf2-195fcf43c455"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:24:42 crc kubenswrapper[4731]: I1129 07:24:42.102388 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6701aa7-6736-40e6-aaf2-195fcf43c455-kube-api-access-vnsdr" (OuterVolumeSpecName: "kube-api-access-vnsdr") pod "c6701aa7-6736-40e6-aaf2-195fcf43c455" (UID: "c6701aa7-6736-40e6-aaf2-195fcf43c455"). 
InnerVolumeSpecName "kube-api-access-vnsdr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:24:42 crc kubenswrapper[4731]: I1129 07:24:42.102880 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9acf0722-ab53-422c-8835-64a8615ad4e6-kube-api-access-9qlt2" (OuterVolumeSpecName: "kube-api-access-9qlt2") pod "9acf0722-ab53-422c-8835-64a8615ad4e6" (UID: "9acf0722-ab53-422c-8835-64a8615ad4e6"). InnerVolumeSpecName "kube-api-access-9qlt2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:24:42 crc kubenswrapper[4731]: I1129 07:24:42.105146 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0453cbcb-48ae-47ee-9a97-e9b4ab7da604-kube-api-access-ph6ff" (OuterVolumeSpecName: "kube-api-access-ph6ff") pod "0453cbcb-48ae-47ee-9a97-e9b4ab7da604" (UID: "0453cbcb-48ae-47ee-9a97-e9b4ab7da604"). InnerVolumeSpecName "kube-api-access-ph6ff". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:24:42 crc kubenswrapper[4731]: I1129 07:24:42.200702 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9qlt2\" (UniqueName: \"kubernetes.io/projected/9acf0722-ab53-422c-8835-64a8615ad4e6-kube-api-access-9qlt2\") on node \"crc\" DevicePath \"\"" Nov 29 07:24:42 crc kubenswrapper[4731]: I1129 07:24:42.200749 4731 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0453cbcb-48ae-47ee-9a97-e9b4ab7da604-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:24:42 crc kubenswrapper[4731]: I1129 07:24:42.200760 4731 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6701aa7-6736-40e6-aaf2-195fcf43c455-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:24:42 crc kubenswrapper[4731]: I1129 07:24:42.200768 4731 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-ph6ff\" (UniqueName: \"kubernetes.io/projected/0453cbcb-48ae-47ee-9a97-e9b4ab7da604-kube-api-access-ph6ff\") on node \"crc\" DevicePath \"\"" Nov 29 07:24:42 crc kubenswrapper[4731]: I1129 07:24:42.200778 4731 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9acf0722-ab53-422c-8835-64a8615ad4e6-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:24:42 crc kubenswrapper[4731]: I1129 07:24:42.200789 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vnsdr\" (UniqueName: \"kubernetes.io/projected/c6701aa7-6736-40e6-aaf2-195fcf43c455-kube-api-access-vnsdr\") on node \"crc\" DevicePath \"\"" Nov 29 07:24:42 crc kubenswrapper[4731]: I1129 07:24:42.305288 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-c061-account-create-update-6d6qj" event={"ID":"0453cbcb-48ae-47ee-9a97-e9b4ab7da604","Type":"ContainerDied","Data":"ab8cc5a2b74fb45f918c9746fee04c9a4b4fa8a120bc794d1bdffb9997666810"} Nov 29 07:24:42 crc kubenswrapper[4731]: I1129 07:24:42.305332 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ab8cc5a2b74fb45f918c9746fee04c9a4b4fa8a120bc794d1bdffb9997666810" Nov 29 07:24:42 crc kubenswrapper[4731]: I1129 07:24:42.305364 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-c061-account-create-update-6d6qj" Nov 29 07:24:42 crc kubenswrapper[4731]: I1129 07:24:42.306614 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-7xszw" event={"ID":"a7897925-7f61-47ea-b746-185a41fb854d","Type":"ContainerDied","Data":"dad5bb84cf9e664cada1b8474aa6b1f0e2e85a5fbc1312f380f751fb0feb725b"} Nov 29 07:24:42 crc kubenswrapper[4731]: I1129 07:24:42.306638 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dad5bb84cf9e664cada1b8474aa6b1f0e2e85a5fbc1312f380f751fb0feb725b" Nov 29 07:24:42 crc kubenswrapper[4731]: I1129 07:24:42.306703 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-7xszw" Nov 29 07:24:42 crc kubenswrapper[4731]: I1129 07:24:42.319524 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-8f6e-account-create-update-nl2hh" Nov 29 07:24:42 crc kubenswrapper[4731]: I1129 07:24:42.319887 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-8f6e-account-create-update-nl2hh" event={"ID":"9acf0722-ab53-422c-8835-64a8615ad4e6","Type":"ContainerDied","Data":"e6c91f3e43928bab8289619b1f71c3012a842d28b2f50b5315971fc60221f94b"} Nov 29 07:24:42 crc kubenswrapper[4731]: I1129 07:24:42.321208 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e6c91f3e43928bab8289619b1f71c3012a842d28b2f50b5315971fc60221f94b" Nov 29 07:24:42 crc kubenswrapper[4731]: I1129 07:24:42.322549 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-x2fsv" event={"ID":"c6701aa7-6736-40e6-aaf2-195fcf43c455","Type":"ContainerDied","Data":"64e51c8360ff4bdb331c901af045f6e3296618ed0d77ad08ece074a41b556d46"} Nov 29 07:24:42 crc kubenswrapper[4731]: I1129 07:24:42.322606 4731 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="64e51c8360ff4bdb331c901af045f6e3296618ed0d77ad08ece074a41b556d46" Nov 29 07:24:42 crc kubenswrapper[4731]: I1129 07:24:42.322657 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-x2fsv" Nov 29 07:24:42 crc kubenswrapper[4731]: I1129 07:24:42.508974 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Nov 29 07:24:46 crc kubenswrapper[4731]: I1129 07:24:46.470821 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/739c0608-5471-42a6-b062-4355cd1894a0-etc-swift\") pod \"swift-storage-0\" (UID: \"739c0608-5471-42a6-b062-4355cd1894a0\") " pod="openstack/swift-storage-0" Nov 29 07:24:46 crc kubenswrapper[4731]: E1129 07:24:46.471170 4731 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 29 07:24:46 crc kubenswrapper[4731]: E1129 07:24:46.471576 4731 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 29 07:24:46 crc kubenswrapper[4731]: E1129 07:24:46.471678 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/739c0608-5471-42a6-b062-4355cd1894a0-etc-swift podName:739c0608-5471-42a6-b062-4355cd1894a0 nodeName:}" failed. No retries permitted until 2025-11-29 07:25:02.471646685 +0000 UTC m=+1141.362007788 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/739c0608-5471-42a6-b062-4355cd1894a0-etc-swift") pod "swift-storage-0" (UID: "739c0608-5471-42a6-b062-4355cd1894a0") : configmap "swift-ring-files" not found Nov 29 07:24:47 crc kubenswrapper[4731]: I1129 07:24:47.988316 4731 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-hdf9m" podUID="3e584c0b-7ce0-45b8-b6a9-60ee16752970" containerName="ovn-controller" probeResult="failure" output=< Nov 29 07:24:47 crc kubenswrapper[4731]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 29 07:24:47 crc kubenswrapper[4731]: > Nov 29 07:24:48 crc kubenswrapper[4731]: I1129 07:24:48.034880 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-slgbx" Nov 29 07:24:48 crc kubenswrapper[4731]: I1129 07:24:48.050511 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-slgbx" Nov 29 07:24:48 crc kubenswrapper[4731]: I1129 07:24:48.379061 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-hdf9m-config-bxcdh"] Nov 29 07:24:48 crc kubenswrapper[4731]: E1129 07:24:48.379439 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7897925-7f61-47ea-b746-185a41fb854d" containerName="mariadb-database-create" Nov 29 07:24:48 crc kubenswrapper[4731]: I1129 07:24:48.379466 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7897925-7f61-47ea-b746-185a41fb854d" containerName="mariadb-database-create" Nov 29 07:24:48 crc kubenswrapper[4731]: E1129 07:24:48.379493 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9acf0722-ab53-422c-8835-64a8615ad4e6" containerName="mariadb-account-create-update" Nov 29 07:24:48 crc kubenswrapper[4731]: I1129 07:24:48.379499 4731 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="9acf0722-ab53-422c-8835-64a8615ad4e6" containerName="mariadb-account-create-update" Nov 29 07:24:48 crc kubenswrapper[4731]: E1129 07:24:48.379513 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6be24ad5-a68a-41a3-8622-6b9bc69d4943" containerName="init" Nov 29 07:24:48 crc kubenswrapper[4731]: I1129 07:24:48.379520 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="6be24ad5-a68a-41a3-8622-6b9bc69d4943" containerName="init" Nov 29 07:24:48 crc kubenswrapper[4731]: E1129 07:24:48.379538 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6be24ad5-a68a-41a3-8622-6b9bc69d4943" containerName="dnsmasq-dns" Nov 29 07:24:48 crc kubenswrapper[4731]: I1129 07:24:48.379546 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="6be24ad5-a68a-41a3-8622-6b9bc69d4943" containerName="dnsmasq-dns" Nov 29 07:24:48 crc kubenswrapper[4731]: E1129 07:24:48.379553 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0453cbcb-48ae-47ee-9a97-e9b4ab7da604" containerName="mariadb-account-create-update" Nov 29 07:24:48 crc kubenswrapper[4731]: I1129 07:24:48.379559 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="0453cbcb-48ae-47ee-9a97-e9b4ab7da604" containerName="mariadb-account-create-update" Nov 29 07:24:48 crc kubenswrapper[4731]: E1129 07:24:48.379586 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6701aa7-6736-40e6-aaf2-195fcf43c455" containerName="mariadb-database-create" Nov 29 07:24:48 crc kubenswrapper[4731]: I1129 07:24:48.379592 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6701aa7-6736-40e6-aaf2-195fcf43c455" containerName="mariadb-database-create" Nov 29 07:24:48 crc kubenswrapper[4731]: I1129 07:24:48.379770 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="0453cbcb-48ae-47ee-9a97-e9b4ab7da604" containerName="mariadb-account-create-update" Nov 29 07:24:48 crc kubenswrapper[4731]: I1129 07:24:48.379787 4731 
memory_manager.go:354] "RemoveStaleState removing state" podUID="a7897925-7f61-47ea-b746-185a41fb854d" containerName="mariadb-database-create" Nov 29 07:24:48 crc kubenswrapper[4731]: I1129 07:24:48.379798 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6701aa7-6736-40e6-aaf2-195fcf43c455" containerName="mariadb-database-create" Nov 29 07:24:48 crc kubenswrapper[4731]: I1129 07:24:48.379808 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="6be24ad5-a68a-41a3-8622-6b9bc69d4943" containerName="dnsmasq-dns" Nov 29 07:24:48 crc kubenswrapper[4731]: I1129 07:24:48.379818 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="9acf0722-ab53-422c-8835-64a8615ad4e6" containerName="mariadb-account-create-update" Nov 29 07:24:48 crc kubenswrapper[4731]: I1129 07:24:48.380457 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-hdf9m-config-bxcdh" Nov 29 07:24:48 crc kubenswrapper[4731]: I1129 07:24:48.385830 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Nov 29 07:24:48 crc kubenswrapper[4731]: I1129 07:24:48.422635 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-hdf9m-config-bxcdh"] Nov 29 07:24:48 crc kubenswrapper[4731]: I1129 07:24:48.521534 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ca6f1ec3-5f0c-44e0-a722-926c25e87aab-additional-scripts\") pod \"ovn-controller-hdf9m-config-bxcdh\" (UID: \"ca6f1ec3-5f0c-44e0-a722-926c25e87aab\") " pod="openstack/ovn-controller-hdf9m-config-bxcdh" Nov 29 07:24:48 crc kubenswrapper[4731]: I1129 07:24:48.521621 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ca6f1ec3-5f0c-44e0-a722-926c25e87aab-scripts\") pod 
\"ovn-controller-hdf9m-config-bxcdh\" (UID: \"ca6f1ec3-5f0c-44e0-a722-926c25e87aab\") " pod="openstack/ovn-controller-hdf9m-config-bxcdh" Nov 29 07:24:48 crc kubenswrapper[4731]: I1129 07:24:48.521725 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ca6f1ec3-5f0c-44e0-a722-926c25e87aab-var-run\") pod \"ovn-controller-hdf9m-config-bxcdh\" (UID: \"ca6f1ec3-5f0c-44e0-a722-926c25e87aab\") " pod="openstack/ovn-controller-hdf9m-config-bxcdh" Nov 29 07:24:48 crc kubenswrapper[4731]: I1129 07:24:48.521750 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ca6f1ec3-5f0c-44e0-a722-926c25e87aab-var-log-ovn\") pod \"ovn-controller-hdf9m-config-bxcdh\" (UID: \"ca6f1ec3-5f0c-44e0-a722-926c25e87aab\") " pod="openstack/ovn-controller-hdf9m-config-bxcdh" Nov 29 07:24:48 crc kubenswrapper[4731]: I1129 07:24:48.521770 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfnjq\" (UniqueName: \"kubernetes.io/projected/ca6f1ec3-5f0c-44e0-a722-926c25e87aab-kube-api-access-cfnjq\") pod \"ovn-controller-hdf9m-config-bxcdh\" (UID: \"ca6f1ec3-5f0c-44e0-a722-926c25e87aab\") " pod="openstack/ovn-controller-hdf9m-config-bxcdh" Nov 29 07:24:48 crc kubenswrapper[4731]: I1129 07:24:48.521809 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ca6f1ec3-5f0c-44e0-a722-926c25e87aab-var-run-ovn\") pod \"ovn-controller-hdf9m-config-bxcdh\" (UID: \"ca6f1ec3-5f0c-44e0-a722-926c25e87aab\") " pod="openstack/ovn-controller-hdf9m-config-bxcdh" Nov 29 07:24:48 crc kubenswrapper[4731]: I1129 07:24:48.623212 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: 
\"kubernetes.io/configmap/ca6f1ec3-5f0c-44e0-a722-926c25e87aab-additional-scripts\") pod \"ovn-controller-hdf9m-config-bxcdh\" (UID: \"ca6f1ec3-5f0c-44e0-a722-926c25e87aab\") " pod="openstack/ovn-controller-hdf9m-config-bxcdh" Nov 29 07:24:48 crc kubenswrapper[4731]: I1129 07:24:48.623284 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ca6f1ec3-5f0c-44e0-a722-926c25e87aab-scripts\") pod \"ovn-controller-hdf9m-config-bxcdh\" (UID: \"ca6f1ec3-5f0c-44e0-a722-926c25e87aab\") " pod="openstack/ovn-controller-hdf9m-config-bxcdh" Nov 29 07:24:48 crc kubenswrapper[4731]: I1129 07:24:48.623363 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ca6f1ec3-5f0c-44e0-a722-926c25e87aab-var-run\") pod \"ovn-controller-hdf9m-config-bxcdh\" (UID: \"ca6f1ec3-5f0c-44e0-a722-926c25e87aab\") " pod="openstack/ovn-controller-hdf9m-config-bxcdh" Nov 29 07:24:48 crc kubenswrapper[4731]: I1129 07:24:48.623386 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ca6f1ec3-5f0c-44e0-a722-926c25e87aab-var-log-ovn\") pod \"ovn-controller-hdf9m-config-bxcdh\" (UID: \"ca6f1ec3-5f0c-44e0-a722-926c25e87aab\") " pod="openstack/ovn-controller-hdf9m-config-bxcdh" Nov 29 07:24:48 crc kubenswrapper[4731]: I1129 07:24:48.623408 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfnjq\" (UniqueName: \"kubernetes.io/projected/ca6f1ec3-5f0c-44e0-a722-926c25e87aab-kube-api-access-cfnjq\") pod \"ovn-controller-hdf9m-config-bxcdh\" (UID: \"ca6f1ec3-5f0c-44e0-a722-926c25e87aab\") " pod="openstack/ovn-controller-hdf9m-config-bxcdh" Nov 29 07:24:48 crc kubenswrapper[4731]: I1129 07:24:48.623457 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/ca6f1ec3-5f0c-44e0-a722-926c25e87aab-var-run-ovn\") pod \"ovn-controller-hdf9m-config-bxcdh\" (UID: \"ca6f1ec3-5f0c-44e0-a722-926c25e87aab\") " pod="openstack/ovn-controller-hdf9m-config-bxcdh" Nov 29 07:24:48 crc kubenswrapper[4731]: I1129 07:24:48.623903 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ca6f1ec3-5f0c-44e0-a722-926c25e87aab-var-run-ovn\") pod \"ovn-controller-hdf9m-config-bxcdh\" (UID: \"ca6f1ec3-5f0c-44e0-a722-926c25e87aab\") " pod="openstack/ovn-controller-hdf9m-config-bxcdh" Nov 29 07:24:48 crc kubenswrapper[4731]: I1129 07:24:48.624792 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ca6f1ec3-5f0c-44e0-a722-926c25e87aab-var-run\") pod \"ovn-controller-hdf9m-config-bxcdh\" (UID: \"ca6f1ec3-5f0c-44e0-a722-926c25e87aab\") " pod="openstack/ovn-controller-hdf9m-config-bxcdh" Nov 29 07:24:48 crc kubenswrapper[4731]: I1129 07:24:48.624830 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ca6f1ec3-5f0c-44e0-a722-926c25e87aab-additional-scripts\") pod \"ovn-controller-hdf9m-config-bxcdh\" (UID: \"ca6f1ec3-5f0c-44e0-a722-926c25e87aab\") " pod="openstack/ovn-controller-hdf9m-config-bxcdh" Nov 29 07:24:48 crc kubenswrapper[4731]: I1129 07:24:48.624896 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ca6f1ec3-5f0c-44e0-a722-926c25e87aab-var-log-ovn\") pod \"ovn-controller-hdf9m-config-bxcdh\" (UID: \"ca6f1ec3-5f0c-44e0-a722-926c25e87aab\") " pod="openstack/ovn-controller-hdf9m-config-bxcdh" Nov 29 07:24:48 crc kubenswrapper[4731]: I1129 07:24:48.626792 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/ca6f1ec3-5f0c-44e0-a722-926c25e87aab-scripts\") pod \"ovn-controller-hdf9m-config-bxcdh\" (UID: \"ca6f1ec3-5f0c-44e0-a722-926c25e87aab\") " pod="openstack/ovn-controller-hdf9m-config-bxcdh" Nov 29 07:24:48 crc kubenswrapper[4731]: I1129 07:24:48.653433 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfnjq\" (UniqueName: \"kubernetes.io/projected/ca6f1ec3-5f0c-44e0-a722-926c25e87aab-kube-api-access-cfnjq\") pod \"ovn-controller-hdf9m-config-bxcdh\" (UID: \"ca6f1ec3-5f0c-44e0-a722-926c25e87aab\") " pod="openstack/ovn-controller-hdf9m-config-bxcdh" Nov 29 07:24:48 crc kubenswrapper[4731]: I1129 07:24:48.724376 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-hdf9m-config-bxcdh" Nov 29 07:24:49 crc kubenswrapper[4731]: I1129 07:24:49.402778 4731 generic.go:334] "Generic (PLEG): container finished" podID="d7971e0f-0e23-4782-9766-4841f04ac1e7" containerID="f40118db8ab07db8de5595473f72aed1dea64c65ae58bf29725a18caee3c64bc" exitCode=0 Nov 29 07:24:49 crc kubenswrapper[4731]: I1129 07:24:49.402856 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d7971e0f-0e23-4782-9766-4841f04ac1e7","Type":"ContainerDied","Data":"f40118db8ab07db8de5595473f72aed1dea64c65ae58bf29725a18caee3c64bc"} Nov 29 07:24:49 crc kubenswrapper[4731]: I1129 07:24:49.416964 4731 generic.go:334] "Generic (PLEG): container finished" podID="ff2928d9-150f-4305-a1bd-6a87ee7b40cc" containerID="0f1cca498c8ac89e448453e329b710b354c3bc57f22d4761166594662706c6f4" exitCode=0 Nov 29 07:24:49 crc kubenswrapper[4731]: I1129 07:24:49.417053 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ff2928d9-150f-4305-a1bd-6a87ee7b40cc","Type":"ContainerDied","Data":"0f1cca498c8ac89e448453e329b710b354c3bc57f22d4761166594662706c6f4"} Nov 29 07:24:49 crc kubenswrapper[4731]: I1129 07:24:49.421490 4731 
generic.go:334] "Generic (PLEG): container finished" podID="38241274-4656-4558-a456-29d74208d47d" containerID="92b0fd459e63132fed618d85200ffeff7c5f359e730f5841f6819240d33f0468" exitCode=0 Nov 29 07:24:49 crc kubenswrapper[4731]: I1129 07:24:49.421583 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-w9lrv" event={"ID":"38241274-4656-4558-a456-29d74208d47d","Type":"ContainerDied","Data":"92b0fd459e63132fed618d85200ffeff7c5f359e730f5841f6819240d33f0468"} Nov 29 07:24:52 crc kubenswrapper[4731]: I1129 07:24:52.985405 4731 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-hdf9m" podUID="3e584c0b-7ce0-45b8-b6a9-60ee16752970" containerName="ovn-controller" probeResult="failure" output=< Nov 29 07:24:52 crc kubenswrapper[4731]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 29 07:24:52 crc kubenswrapper[4731]: > Nov 29 07:24:54 crc kubenswrapper[4731]: I1129 07:24:54.367719 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-w9lrv" Nov 29 07:24:54 crc kubenswrapper[4731]: I1129 07:24:54.444976 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38241274-4656-4558-a456-29d74208d47d-combined-ca-bundle\") pod \"38241274-4656-4558-a456-29d74208d47d\" (UID: \"38241274-4656-4558-a456-29d74208d47d\") " Nov 29 07:24:54 crc kubenswrapper[4731]: I1129 07:24:54.445066 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/38241274-4656-4558-a456-29d74208d47d-etc-swift\") pod \"38241274-4656-4558-a456-29d74208d47d\" (UID: \"38241274-4656-4558-a456-29d74208d47d\") " Nov 29 07:24:54 crc kubenswrapper[4731]: I1129 07:24:54.446304 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/38241274-4656-4558-a456-29d74208d47d-scripts\") pod \"38241274-4656-4558-a456-29d74208d47d\" (UID: \"38241274-4656-4558-a456-29d74208d47d\") " Nov 29 07:24:54 crc kubenswrapper[4731]: I1129 07:24:54.446374 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/38241274-4656-4558-a456-29d74208d47d-dispersionconf\") pod \"38241274-4656-4558-a456-29d74208d47d\" (UID: \"38241274-4656-4558-a456-29d74208d47d\") " Nov 29 07:24:54 crc kubenswrapper[4731]: I1129 07:24:54.446504 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/38241274-4656-4558-a456-29d74208d47d-swiftconf\") pod \"38241274-4656-4558-a456-29d74208d47d\" (UID: \"38241274-4656-4558-a456-29d74208d47d\") " Nov 29 07:24:54 crc kubenswrapper[4731]: I1129 07:24:54.446601 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: 
\"kubernetes.io/configmap/38241274-4656-4558-a456-29d74208d47d-ring-data-devices\") pod \"38241274-4656-4558-a456-29d74208d47d\" (UID: \"38241274-4656-4558-a456-29d74208d47d\") " Nov 29 07:24:54 crc kubenswrapper[4731]: I1129 07:24:54.446672 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tkkm\" (UniqueName: \"kubernetes.io/projected/38241274-4656-4558-a456-29d74208d47d-kube-api-access-8tkkm\") pod \"38241274-4656-4558-a456-29d74208d47d\" (UID: \"38241274-4656-4558-a456-29d74208d47d\") " Nov 29 07:24:54 crc kubenswrapper[4731]: I1129 07:24:54.447751 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38241274-4656-4558-a456-29d74208d47d-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "38241274-4656-4558-a456-29d74208d47d" (UID: "38241274-4656-4558-a456-29d74208d47d"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:24:54 crc kubenswrapper[4731]: I1129 07:24:54.448530 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38241274-4656-4558-a456-29d74208d47d-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "38241274-4656-4558-a456-29d74208d47d" (UID: "38241274-4656-4558-a456-29d74208d47d"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:24:54 crc kubenswrapper[4731]: I1129 07:24:54.455134 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38241274-4656-4558-a456-29d74208d47d-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "38241274-4656-4558-a456-29d74208d47d" (UID: "38241274-4656-4558-a456-29d74208d47d"). InnerVolumeSpecName "dispersionconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:24:54 crc kubenswrapper[4731]: I1129 07:24:54.467818 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38241274-4656-4558-a456-29d74208d47d-kube-api-access-8tkkm" (OuterVolumeSpecName: "kube-api-access-8tkkm") pod "38241274-4656-4558-a456-29d74208d47d" (UID: "38241274-4656-4558-a456-29d74208d47d"). InnerVolumeSpecName "kube-api-access-8tkkm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:24:54 crc kubenswrapper[4731]: I1129 07:24:54.475030 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38241274-4656-4558-a456-29d74208d47d-scripts" (OuterVolumeSpecName: "scripts") pod "38241274-4656-4558-a456-29d74208d47d" (UID: "38241274-4656-4558-a456-29d74208d47d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:24:54 crc kubenswrapper[4731]: I1129 07:24:54.478671 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38241274-4656-4558-a456-29d74208d47d-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "38241274-4656-4558-a456-29d74208d47d" (UID: "38241274-4656-4558-a456-29d74208d47d"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:24:54 crc kubenswrapper[4731]: I1129 07:24:54.480798 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38241274-4656-4558-a456-29d74208d47d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "38241274-4656-4558-a456-29d74208d47d" (UID: "38241274-4656-4558-a456-29d74208d47d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:24:54 crc kubenswrapper[4731]: I1129 07:24:54.484873 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d7971e0f-0e23-4782-9766-4841f04ac1e7","Type":"ContainerStarted","Data":"10f29ddabb4a1ac08fdc4c893d847b076f8ee7d953330eecd5af7042855d069e"} Nov 29 07:24:54 crc kubenswrapper[4731]: I1129 07:24:54.491232 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-w9lrv" event={"ID":"38241274-4656-4558-a456-29d74208d47d","Type":"ContainerDied","Data":"10ac3b520e6f66f5823d0090c86bf55579f785ecdc596f2f345a6bbd82007c50"} Nov 29 07:24:54 crc kubenswrapper[4731]: I1129 07:24:54.491269 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="10ac3b520e6f66f5823d0090c86bf55579f785ecdc596f2f345a6bbd82007c50" Nov 29 07:24:54 crc kubenswrapper[4731]: I1129 07:24:54.491321 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-w9lrv" Nov 29 07:24:54 crc kubenswrapper[4731]: I1129 07:24:54.549137 4731 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38241274-4656-4558-a456-29d74208d47d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:24:54 crc kubenswrapper[4731]: I1129 07:24:54.549210 4731 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/38241274-4656-4558-a456-29d74208d47d-etc-swift\") on node \"crc\" DevicePath \"\"" Nov 29 07:24:54 crc kubenswrapper[4731]: I1129 07:24:54.549221 4731 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/38241274-4656-4558-a456-29d74208d47d-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:24:54 crc kubenswrapper[4731]: I1129 07:24:54.549232 4731 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/38241274-4656-4558-a456-29d74208d47d-dispersionconf\") on node \"crc\" DevicePath \"\"" Nov 29 07:24:54 crc kubenswrapper[4731]: I1129 07:24:54.549244 4731 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/38241274-4656-4558-a456-29d74208d47d-swiftconf\") on node \"crc\" DevicePath \"\"" Nov 29 07:24:54 crc kubenswrapper[4731]: I1129 07:24:54.549253 4731 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/38241274-4656-4558-a456-29d74208d47d-ring-data-devices\") on node \"crc\" DevicePath \"\"" Nov 29 07:24:54 crc kubenswrapper[4731]: I1129 07:24:54.549263 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tkkm\" (UniqueName: \"kubernetes.io/projected/38241274-4656-4558-a456-29d74208d47d-kube-api-access-8tkkm\") on node \"crc\" DevicePath \"\"" Nov 29 07:24:54 crc kubenswrapper[4731]: I1129 07:24:54.664337 4731 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-hdf9m-config-bxcdh"] Nov 29 07:24:55 crc kubenswrapper[4731]: I1129 07:24:55.501111 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ff2928d9-150f-4305-a1bd-6a87ee7b40cc","Type":"ContainerStarted","Data":"7e2ba846ad51505dc6ba0bfc9ca7a0dc9ead93b752b9a87034b3d025201e802a"} Nov 29 07:24:55 crc kubenswrapper[4731]: I1129 07:24:55.502181 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Nov 29 07:24:55 crc kubenswrapper[4731]: I1129 07:24:55.503418 4731 generic.go:334] "Generic (PLEG): container finished" podID="ca6f1ec3-5f0c-44e0-a722-926c25e87aab" containerID="406a1359ce3449e0fe1e4b20cd550dece14930b916b4660bf5490ea89ca993ee" exitCode=0 Nov 29 07:24:55 crc kubenswrapper[4731]: I1129 07:24:55.503486 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-hdf9m-config-bxcdh" event={"ID":"ca6f1ec3-5f0c-44e0-a722-926c25e87aab","Type":"ContainerDied","Data":"406a1359ce3449e0fe1e4b20cd550dece14930b916b4660bf5490ea89ca993ee"} Nov 29 07:24:55 crc kubenswrapper[4731]: I1129 07:24:55.503581 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-hdf9m-config-bxcdh" event={"ID":"ca6f1ec3-5f0c-44e0-a722-926c25e87aab","Type":"ContainerStarted","Data":"6937f3fcedda6ab9fe32f820aff3cbfb1d73995f30cddab87508e1757935f249"} Nov 29 07:24:55 crc kubenswrapper[4731]: I1129 07:24:55.505065 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-8w7f8" event={"ID":"6986d025-7080-457e-b2ce-88d8ae965c70","Type":"ContainerStarted","Data":"7c043bf1c4856c80b5a659661e3f100f1300b9fc6a0d697d71be6985fbb51be4"} Nov 29 07:24:55 crc kubenswrapper[4731]: I1129 07:24:55.505210 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:24:55 crc kubenswrapper[4731]: I1129 
07:24:55.531190 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=63.87395498 podStartE2EDuration="1m13.531164583s" podCreationTimestamp="2025-11-29 07:23:42 +0000 UTC" firstStartedPulling="2025-11-29 07:24:05.557107907 +0000 UTC m=+1084.447469010" lastFinishedPulling="2025-11-29 07:24:15.21431751 +0000 UTC m=+1094.104678613" observedRunningTime="2025-11-29 07:24:55.52526494 +0000 UTC m=+1134.415626043" watchObservedRunningTime="2025-11-29 07:24:55.531164583 +0000 UTC m=+1134.421525676" Nov 29 07:24:55 crc kubenswrapper[4731]: I1129 07:24:55.564490 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-8w7f8" podStartSLOduration=4.284743915 podStartE2EDuration="18.564468815s" podCreationTimestamp="2025-11-29 07:24:37 +0000 UTC" firstStartedPulling="2025-11-29 07:24:39.952917807 +0000 UTC m=+1118.843278910" lastFinishedPulling="2025-11-29 07:24:54.232642707 +0000 UTC m=+1133.123003810" observedRunningTime="2025-11-29 07:24:55.562110076 +0000 UTC m=+1134.452471179" watchObservedRunningTime="2025-11-29 07:24:55.564468815 +0000 UTC m=+1134.454829918" Nov 29 07:24:55 crc kubenswrapper[4731]: I1129 07:24:55.598672 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=63.415677642 podStartE2EDuration="1m12.598649064s" podCreationTimestamp="2025-11-29 07:23:43 +0000 UTC" firstStartedPulling="2025-11-29 07:24:05.603948055 +0000 UTC m=+1084.494309158" lastFinishedPulling="2025-11-29 07:24:14.786919487 +0000 UTC m=+1093.677280580" observedRunningTime="2025-11-29 07:24:55.590252408 +0000 UTC m=+1134.480613531" watchObservedRunningTime="2025-11-29 07:24:55.598649064 +0000 UTC m=+1134.489010167" Nov 29 07:24:56 crc kubenswrapper[4731]: I1129 07:24:56.867063 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-hdf9m-config-bxcdh" Nov 29 07:24:56 crc kubenswrapper[4731]: I1129 07:24:56.894672 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ca6f1ec3-5f0c-44e0-a722-926c25e87aab-additional-scripts\") pod \"ca6f1ec3-5f0c-44e0-a722-926c25e87aab\" (UID: \"ca6f1ec3-5f0c-44e0-a722-926c25e87aab\") " Nov 29 07:24:56 crc kubenswrapper[4731]: I1129 07:24:56.894819 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ca6f1ec3-5f0c-44e0-a722-926c25e87aab-var-log-ovn\") pod \"ca6f1ec3-5f0c-44e0-a722-926c25e87aab\" (UID: \"ca6f1ec3-5f0c-44e0-a722-926c25e87aab\") " Nov 29 07:24:56 crc kubenswrapper[4731]: I1129 07:24:56.894896 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ca6f1ec3-5f0c-44e0-a722-926c25e87aab-var-run-ovn\") pod \"ca6f1ec3-5f0c-44e0-a722-926c25e87aab\" (UID: \"ca6f1ec3-5f0c-44e0-a722-926c25e87aab\") " Nov 29 07:24:56 crc kubenswrapper[4731]: I1129 07:24:56.894962 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ca6f1ec3-5f0c-44e0-a722-926c25e87aab-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "ca6f1ec3-5f0c-44e0-a722-926c25e87aab" (UID: "ca6f1ec3-5f0c-44e0-a722-926c25e87aab"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:24:56 crc kubenswrapper[4731]: I1129 07:24:56.895008 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ca6f1ec3-5f0c-44e0-a722-926c25e87aab-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "ca6f1ec3-5f0c-44e0-a722-926c25e87aab" (UID: "ca6f1ec3-5f0c-44e0-a722-926c25e87aab"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:24:56 crc kubenswrapper[4731]: I1129 07:24:56.895053 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfnjq\" (UniqueName: \"kubernetes.io/projected/ca6f1ec3-5f0c-44e0-a722-926c25e87aab-kube-api-access-cfnjq\") pod \"ca6f1ec3-5f0c-44e0-a722-926c25e87aab\" (UID: \"ca6f1ec3-5f0c-44e0-a722-926c25e87aab\") " Nov 29 07:24:56 crc kubenswrapper[4731]: I1129 07:24:56.895081 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ca6f1ec3-5f0c-44e0-a722-926c25e87aab-var-run\") pod \"ca6f1ec3-5f0c-44e0-a722-926c25e87aab\" (UID: \"ca6f1ec3-5f0c-44e0-a722-926c25e87aab\") " Nov 29 07:24:56 crc kubenswrapper[4731]: I1129 07:24:56.895127 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ca6f1ec3-5f0c-44e0-a722-926c25e87aab-scripts\") pod \"ca6f1ec3-5f0c-44e0-a722-926c25e87aab\" (UID: \"ca6f1ec3-5f0c-44e0-a722-926c25e87aab\") " Nov 29 07:24:56 crc kubenswrapper[4731]: I1129 07:24:56.895152 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ca6f1ec3-5f0c-44e0-a722-926c25e87aab-var-run" (OuterVolumeSpecName: "var-run") pod "ca6f1ec3-5f0c-44e0-a722-926c25e87aab" (UID: "ca6f1ec3-5f0c-44e0-a722-926c25e87aab"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:24:56 crc kubenswrapper[4731]: I1129 07:24:56.895798 4731 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ca6f1ec3-5f0c-44e0-a722-926c25e87aab-var-log-ovn\") on node \"crc\" DevicePath \"\"" Nov 29 07:24:56 crc kubenswrapper[4731]: I1129 07:24:56.895824 4731 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ca6f1ec3-5f0c-44e0-a722-926c25e87aab-var-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 29 07:24:56 crc kubenswrapper[4731]: I1129 07:24:56.895836 4731 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ca6f1ec3-5f0c-44e0-a722-926c25e87aab-var-run\") on node \"crc\" DevicePath \"\"" Nov 29 07:24:56 crc kubenswrapper[4731]: I1129 07:24:56.896736 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca6f1ec3-5f0c-44e0-a722-926c25e87aab-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "ca6f1ec3-5f0c-44e0-a722-926c25e87aab" (UID: "ca6f1ec3-5f0c-44e0-a722-926c25e87aab"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:24:56 crc kubenswrapper[4731]: I1129 07:24:56.896918 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca6f1ec3-5f0c-44e0-a722-926c25e87aab-scripts" (OuterVolumeSpecName: "scripts") pod "ca6f1ec3-5f0c-44e0-a722-926c25e87aab" (UID: "ca6f1ec3-5f0c-44e0-a722-926c25e87aab"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:24:56 crc kubenswrapper[4731]: I1129 07:24:56.902900 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca6f1ec3-5f0c-44e0-a722-926c25e87aab-kube-api-access-cfnjq" (OuterVolumeSpecName: "kube-api-access-cfnjq") pod "ca6f1ec3-5f0c-44e0-a722-926c25e87aab" (UID: "ca6f1ec3-5f0c-44e0-a722-926c25e87aab"). InnerVolumeSpecName "kube-api-access-cfnjq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:24:56 crc kubenswrapper[4731]: I1129 07:24:56.997697 4731 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ca6f1ec3-5f0c-44e0-a722-926c25e87aab-additional-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:24:56 crc kubenswrapper[4731]: I1129 07:24:56.997750 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfnjq\" (UniqueName: \"kubernetes.io/projected/ca6f1ec3-5f0c-44e0-a722-926c25e87aab-kube-api-access-cfnjq\") on node \"crc\" DevicePath \"\"" Nov 29 07:24:56 crc kubenswrapper[4731]: I1129 07:24:56.997767 4731 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ca6f1ec3-5f0c-44e0-a722-926c25e87aab-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:24:57 crc kubenswrapper[4731]: I1129 07:24:57.529725 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-hdf9m-config-bxcdh" event={"ID":"ca6f1ec3-5f0c-44e0-a722-926c25e87aab","Type":"ContainerDied","Data":"6937f3fcedda6ab9fe32f820aff3cbfb1d73995f30cddab87508e1757935f249"} Nov 29 07:24:57 crc kubenswrapper[4731]: I1129 07:24:57.529769 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6937f3fcedda6ab9fe32f820aff3cbfb1d73995f30cddab87508e1757935f249" Nov 29 07:24:57 crc kubenswrapper[4731]: I1129 07:24:57.529781 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-hdf9m-config-bxcdh" Nov 29 07:24:58 crc kubenswrapper[4731]: I1129 07:24:58.005728 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-hdf9m-config-bxcdh"] Nov 29 07:24:58 crc kubenswrapper[4731]: I1129 07:24:58.013761 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-hdf9m-config-bxcdh"] Nov 29 07:24:58 crc kubenswrapper[4731]: I1129 07:24:58.020796 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-hdf9m" Nov 29 07:24:58 crc kubenswrapper[4731]: I1129 07:24:58.124519 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-hdf9m-config-nk49w"] Nov 29 07:24:58 crc kubenswrapper[4731]: E1129 07:24:58.125378 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca6f1ec3-5f0c-44e0-a722-926c25e87aab" containerName="ovn-config" Nov 29 07:24:58 crc kubenswrapper[4731]: I1129 07:24:58.125403 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca6f1ec3-5f0c-44e0-a722-926c25e87aab" containerName="ovn-config" Nov 29 07:24:58 crc kubenswrapper[4731]: E1129 07:24:58.125439 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38241274-4656-4558-a456-29d74208d47d" containerName="swift-ring-rebalance" Nov 29 07:24:58 crc kubenswrapper[4731]: I1129 07:24:58.125448 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="38241274-4656-4558-a456-29d74208d47d" containerName="swift-ring-rebalance" Nov 29 07:24:58 crc kubenswrapper[4731]: I1129 07:24:58.125897 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca6f1ec3-5f0c-44e0-a722-926c25e87aab" containerName="ovn-config" Nov 29 07:24:58 crc kubenswrapper[4731]: I1129 07:24:58.125943 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="38241274-4656-4558-a456-29d74208d47d" containerName="swift-ring-rebalance" Nov 29 07:24:58 crc kubenswrapper[4731]: I1129 
07:24:58.126987 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-hdf9m-config-nk49w" Nov 29 07:24:58 crc kubenswrapper[4731]: I1129 07:24:58.134261 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Nov 29 07:24:58 crc kubenswrapper[4731]: I1129 07:24:58.149644 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-hdf9m-config-nk49w"] Nov 29 07:24:58 crc kubenswrapper[4731]: I1129 07:24:58.224178 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/cd73fd6e-60fa-4da3-9fa4-6ae80be19615-additional-scripts\") pod \"ovn-controller-hdf9m-config-nk49w\" (UID: \"cd73fd6e-60fa-4da3-9fa4-6ae80be19615\") " pod="openstack/ovn-controller-hdf9m-config-nk49w" Nov 29 07:24:58 crc kubenswrapper[4731]: I1129 07:24:58.224253 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/cd73fd6e-60fa-4da3-9fa4-6ae80be19615-var-log-ovn\") pod \"ovn-controller-hdf9m-config-nk49w\" (UID: \"cd73fd6e-60fa-4da3-9fa4-6ae80be19615\") " pod="openstack/ovn-controller-hdf9m-config-nk49w" Nov 29 07:24:58 crc kubenswrapper[4731]: I1129 07:24:58.224338 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/cd73fd6e-60fa-4da3-9fa4-6ae80be19615-var-run-ovn\") pod \"ovn-controller-hdf9m-config-nk49w\" (UID: \"cd73fd6e-60fa-4da3-9fa4-6ae80be19615\") " pod="openstack/ovn-controller-hdf9m-config-nk49w" Nov 29 07:24:58 crc kubenswrapper[4731]: I1129 07:24:58.224416 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cd73fd6e-60fa-4da3-9fa4-6ae80be19615-scripts\") 
pod \"ovn-controller-hdf9m-config-nk49w\" (UID: \"cd73fd6e-60fa-4da3-9fa4-6ae80be19615\") " pod="openstack/ovn-controller-hdf9m-config-nk49w" Nov 29 07:24:58 crc kubenswrapper[4731]: I1129 07:24:58.224452 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/cd73fd6e-60fa-4da3-9fa4-6ae80be19615-var-run\") pod \"ovn-controller-hdf9m-config-nk49w\" (UID: \"cd73fd6e-60fa-4da3-9fa4-6ae80be19615\") " pod="openstack/ovn-controller-hdf9m-config-nk49w" Nov 29 07:24:58 crc kubenswrapper[4731]: I1129 07:24:58.224497 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4w5t9\" (UniqueName: \"kubernetes.io/projected/cd73fd6e-60fa-4da3-9fa4-6ae80be19615-kube-api-access-4w5t9\") pod \"ovn-controller-hdf9m-config-nk49w\" (UID: \"cd73fd6e-60fa-4da3-9fa4-6ae80be19615\") " pod="openstack/ovn-controller-hdf9m-config-nk49w" Nov 29 07:24:58 crc kubenswrapper[4731]: I1129 07:24:58.325732 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cd73fd6e-60fa-4da3-9fa4-6ae80be19615-scripts\") pod \"ovn-controller-hdf9m-config-nk49w\" (UID: \"cd73fd6e-60fa-4da3-9fa4-6ae80be19615\") " pod="openstack/ovn-controller-hdf9m-config-nk49w" Nov 29 07:24:58 crc kubenswrapper[4731]: I1129 07:24:58.326011 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/cd73fd6e-60fa-4da3-9fa4-6ae80be19615-var-run\") pod \"ovn-controller-hdf9m-config-nk49w\" (UID: \"cd73fd6e-60fa-4da3-9fa4-6ae80be19615\") " pod="openstack/ovn-controller-hdf9m-config-nk49w" Nov 29 07:24:58 crc kubenswrapper[4731]: I1129 07:24:58.326137 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4w5t9\" (UniqueName: 
\"kubernetes.io/projected/cd73fd6e-60fa-4da3-9fa4-6ae80be19615-kube-api-access-4w5t9\") pod \"ovn-controller-hdf9m-config-nk49w\" (UID: \"cd73fd6e-60fa-4da3-9fa4-6ae80be19615\") " pod="openstack/ovn-controller-hdf9m-config-nk49w" Nov 29 07:24:58 crc kubenswrapper[4731]: I1129 07:24:58.326261 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/cd73fd6e-60fa-4da3-9fa4-6ae80be19615-additional-scripts\") pod \"ovn-controller-hdf9m-config-nk49w\" (UID: \"cd73fd6e-60fa-4da3-9fa4-6ae80be19615\") " pod="openstack/ovn-controller-hdf9m-config-nk49w" Nov 29 07:24:58 crc kubenswrapper[4731]: I1129 07:24:58.326354 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/cd73fd6e-60fa-4da3-9fa4-6ae80be19615-var-log-ovn\") pod \"ovn-controller-hdf9m-config-nk49w\" (UID: \"cd73fd6e-60fa-4da3-9fa4-6ae80be19615\") " pod="openstack/ovn-controller-hdf9m-config-nk49w" Nov 29 07:24:58 crc kubenswrapper[4731]: I1129 07:24:58.326451 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/cd73fd6e-60fa-4da3-9fa4-6ae80be19615-var-run-ovn\") pod \"ovn-controller-hdf9m-config-nk49w\" (UID: \"cd73fd6e-60fa-4da3-9fa4-6ae80be19615\") " pod="openstack/ovn-controller-hdf9m-config-nk49w" Nov 29 07:24:58 crc kubenswrapper[4731]: I1129 07:24:58.326585 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/cd73fd6e-60fa-4da3-9fa4-6ae80be19615-var-run-ovn\") pod \"ovn-controller-hdf9m-config-nk49w\" (UID: \"cd73fd6e-60fa-4da3-9fa4-6ae80be19615\") " pod="openstack/ovn-controller-hdf9m-config-nk49w" Nov 29 07:24:58 crc kubenswrapper[4731]: I1129 07:24:58.326632 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: 
\"kubernetes.io/host-path/cd73fd6e-60fa-4da3-9fa4-6ae80be19615-var-log-ovn\") pod \"ovn-controller-hdf9m-config-nk49w\" (UID: \"cd73fd6e-60fa-4da3-9fa4-6ae80be19615\") " pod="openstack/ovn-controller-hdf9m-config-nk49w" Nov 29 07:24:58 crc kubenswrapper[4731]: I1129 07:24:58.327340 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/cd73fd6e-60fa-4da3-9fa4-6ae80be19615-additional-scripts\") pod \"ovn-controller-hdf9m-config-nk49w\" (UID: \"cd73fd6e-60fa-4da3-9fa4-6ae80be19615\") " pod="openstack/ovn-controller-hdf9m-config-nk49w" Nov 29 07:24:58 crc kubenswrapper[4731]: I1129 07:24:58.327405 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/cd73fd6e-60fa-4da3-9fa4-6ae80be19615-var-run\") pod \"ovn-controller-hdf9m-config-nk49w\" (UID: \"cd73fd6e-60fa-4da3-9fa4-6ae80be19615\") " pod="openstack/ovn-controller-hdf9m-config-nk49w" Nov 29 07:24:58 crc kubenswrapper[4731]: I1129 07:24:58.329434 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cd73fd6e-60fa-4da3-9fa4-6ae80be19615-scripts\") pod \"ovn-controller-hdf9m-config-nk49w\" (UID: \"cd73fd6e-60fa-4da3-9fa4-6ae80be19615\") " pod="openstack/ovn-controller-hdf9m-config-nk49w" Nov 29 07:24:58 crc kubenswrapper[4731]: I1129 07:24:58.348151 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4w5t9\" (UniqueName: \"kubernetes.io/projected/cd73fd6e-60fa-4da3-9fa4-6ae80be19615-kube-api-access-4w5t9\") pod \"ovn-controller-hdf9m-config-nk49w\" (UID: \"cd73fd6e-60fa-4da3-9fa4-6ae80be19615\") " pod="openstack/ovn-controller-hdf9m-config-nk49w" Nov 29 07:24:58 crc kubenswrapper[4731]: I1129 07:24:58.456926 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-hdf9m-config-nk49w" Nov 29 07:24:59 crc kubenswrapper[4731]: I1129 07:24:59.034135 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-hdf9m-config-nk49w"] Nov 29 07:24:59 crc kubenswrapper[4731]: I1129 07:24:59.548949 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-hdf9m-config-nk49w" event={"ID":"cd73fd6e-60fa-4da3-9fa4-6ae80be19615","Type":"ContainerStarted","Data":"61ae7030999f03540f528908dd08546c609eeb2204787a92413b7adeb226981d"} Nov 29 07:24:59 crc kubenswrapper[4731]: I1129 07:24:59.549359 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-hdf9m-config-nk49w" event={"ID":"cd73fd6e-60fa-4da3-9fa4-6ae80be19615","Type":"ContainerStarted","Data":"ba99e203ef3ead33a7ae743cd5af77f4c437e4184b780817cfca93bb3fdefa17"} Nov 29 07:24:59 crc kubenswrapper[4731]: I1129 07:24:59.574847 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-hdf9m-config-nk49w" podStartSLOduration=1.574823994 podStartE2EDuration="1.574823994s" podCreationTimestamp="2025-11-29 07:24:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:24:59.571396484 +0000 UTC m=+1138.461757597" watchObservedRunningTime="2025-11-29 07:24:59.574823994 +0000 UTC m=+1138.465185097" Nov 29 07:24:59 crc kubenswrapper[4731]: I1129 07:24:59.819301 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca6f1ec3-5f0c-44e0-a722-926c25e87aab" path="/var/lib/kubelet/pods/ca6f1ec3-5f0c-44e0-a722-926c25e87aab/volumes" Nov 29 07:25:00 crc kubenswrapper[4731]: I1129 07:25:00.560871 4731 generic.go:334] "Generic (PLEG): container finished" podID="cd73fd6e-60fa-4da3-9fa4-6ae80be19615" containerID="61ae7030999f03540f528908dd08546c609eeb2204787a92413b7adeb226981d" exitCode=0 Nov 29 07:25:00 crc 
kubenswrapper[4731]: I1129 07:25:00.560919 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-hdf9m-config-nk49w" event={"ID":"cd73fd6e-60fa-4da3-9fa4-6ae80be19615","Type":"ContainerDied","Data":"61ae7030999f03540f528908dd08546c609eeb2204787a92413b7adeb226981d"} Nov 29 07:25:01 crc kubenswrapper[4731]: I1129 07:25:01.931175 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-hdf9m-config-nk49w" Nov 29 07:25:02 crc kubenswrapper[4731]: I1129 07:25:02.005613 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/cd73fd6e-60fa-4da3-9fa4-6ae80be19615-var-log-ovn\") pod \"cd73fd6e-60fa-4da3-9fa4-6ae80be19615\" (UID: \"cd73fd6e-60fa-4da3-9fa4-6ae80be19615\") " Nov 29 07:25:02 crc kubenswrapper[4731]: I1129 07:25:02.005687 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/cd73fd6e-60fa-4da3-9fa4-6ae80be19615-additional-scripts\") pod \"cd73fd6e-60fa-4da3-9fa4-6ae80be19615\" (UID: \"cd73fd6e-60fa-4da3-9fa4-6ae80be19615\") " Nov 29 07:25:02 crc kubenswrapper[4731]: I1129 07:25:02.005768 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/cd73fd6e-60fa-4da3-9fa4-6ae80be19615-var-run\") pod \"cd73fd6e-60fa-4da3-9fa4-6ae80be19615\" (UID: \"cd73fd6e-60fa-4da3-9fa4-6ae80be19615\") " Nov 29 07:25:02 crc kubenswrapper[4731]: I1129 07:25:02.005808 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4w5t9\" (UniqueName: \"kubernetes.io/projected/cd73fd6e-60fa-4da3-9fa4-6ae80be19615-kube-api-access-4w5t9\") pod \"cd73fd6e-60fa-4da3-9fa4-6ae80be19615\" (UID: \"cd73fd6e-60fa-4da3-9fa4-6ae80be19615\") " Nov 29 07:25:02 crc kubenswrapper[4731]: I1129 07:25:02.005840 4731 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cd73fd6e-60fa-4da3-9fa4-6ae80be19615-scripts\") pod \"cd73fd6e-60fa-4da3-9fa4-6ae80be19615\" (UID: \"cd73fd6e-60fa-4da3-9fa4-6ae80be19615\") " Nov 29 07:25:02 crc kubenswrapper[4731]: I1129 07:25:02.005975 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/cd73fd6e-60fa-4da3-9fa4-6ae80be19615-var-run-ovn\") pod \"cd73fd6e-60fa-4da3-9fa4-6ae80be19615\" (UID: \"cd73fd6e-60fa-4da3-9fa4-6ae80be19615\") " Nov 29 07:25:02 crc kubenswrapper[4731]: I1129 07:25:02.006415 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd73fd6e-60fa-4da3-9fa4-6ae80be19615-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "cd73fd6e-60fa-4da3-9fa4-6ae80be19615" (UID: "cd73fd6e-60fa-4da3-9fa4-6ae80be19615"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:25:02 crc kubenswrapper[4731]: I1129 07:25:02.006452 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd73fd6e-60fa-4da3-9fa4-6ae80be19615-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "cd73fd6e-60fa-4da3-9fa4-6ae80be19615" (UID: "cd73fd6e-60fa-4da3-9fa4-6ae80be19615"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:25:02 crc kubenswrapper[4731]: I1129 07:25:02.007617 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd73fd6e-60fa-4da3-9fa4-6ae80be19615-var-run" (OuterVolumeSpecName: "var-run") pod "cd73fd6e-60fa-4da3-9fa4-6ae80be19615" (UID: "cd73fd6e-60fa-4da3-9fa4-6ae80be19615"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:25:02 crc kubenswrapper[4731]: I1129 07:25:02.008448 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd73fd6e-60fa-4da3-9fa4-6ae80be19615-scripts" (OuterVolumeSpecName: "scripts") pod "cd73fd6e-60fa-4da3-9fa4-6ae80be19615" (UID: "cd73fd6e-60fa-4da3-9fa4-6ae80be19615"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:25:02 crc kubenswrapper[4731]: I1129 07:25:02.009336 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd73fd6e-60fa-4da3-9fa4-6ae80be19615-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "cd73fd6e-60fa-4da3-9fa4-6ae80be19615" (UID: "cd73fd6e-60fa-4da3-9fa4-6ae80be19615"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:25:02 crc kubenswrapper[4731]: I1129 07:25:02.017085 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd73fd6e-60fa-4da3-9fa4-6ae80be19615-kube-api-access-4w5t9" (OuterVolumeSpecName: "kube-api-access-4w5t9") pod "cd73fd6e-60fa-4da3-9fa4-6ae80be19615" (UID: "cd73fd6e-60fa-4da3-9fa4-6ae80be19615"). InnerVolumeSpecName "kube-api-access-4w5t9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:25:02 crc kubenswrapper[4731]: I1129 07:25:02.094301 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-hdf9m-config-nk49w"] Nov 29 07:25:02 crc kubenswrapper[4731]: I1129 07:25:02.101805 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-hdf9m-config-nk49w"] Nov 29 07:25:02 crc kubenswrapper[4731]: I1129 07:25:02.108792 4731 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/cd73fd6e-60fa-4da3-9fa4-6ae80be19615-var-run\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:02 crc kubenswrapper[4731]: I1129 07:25:02.108832 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4w5t9\" (UniqueName: \"kubernetes.io/projected/cd73fd6e-60fa-4da3-9fa4-6ae80be19615-kube-api-access-4w5t9\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:02 crc kubenswrapper[4731]: I1129 07:25:02.108848 4731 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cd73fd6e-60fa-4da3-9fa4-6ae80be19615-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:02 crc kubenswrapper[4731]: I1129 07:25:02.108861 4731 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/cd73fd6e-60fa-4da3-9fa4-6ae80be19615-var-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:02 crc kubenswrapper[4731]: I1129 07:25:02.108875 4731 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/cd73fd6e-60fa-4da3-9fa4-6ae80be19615-var-log-ovn\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:02 crc kubenswrapper[4731]: I1129 07:25:02.108888 4731 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/cd73fd6e-60fa-4da3-9fa4-6ae80be19615-additional-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:02 
crc kubenswrapper[4731]: I1129 07:25:02.513813 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/739c0608-5471-42a6-b062-4355cd1894a0-etc-swift\") pod \"swift-storage-0\" (UID: \"739c0608-5471-42a6-b062-4355cd1894a0\") " pod="openstack/swift-storage-0" Nov 29 07:25:02 crc kubenswrapper[4731]: I1129 07:25:02.521519 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/739c0608-5471-42a6-b062-4355cd1894a0-etc-swift\") pod \"swift-storage-0\" (UID: \"739c0608-5471-42a6-b062-4355cd1894a0\") " pod="openstack/swift-storage-0" Nov 29 07:25:02 crc kubenswrapper[4731]: I1129 07:25:02.592630 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba99e203ef3ead33a7ae743cd5af77f4c437e4184b780817cfca93bb3fdefa17" Nov 29 07:25:02 crc kubenswrapper[4731]: I1129 07:25:02.592661 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-hdf9m-config-nk49w" Nov 29 07:25:02 crc kubenswrapper[4731]: I1129 07:25:02.666302 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Nov 29 07:25:03 crc kubenswrapper[4731]: I1129 07:25:03.002199 4731 patch_prober.go:28] interesting pod/machine-config-daemon-rscr8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:25:03 crc kubenswrapper[4731]: I1129 07:25:03.002719 4731 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:25:03 crc kubenswrapper[4731]: I1129 07:25:03.002778 4731 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" Nov 29 07:25:03 crc kubenswrapper[4731]: I1129 07:25:03.003542 4731 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ffbb4b4de78b7f58bb4f619008eb50ea899385afddcd0542f0d2036acafe5584"} pod="openshift-machine-config-operator/machine-config-daemon-rscr8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 29 07:25:03 crc kubenswrapper[4731]: I1129 07:25:03.003663 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" containerName="machine-config-daemon" containerID="cri-o://ffbb4b4de78b7f58bb4f619008eb50ea899385afddcd0542f0d2036acafe5584" gracePeriod=600 Nov 29 07:25:03 crc kubenswrapper[4731]: I1129 07:25:03.315721 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Nov 29 
07:25:03 crc kubenswrapper[4731]: I1129 07:25:03.603274 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"739c0608-5471-42a6-b062-4355cd1894a0","Type":"ContainerStarted","Data":"27654ba9b4e83a3f23a7d54d310326281430a8ecf3a27ee4071458b7dd99364c"} Nov 29 07:25:03 crc kubenswrapper[4731]: I1129 07:25:03.607482 4731 generic.go:334] "Generic (PLEG): container finished" podID="2302dbb7-38db-4752-a5d0-2d055da3aec3" containerID="ffbb4b4de78b7f58bb4f619008eb50ea899385afddcd0542f0d2036acafe5584" exitCode=0 Nov 29 07:25:03 crc kubenswrapper[4731]: I1129 07:25:03.607515 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" event={"ID":"2302dbb7-38db-4752-a5d0-2d055da3aec3","Type":"ContainerDied","Data":"ffbb4b4de78b7f58bb4f619008eb50ea899385afddcd0542f0d2036acafe5584"} Nov 29 07:25:03 crc kubenswrapper[4731]: I1129 07:25:03.607538 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" event={"ID":"2302dbb7-38db-4752-a5d0-2d055da3aec3","Type":"ContainerStarted","Data":"f21640b90c6a59e38b7b6b03ed6a9c7b8bee6bb7ce407b62721c202713562725"} Nov 29 07:25:03 crc kubenswrapper[4731]: I1129 07:25:03.607556 4731 scope.go:117] "RemoveContainer" containerID="f623b0b449aeef3aba408365a10d9b3a882a155e1db4e4fae2a31dd92abc20ca" Nov 29 07:25:03 crc kubenswrapper[4731]: I1129 07:25:03.824068 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd73fd6e-60fa-4da3-9fa4-6ae80be19615" path="/var/lib/kubelet/pods/cd73fd6e-60fa-4da3-9fa4-6ae80be19615/volumes" Nov 29 07:25:04 crc kubenswrapper[4731]: I1129 07:25:04.629662 4731 generic.go:334] "Generic (PLEG): container finished" podID="6986d025-7080-457e-b2ce-88d8ae965c70" containerID="7c043bf1c4856c80b5a659661e3f100f1300b9fc6a0d697d71be6985fbb51be4" exitCode=0 Nov 29 07:25:04 crc kubenswrapper[4731]: I1129 07:25:04.629799 4731 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-8w7f8" event={"ID":"6986d025-7080-457e-b2ce-88d8ae965c70","Type":"ContainerDied","Data":"7c043bf1c4856c80b5a659661e3f100f1300b9fc6a0d697d71be6985fbb51be4"} Nov 29 07:25:04 crc kubenswrapper[4731]: I1129 07:25:04.677837 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Nov 29 07:25:04 crc kubenswrapper[4731]: I1129 07:25:04.769402 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:25:05 crc kubenswrapper[4731]: I1129 07:25:05.246478 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-22e7-account-create-update-svpcx"] Nov 29 07:25:05 crc kubenswrapper[4731]: E1129 07:25:05.247636 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd73fd6e-60fa-4da3-9fa4-6ae80be19615" containerName="ovn-config" Nov 29 07:25:05 crc kubenswrapper[4731]: I1129 07:25:05.247660 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd73fd6e-60fa-4da3-9fa4-6ae80be19615" containerName="ovn-config" Nov 29 07:25:05 crc kubenswrapper[4731]: I1129 07:25:05.247891 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd73fd6e-60fa-4da3-9fa4-6ae80be19615" containerName="ovn-config" Nov 29 07:25:05 crc kubenswrapper[4731]: I1129 07:25:05.248738 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-22e7-account-create-update-svpcx" Nov 29 07:25:05 crc kubenswrapper[4731]: I1129 07:25:05.253376 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Nov 29 07:25:05 crc kubenswrapper[4731]: I1129 07:25:05.267128 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-wzpzv"] Nov 29 07:25:05 crc kubenswrapper[4731]: I1129 07:25:05.268683 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-wzpzv" Nov 29 07:25:05 crc kubenswrapper[4731]: I1129 07:25:05.294441 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-22e7-account-create-update-svpcx"] Nov 29 07:25:05 crc kubenswrapper[4731]: I1129 07:25:05.336550 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-wzpzv"] Nov 29 07:25:05 crc kubenswrapper[4731]: I1129 07:25:05.392834 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94728e01-e829-4d10-9311-defe6cd10ff9-operator-scripts\") pod \"barbican-22e7-account-create-update-svpcx\" (UID: \"94728e01-e829-4d10-9311-defe6cd10ff9\") " pod="openstack/barbican-22e7-account-create-update-svpcx" Nov 29 07:25:05 crc kubenswrapper[4731]: I1129 07:25:05.392993 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45576\" (UniqueName: \"kubernetes.io/projected/94728e01-e829-4d10-9311-defe6cd10ff9-kube-api-access-45576\") pod \"barbican-22e7-account-create-update-svpcx\" (UID: \"94728e01-e829-4d10-9311-defe6cd10ff9\") " pod="openstack/barbican-22e7-account-create-update-svpcx" Nov 29 07:25:05 crc kubenswrapper[4731]: I1129 07:25:05.393088 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jvc2\" (UniqueName: \"kubernetes.io/projected/5b04cdb0-e1e8-4807-8fd3-6f2086497c72-kube-api-access-8jvc2\") pod \"cinder-db-create-wzpzv\" (UID: \"5b04cdb0-e1e8-4807-8fd3-6f2086497c72\") " pod="openstack/cinder-db-create-wzpzv" Nov 29 07:25:05 crc kubenswrapper[4731]: I1129 07:25:05.393195 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b04cdb0-e1e8-4807-8fd3-6f2086497c72-operator-scripts\") pod 
\"cinder-db-create-wzpzv\" (UID: \"5b04cdb0-e1e8-4807-8fd3-6f2086497c72\") " pod="openstack/cinder-db-create-wzpzv" Nov 29 07:25:05 crc kubenswrapper[4731]: I1129 07:25:05.394709 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-ktq2t"] Nov 29 07:25:05 crc kubenswrapper[4731]: I1129 07:25:05.396121 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-ktq2t" Nov 29 07:25:05 crc kubenswrapper[4731]: I1129 07:25:05.416666 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-1fef-account-create-update-k9ddk"] Nov 29 07:25:05 crc kubenswrapper[4731]: I1129 07:25:05.417898 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-1fef-account-create-update-k9ddk" Nov 29 07:25:05 crc kubenswrapper[4731]: I1129 07:25:05.420832 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Nov 29 07:25:05 crc kubenswrapper[4731]: I1129 07:25:05.450300 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-ktq2t"] Nov 29 07:25:05 crc kubenswrapper[4731]: I1129 07:25:05.458870 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-1fef-account-create-update-k9ddk"] Nov 29 07:25:05 crc kubenswrapper[4731]: I1129 07:25:05.496722 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94728e01-e829-4d10-9311-defe6cd10ff9-operator-scripts\") pod \"barbican-22e7-account-create-update-svpcx\" (UID: \"94728e01-e829-4d10-9311-defe6cd10ff9\") " pod="openstack/barbican-22e7-account-create-update-svpcx" Nov 29 07:25:05 crc kubenswrapper[4731]: I1129 07:25:05.496779 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/d75320ff-8458-4ed0-977c-46e972527687-operator-scripts\") pod \"barbican-db-create-ktq2t\" (UID: \"d75320ff-8458-4ed0-977c-46e972527687\") " pod="openstack/barbican-db-create-ktq2t" Nov 29 07:25:05 crc kubenswrapper[4731]: I1129 07:25:05.496824 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45576\" (UniqueName: \"kubernetes.io/projected/94728e01-e829-4d10-9311-defe6cd10ff9-kube-api-access-45576\") pod \"barbican-22e7-account-create-update-svpcx\" (UID: \"94728e01-e829-4d10-9311-defe6cd10ff9\") " pod="openstack/barbican-22e7-account-create-update-svpcx" Nov 29 07:25:05 crc kubenswrapper[4731]: I1129 07:25:05.496860 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jvc2\" (UniqueName: \"kubernetes.io/projected/5b04cdb0-e1e8-4807-8fd3-6f2086497c72-kube-api-access-8jvc2\") pod \"cinder-db-create-wzpzv\" (UID: \"5b04cdb0-e1e8-4807-8fd3-6f2086497c72\") " pod="openstack/cinder-db-create-wzpzv" Nov 29 07:25:05 crc kubenswrapper[4731]: I1129 07:25:05.496908 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b04cdb0-e1e8-4807-8fd3-6f2086497c72-operator-scripts\") pod \"cinder-db-create-wzpzv\" (UID: \"5b04cdb0-e1e8-4807-8fd3-6f2086497c72\") " pod="openstack/cinder-db-create-wzpzv" Nov 29 07:25:05 crc kubenswrapper[4731]: I1129 07:25:05.496931 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9dvk\" (UniqueName: \"kubernetes.io/projected/d75320ff-8458-4ed0-977c-46e972527687-kube-api-access-g9dvk\") pod \"barbican-db-create-ktq2t\" (UID: \"d75320ff-8458-4ed0-977c-46e972527687\") " pod="openstack/barbican-db-create-ktq2t" Nov 29 07:25:05 crc kubenswrapper[4731]: I1129 07:25:05.497694 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/94728e01-e829-4d10-9311-defe6cd10ff9-operator-scripts\") pod \"barbican-22e7-account-create-update-svpcx\" (UID: \"94728e01-e829-4d10-9311-defe6cd10ff9\") " pod="openstack/barbican-22e7-account-create-update-svpcx" Nov 29 07:25:05 crc kubenswrapper[4731]: I1129 07:25:05.498507 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b04cdb0-e1e8-4807-8fd3-6f2086497c72-operator-scripts\") pod \"cinder-db-create-wzpzv\" (UID: \"5b04cdb0-e1e8-4807-8fd3-6f2086497c72\") " pod="openstack/cinder-db-create-wzpzv" Nov 29 07:25:05 crc kubenswrapper[4731]: I1129 07:25:05.508690 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-ss292"] Nov 29 07:25:05 crc kubenswrapper[4731]: I1129 07:25:05.509804 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-ss292" Nov 29 07:25:05 crc kubenswrapper[4731]: I1129 07:25:05.523276 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-ss292"] Nov 29 07:25:05 crc kubenswrapper[4731]: I1129 07:25:05.530972 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jvc2\" (UniqueName: \"kubernetes.io/projected/5b04cdb0-e1e8-4807-8fd3-6f2086497c72-kube-api-access-8jvc2\") pod \"cinder-db-create-wzpzv\" (UID: \"5b04cdb0-e1e8-4807-8fd3-6f2086497c72\") " pod="openstack/cinder-db-create-wzpzv" Nov 29 07:25:05 crc kubenswrapper[4731]: I1129 07:25:05.535731 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-nft5q"] Nov 29 07:25:05 crc kubenswrapper[4731]: I1129 07:25:05.549503 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45576\" (UniqueName: \"kubernetes.io/projected/94728e01-e829-4d10-9311-defe6cd10ff9-kube-api-access-45576\") pod \"barbican-22e7-account-create-update-svpcx\" (UID: 
\"94728e01-e829-4d10-9311-defe6cd10ff9\") " pod="openstack/barbican-22e7-account-create-update-svpcx" Nov 29 07:25:05 crc kubenswrapper[4731]: I1129 07:25:05.551180 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-nft5q" Nov 29 07:25:05 crc kubenswrapper[4731]: I1129 07:25:05.557112 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 29 07:25:05 crc kubenswrapper[4731]: I1129 07:25:05.557247 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-4wnsc" Nov 29 07:25:05 crc kubenswrapper[4731]: I1129 07:25:05.557300 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 29 07:25:05 crc kubenswrapper[4731]: I1129 07:25:05.557375 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 29 07:25:05 crc kubenswrapper[4731]: I1129 07:25:05.599894 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9dvk\" (UniqueName: \"kubernetes.io/projected/d75320ff-8458-4ed0-977c-46e972527687-kube-api-access-g9dvk\") pod \"barbican-db-create-ktq2t\" (UID: \"d75320ff-8458-4ed0-977c-46e972527687\") " pod="openstack/barbican-db-create-ktq2t" Nov 29 07:25:05 crc kubenswrapper[4731]: I1129 07:25:05.599964 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnswz\" (UniqueName: \"kubernetes.io/projected/450170e3-d7cb-4283-bae9-3350a8558f66-kube-api-access-fnswz\") pod \"cinder-1fef-account-create-update-k9ddk\" (UID: \"450170e3-d7cb-4283-bae9-3350a8558f66\") " pod="openstack/cinder-1fef-account-create-update-k9ddk" Nov 29 07:25:05 crc kubenswrapper[4731]: I1129 07:25:05.600007 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/450170e3-d7cb-4283-bae9-3350a8558f66-operator-scripts\") pod \"cinder-1fef-account-create-update-k9ddk\" (UID: \"450170e3-d7cb-4283-bae9-3350a8558f66\") " pod="openstack/cinder-1fef-account-create-update-k9ddk" Nov 29 07:25:05 crc kubenswrapper[4731]: I1129 07:25:05.600046 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d75320ff-8458-4ed0-977c-46e972527687-operator-scripts\") pod \"barbican-db-create-ktq2t\" (UID: \"d75320ff-8458-4ed0-977c-46e972527687\") " pod="openstack/barbican-db-create-ktq2t" Nov 29 07:25:05 crc kubenswrapper[4731]: I1129 07:25:05.600779 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d75320ff-8458-4ed0-977c-46e972527687-operator-scripts\") pod \"barbican-db-create-ktq2t\" (UID: \"d75320ff-8458-4ed0-977c-46e972527687\") " pod="openstack/barbican-db-create-ktq2t" Nov 29 07:25:05 crc kubenswrapper[4731]: I1129 07:25:05.608449 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-nft5q"] Nov 29 07:25:05 crc kubenswrapper[4731]: I1129 07:25:05.629845 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9dvk\" (UniqueName: \"kubernetes.io/projected/d75320ff-8458-4ed0-977c-46e972527687-kube-api-access-g9dvk\") pod \"barbican-db-create-ktq2t\" (UID: \"d75320ff-8458-4ed0-977c-46e972527687\") " pod="openstack/barbican-db-create-ktq2t" Nov 29 07:25:05 crc kubenswrapper[4731]: I1129 07:25:05.656325 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-22e7-account-create-update-svpcx" Nov 29 07:25:05 crc kubenswrapper[4731]: I1129 07:25:05.657632 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-ba51-account-create-update-drl7b"] Nov 29 07:25:05 crc kubenswrapper[4731]: I1129 07:25:05.661640 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-ba51-account-create-update-drl7b" Nov 29 07:25:05 crc kubenswrapper[4731]: I1129 07:25:05.670007 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-ba51-account-create-update-drl7b"] Nov 29 07:25:05 crc kubenswrapper[4731]: I1129 07:25:05.674488 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-wzpzv" Nov 29 07:25:05 crc kubenswrapper[4731]: I1129 07:25:05.675322 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Nov 29 07:25:05 crc kubenswrapper[4731]: I1129 07:25:05.679953 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"739c0608-5471-42a6-b062-4355cd1894a0","Type":"ContainerStarted","Data":"3361e05860f1c8aef40ef4ba75005b1f024aecf3480ee71bff6660d14142e67a"} Nov 29 07:25:05 crc kubenswrapper[4731]: I1129 07:25:05.680633 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"739c0608-5471-42a6-b062-4355cd1894a0","Type":"ContainerStarted","Data":"45620b0c751aeff263483feba03e6600a6d6b8eeaee6cb18ea651e63d7aafc2c"} Nov 29 07:25:05 crc kubenswrapper[4731]: I1129 07:25:05.701610 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4j8sh\" (UniqueName: \"kubernetes.io/projected/dcdaaa2a-ccbf-4158-8c8a-d5836dbdd111-kube-api-access-4j8sh\") pod \"keystone-db-sync-nft5q\" (UID: \"dcdaaa2a-ccbf-4158-8c8a-d5836dbdd111\") " pod="openstack/keystone-db-sync-nft5q" Nov 29 07:25:05 
crc kubenswrapper[4731]: I1129 07:25:05.701740 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ns97x\" (UniqueName: \"kubernetes.io/projected/456fff3a-5ed5-4def-b25d-3923d97a3577-kube-api-access-ns97x\") pod \"neutron-db-create-ss292\" (UID: \"456fff3a-5ed5-4def-b25d-3923d97a3577\") " pod="openstack/neutron-db-create-ss292" Nov 29 07:25:05 crc kubenswrapper[4731]: I1129 07:25:05.701802 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/456fff3a-5ed5-4def-b25d-3923d97a3577-operator-scripts\") pod \"neutron-db-create-ss292\" (UID: \"456fff3a-5ed5-4def-b25d-3923d97a3577\") " pod="openstack/neutron-db-create-ss292" Nov 29 07:25:05 crc kubenswrapper[4731]: I1129 07:25:05.701915 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnswz\" (UniqueName: \"kubernetes.io/projected/450170e3-d7cb-4283-bae9-3350a8558f66-kube-api-access-fnswz\") pod \"cinder-1fef-account-create-update-k9ddk\" (UID: \"450170e3-d7cb-4283-bae9-3350a8558f66\") " pod="openstack/cinder-1fef-account-create-update-k9ddk" Nov 29 07:25:05 crc kubenswrapper[4731]: I1129 07:25:05.704244 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/450170e3-d7cb-4283-bae9-3350a8558f66-operator-scripts\") pod \"cinder-1fef-account-create-update-k9ddk\" (UID: \"450170e3-d7cb-4283-bae9-3350a8558f66\") " pod="openstack/cinder-1fef-account-create-update-k9ddk" Nov 29 07:25:05 crc kubenswrapper[4731]: I1129 07:25:05.704317 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcdaaa2a-ccbf-4158-8c8a-d5836dbdd111-combined-ca-bundle\") pod \"keystone-db-sync-nft5q\" (UID: \"dcdaaa2a-ccbf-4158-8c8a-d5836dbdd111\") " 
pod="openstack/keystone-db-sync-nft5q" Nov 29 07:25:05 crc kubenswrapper[4731]: I1129 07:25:05.704369 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcdaaa2a-ccbf-4158-8c8a-d5836dbdd111-config-data\") pod \"keystone-db-sync-nft5q\" (UID: \"dcdaaa2a-ccbf-4158-8c8a-d5836dbdd111\") " pod="openstack/keystone-db-sync-nft5q" Nov 29 07:25:05 crc kubenswrapper[4731]: I1129 07:25:05.705962 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/450170e3-d7cb-4283-bae9-3350a8558f66-operator-scripts\") pod \"cinder-1fef-account-create-update-k9ddk\" (UID: \"450170e3-d7cb-4283-bae9-3350a8558f66\") " pod="openstack/cinder-1fef-account-create-update-k9ddk" Nov 29 07:25:05 crc kubenswrapper[4731]: I1129 07:25:05.738123 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-ktq2t" Nov 29 07:25:05 crc kubenswrapper[4731]: I1129 07:25:05.754160 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnswz\" (UniqueName: \"kubernetes.io/projected/450170e3-d7cb-4283-bae9-3350a8558f66-kube-api-access-fnswz\") pod \"cinder-1fef-account-create-update-k9ddk\" (UID: \"450170e3-d7cb-4283-bae9-3350a8558f66\") " pod="openstack/cinder-1fef-account-create-update-k9ddk" Nov 29 07:25:05 crc kubenswrapper[4731]: I1129 07:25:05.825731 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ns97x\" (UniqueName: \"kubernetes.io/projected/456fff3a-5ed5-4def-b25d-3923d97a3577-kube-api-access-ns97x\") pod \"neutron-db-create-ss292\" (UID: \"456fff3a-5ed5-4def-b25d-3923d97a3577\") " pod="openstack/neutron-db-create-ss292" Nov 29 07:25:05 crc kubenswrapper[4731]: I1129 07:25:05.825809 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-vfklv\" (UniqueName: \"kubernetes.io/projected/a9abe846-7302-4ea1-8423-bc1a2e81d051-kube-api-access-vfklv\") pod \"neutron-ba51-account-create-update-drl7b\" (UID: \"a9abe846-7302-4ea1-8423-bc1a2e81d051\") " pod="openstack/neutron-ba51-account-create-update-drl7b" Nov 29 07:25:05 crc kubenswrapper[4731]: I1129 07:25:05.825844 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/456fff3a-5ed5-4def-b25d-3923d97a3577-operator-scripts\") pod \"neutron-db-create-ss292\" (UID: \"456fff3a-5ed5-4def-b25d-3923d97a3577\") " pod="openstack/neutron-db-create-ss292" Nov 29 07:25:05 crc kubenswrapper[4731]: I1129 07:25:05.825931 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcdaaa2a-ccbf-4158-8c8a-d5836dbdd111-combined-ca-bundle\") pod \"keystone-db-sync-nft5q\" (UID: \"dcdaaa2a-ccbf-4158-8c8a-d5836dbdd111\") " pod="openstack/keystone-db-sync-nft5q" Nov 29 07:25:05 crc kubenswrapper[4731]: I1129 07:25:05.825959 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcdaaa2a-ccbf-4158-8c8a-d5836dbdd111-config-data\") pod \"keystone-db-sync-nft5q\" (UID: \"dcdaaa2a-ccbf-4158-8c8a-d5836dbdd111\") " pod="openstack/keystone-db-sync-nft5q" Nov 29 07:25:05 crc kubenswrapper[4731]: I1129 07:25:05.826002 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9abe846-7302-4ea1-8423-bc1a2e81d051-operator-scripts\") pod \"neutron-ba51-account-create-update-drl7b\" (UID: \"a9abe846-7302-4ea1-8423-bc1a2e81d051\") " pod="openstack/neutron-ba51-account-create-update-drl7b" Nov 29 07:25:05 crc kubenswrapper[4731]: I1129 07:25:05.826039 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-4j8sh\" (UniqueName: \"kubernetes.io/projected/dcdaaa2a-ccbf-4158-8c8a-d5836dbdd111-kube-api-access-4j8sh\") pod \"keystone-db-sync-nft5q\" (UID: \"dcdaaa2a-ccbf-4158-8c8a-d5836dbdd111\") " pod="openstack/keystone-db-sync-nft5q" Nov 29 07:25:05 crc kubenswrapper[4731]: I1129 07:25:05.827376 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/456fff3a-5ed5-4def-b25d-3923d97a3577-operator-scripts\") pod \"neutron-db-create-ss292\" (UID: \"456fff3a-5ed5-4def-b25d-3923d97a3577\") " pod="openstack/neutron-db-create-ss292" Nov 29 07:25:05 crc kubenswrapper[4731]: I1129 07:25:05.833312 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcdaaa2a-ccbf-4158-8c8a-d5836dbdd111-config-data\") pod \"keystone-db-sync-nft5q\" (UID: \"dcdaaa2a-ccbf-4158-8c8a-d5836dbdd111\") " pod="openstack/keystone-db-sync-nft5q" Nov 29 07:25:05 crc kubenswrapper[4731]: I1129 07:25:05.840178 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcdaaa2a-ccbf-4158-8c8a-d5836dbdd111-combined-ca-bundle\") pod \"keystone-db-sync-nft5q\" (UID: \"dcdaaa2a-ccbf-4158-8c8a-d5836dbdd111\") " pod="openstack/keystone-db-sync-nft5q" Nov 29 07:25:05 crc kubenswrapper[4731]: I1129 07:25:05.864439 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4j8sh\" (UniqueName: \"kubernetes.io/projected/dcdaaa2a-ccbf-4158-8c8a-d5836dbdd111-kube-api-access-4j8sh\") pod \"keystone-db-sync-nft5q\" (UID: \"dcdaaa2a-ccbf-4158-8c8a-d5836dbdd111\") " pod="openstack/keystone-db-sync-nft5q" Nov 29 07:25:05 crc kubenswrapper[4731]: I1129 07:25:05.869149 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ns97x\" (UniqueName: 
\"kubernetes.io/projected/456fff3a-5ed5-4def-b25d-3923d97a3577-kube-api-access-ns97x\") pod \"neutron-db-create-ss292\" (UID: \"456fff3a-5ed5-4def-b25d-3923d97a3577\") " pod="openstack/neutron-db-create-ss292" Nov 29 07:25:05 crc kubenswrapper[4731]: I1129 07:25:05.929929 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfklv\" (UniqueName: \"kubernetes.io/projected/a9abe846-7302-4ea1-8423-bc1a2e81d051-kube-api-access-vfklv\") pod \"neutron-ba51-account-create-update-drl7b\" (UID: \"a9abe846-7302-4ea1-8423-bc1a2e81d051\") " pod="openstack/neutron-ba51-account-create-update-drl7b" Nov 29 07:25:05 crc kubenswrapper[4731]: I1129 07:25:05.930139 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9abe846-7302-4ea1-8423-bc1a2e81d051-operator-scripts\") pod \"neutron-ba51-account-create-update-drl7b\" (UID: \"a9abe846-7302-4ea1-8423-bc1a2e81d051\") " pod="openstack/neutron-ba51-account-create-update-drl7b" Nov 29 07:25:05 crc kubenswrapper[4731]: I1129 07:25:05.931988 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9abe846-7302-4ea1-8423-bc1a2e81d051-operator-scripts\") pod \"neutron-ba51-account-create-update-drl7b\" (UID: \"a9abe846-7302-4ea1-8423-bc1a2e81d051\") " pod="openstack/neutron-ba51-account-create-update-drl7b" Nov 29 07:25:05 crc kubenswrapper[4731]: I1129 07:25:05.955097 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfklv\" (UniqueName: \"kubernetes.io/projected/a9abe846-7302-4ea1-8423-bc1a2e81d051-kube-api-access-vfklv\") pod \"neutron-ba51-account-create-update-drl7b\" (UID: \"a9abe846-7302-4ea1-8423-bc1a2e81d051\") " pod="openstack/neutron-ba51-account-create-update-drl7b" Nov 29 07:25:05 crc kubenswrapper[4731]: I1129 07:25:05.983226 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-ss292" Nov 29 07:25:06 crc kubenswrapper[4731]: I1129 07:25:06.023343 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-nft5q" Nov 29 07:25:06 crc kubenswrapper[4731]: I1129 07:25:06.045899 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-1fef-account-create-update-k9ddk" Nov 29 07:25:06 crc kubenswrapper[4731]: I1129 07:25:06.049830 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-ba51-account-create-update-drl7b" Nov 29 07:25:06 crc kubenswrapper[4731]: I1129 07:25:06.689796 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-ktq2t"] Nov 29 07:25:06 crc kubenswrapper[4731]: I1129 07:25:06.694523 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"739c0608-5471-42a6-b062-4355cd1894a0","Type":"ContainerStarted","Data":"23a87f31cf491bcaedc3cf21d4847cad0453e6f69274b52e9706b9fd9f2a5b64"} Nov 29 07:25:06 crc kubenswrapper[4731]: I1129 07:25:06.694665 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"739c0608-5471-42a6-b062-4355cd1894a0","Type":"ContainerStarted","Data":"6f3662b833561a55071ee5e24ff992b4d50dac3a9b7818ec13728de4f81e02e2"} Nov 29 07:25:06 crc kubenswrapper[4731]: I1129 07:25:06.700199 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-22e7-account-create-update-svpcx"] Nov 29 07:25:06 crc kubenswrapper[4731]: I1129 07:25:06.823628 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-ss292"] Nov 29 07:25:06 crc kubenswrapper[4731]: I1129 07:25:06.833012 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-wzpzv"] Nov 29 07:25:06 crc kubenswrapper[4731]: I1129 07:25:06.917447 4731 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/cinder-1fef-account-create-update-k9ddk"] Nov 29 07:25:06 crc kubenswrapper[4731]: I1129 07:25:06.966433 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-ba51-account-create-update-drl7b"] Nov 29 07:25:06 crc kubenswrapper[4731]: I1129 07:25:06.986610 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-nft5q"] Nov 29 07:25:06 crc kubenswrapper[4731]: W1129 07:25:06.995006 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda9abe846_7302_4ea1_8423_bc1a2e81d051.slice/crio-04ebb15bcb74036c5a947bf5c2a41d9d59cb0efaca7bfe53b19228d0a8e11adf WatchSource:0}: Error finding container 04ebb15bcb74036c5a947bf5c2a41d9d59cb0efaca7bfe53b19228d0a8e11adf: Status 404 returned error can't find the container with id 04ebb15bcb74036c5a947bf5c2a41d9d59cb0efaca7bfe53b19228d0a8e11adf Nov 29 07:25:07 crc kubenswrapper[4731]: W1129 07:25:07.099076 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddcdaaa2a_ccbf_4158_8c8a_d5836dbdd111.slice/crio-a0fec04919f7387e22f79c3c4ec4745e9591014c2175e6cc6f943d0fa608a854 WatchSource:0}: Error finding container a0fec04919f7387e22f79c3c4ec4745e9591014c2175e6cc6f943d0fa608a854: Status 404 returned error can't find the container with id a0fec04919f7387e22f79c3c4ec4745e9591014c2175e6cc6f943d0fa608a854 Nov 29 07:25:07 crc kubenswrapper[4731]: I1129 07:25:07.107434 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-8w7f8" Nov 29 07:25:07 crc kubenswrapper[4731]: I1129 07:25:07.258374 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kkb5x\" (UniqueName: \"kubernetes.io/projected/6986d025-7080-457e-b2ce-88d8ae965c70-kube-api-access-kkb5x\") pod \"6986d025-7080-457e-b2ce-88d8ae965c70\" (UID: \"6986d025-7080-457e-b2ce-88d8ae965c70\") " Nov 29 07:25:07 crc kubenswrapper[4731]: I1129 07:25:07.258611 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6986d025-7080-457e-b2ce-88d8ae965c70-config-data\") pod \"6986d025-7080-457e-b2ce-88d8ae965c70\" (UID: \"6986d025-7080-457e-b2ce-88d8ae965c70\") " Nov 29 07:25:07 crc kubenswrapper[4731]: I1129 07:25:07.258645 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6986d025-7080-457e-b2ce-88d8ae965c70-combined-ca-bundle\") pod \"6986d025-7080-457e-b2ce-88d8ae965c70\" (UID: \"6986d025-7080-457e-b2ce-88d8ae965c70\") " Nov 29 07:25:07 crc kubenswrapper[4731]: I1129 07:25:07.258662 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6986d025-7080-457e-b2ce-88d8ae965c70-db-sync-config-data\") pod \"6986d025-7080-457e-b2ce-88d8ae965c70\" (UID: \"6986d025-7080-457e-b2ce-88d8ae965c70\") " Nov 29 07:25:07 crc kubenswrapper[4731]: I1129 07:25:07.265526 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6986d025-7080-457e-b2ce-88d8ae965c70-kube-api-access-kkb5x" (OuterVolumeSpecName: "kube-api-access-kkb5x") pod "6986d025-7080-457e-b2ce-88d8ae965c70" (UID: "6986d025-7080-457e-b2ce-88d8ae965c70"). InnerVolumeSpecName "kube-api-access-kkb5x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:25:07 crc kubenswrapper[4731]: I1129 07:25:07.265998 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6986d025-7080-457e-b2ce-88d8ae965c70-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "6986d025-7080-457e-b2ce-88d8ae965c70" (UID: "6986d025-7080-457e-b2ce-88d8ae965c70"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:25:07 crc kubenswrapper[4731]: I1129 07:25:07.285437 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6986d025-7080-457e-b2ce-88d8ae965c70-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6986d025-7080-457e-b2ce-88d8ae965c70" (UID: "6986d025-7080-457e-b2ce-88d8ae965c70"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:25:07 crc kubenswrapper[4731]: I1129 07:25:07.304878 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6986d025-7080-457e-b2ce-88d8ae965c70-config-data" (OuterVolumeSpecName: "config-data") pod "6986d025-7080-457e-b2ce-88d8ae965c70" (UID: "6986d025-7080-457e-b2ce-88d8ae965c70"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:25:07 crc kubenswrapper[4731]: I1129 07:25:07.360217 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kkb5x\" (UniqueName: \"kubernetes.io/projected/6986d025-7080-457e-b2ce-88d8ae965c70-kube-api-access-kkb5x\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:07 crc kubenswrapper[4731]: I1129 07:25:07.360263 4731 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6986d025-7080-457e-b2ce-88d8ae965c70-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:07 crc kubenswrapper[4731]: I1129 07:25:07.360274 4731 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6986d025-7080-457e-b2ce-88d8ae965c70-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:07 crc kubenswrapper[4731]: I1129 07:25:07.360282 4731 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6986d025-7080-457e-b2ce-88d8ae965c70-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:07 crc kubenswrapper[4731]: I1129 07:25:07.706672 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-22e7-account-create-update-svpcx" event={"ID":"94728e01-e829-4d10-9311-defe6cd10ff9","Type":"ContainerStarted","Data":"7aa1b72b92178af905e705d7d4f905ad652f09f011c691b052ef37ff55d8b417"} Nov 29 07:25:07 crc kubenswrapper[4731]: I1129 07:25:07.709398 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-wzpzv" event={"ID":"5b04cdb0-e1e8-4807-8fd3-6f2086497c72","Type":"ContainerStarted","Data":"6d85f5164ca5272fb268443bdbb9266785d51ed30276381d4d2dbf8adbada38c"} Nov 29 07:25:07 crc kubenswrapper[4731]: I1129 07:25:07.711534 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-8w7f8" 
event={"ID":"6986d025-7080-457e-b2ce-88d8ae965c70","Type":"ContainerDied","Data":"c76f8dcb4c2ebf7335717db1dcc32daea147cbf7d7483d2c6d1788dea7c0c6c1"} Nov 29 07:25:07 crc kubenswrapper[4731]: I1129 07:25:07.711660 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c76f8dcb4c2ebf7335717db1dcc32daea147cbf7d7483d2c6d1788dea7c0c6c1" Nov 29 07:25:07 crc kubenswrapper[4731]: I1129 07:25:07.711766 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-8w7f8" Nov 29 07:25:07 crc kubenswrapper[4731]: I1129 07:25:07.713398 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-nft5q" event={"ID":"dcdaaa2a-ccbf-4158-8c8a-d5836dbdd111","Type":"ContainerStarted","Data":"a0fec04919f7387e22f79c3c4ec4745e9591014c2175e6cc6f943d0fa608a854"} Nov 29 07:25:07 crc kubenswrapper[4731]: I1129 07:25:07.716312 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-ss292" event={"ID":"456fff3a-5ed5-4def-b25d-3923d97a3577","Type":"ContainerStarted","Data":"33b80beff04178a27bff4d950e88bcc8988b0584468010d62d4e17562e93c3de"} Nov 29 07:25:07 crc kubenswrapper[4731]: I1129 07:25:07.717726 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-1fef-account-create-update-k9ddk" event={"ID":"450170e3-d7cb-4283-bae9-3350a8558f66","Type":"ContainerStarted","Data":"9437b0e44a6b1b11889e0dd82f0b0526c76b7b14f7b4be388d815d238e8357ea"} Nov 29 07:25:07 crc kubenswrapper[4731]: I1129 07:25:07.718976 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-ba51-account-create-update-drl7b" event={"ID":"a9abe846-7302-4ea1-8423-bc1a2e81d051","Type":"ContainerStarted","Data":"04ebb15bcb74036c5a947bf5c2a41d9d59cb0efaca7bfe53b19228d0a8e11adf"} Nov 29 07:25:07 crc kubenswrapper[4731]: I1129 07:25:07.720090 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-ktq2t" 
event={"ID":"d75320ff-8458-4ed0-977c-46e972527687","Type":"ContainerStarted","Data":"5c16f19cd7b31e218eb7b4e82fac8f83ca486403b4c0d764785c47c93d316488"} Nov 29 07:25:08 crc kubenswrapper[4731]: I1129 07:25:08.586253 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-mbkd6"] Nov 29 07:25:08 crc kubenswrapper[4731]: E1129 07:25:08.587467 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6986d025-7080-457e-b2ce-88d8ae965c70" containerName="glance-db-sync" Nov 29 07:25:08 crc kubenswrapper[4731]: I1129 07:25:08.587486 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="6986d025-7080-457e-b2ce-88d8ae965c70" containerName="glance-db-sync" Nov 29 07:25:08 crc kubenswrapper[4731]: I1129 07:25:08.587680 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="6986d025-7080-457e-b2ce-88d8ae965c70" containerName="glance-db-sync" Nov 29 07:25:08 crc kubenswrapper[4731]: I1129 07:25:08.588511 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74dc88fc-mbkd6" Nov 29 07:25:08 crc kubenswrapper[4731]: I1129 07:25:08.601853 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-mbkd6"] Nov 29 07:25:08 crc kubenswrapper[4731]: I1129 07:25:08.603605 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9b038345-8fed-4ada-844d-92ac4791d91b-ovsdbserver-nb\") pod \"dnsmasq-dns-74dc88fc-mbkd6\" (UID: \"9b038345-8fed-4ada-844d-92ac4791d91b\") " pod="openstack/dnsmasq-dns-74dc88fc-mbkd6" Nov 29 07:25:08 crc kubenswrapper[4731]: I1129 07:25:08.603667 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9b038345-8fed-4ada-844d-92ac4791d91b-dns-svc\") pod \"dnsmasq-dns-74dc88fc-mbkd6\" (UID: \"9b038345-8fed-4ada-844d-92ac4791d91b\") " pod="openstack/dnsmasq-dns-74dc88fc-mbkd6" Nov 29 07:25:08 crc kubenswrapper[4731]: I1129 07:25:08.603692 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljmgb\" (UniqueName: \"kubernetes.io/projected/9b038345-8fed-4ada-844d-92ac4791d91b-kube-api-access-ljmgb\") pod \"dnsmasq-dns-74dc88fc-mbkd6\" (UID: \"9b038345-8fed-4ada-844d-92ac4791d91b\") " pod="openstack/dnsmasq-dns-74dc88fc-mbkd6" Nov 29 07:25:08 crc kubenswrapper[4731]: I1129 07:25:08.603709 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b038345-8fed-4ada-844d-92ac4791d91b-config\") pod \"dnsmasq-dns-74dc88fc-mbkd6\" (UID: \"9b038345-8fed-4ada-844d-92ac4791d91b\") " pod="openstack/dnsmasq-dns-74dc88fc-mbkd6" Nov 29 07:25:08 crc kubenswrapper[4731]: I1129 07:25:08.603757 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9b038345-8fed-4ada-844d-92ac4791d91b-ovsdbserver-sb\") pod \"dnsmasq-dns-74dc88fc-mbkd6\" (UID: \"9b038345-8fed-4ada-844d-92ac4791d91b\") " pod="openstack/dnsmasq-dns-74dc88fc-mbkd6" Nov 29 07:25:08 crc kubenswrapper[4731]: I1129 07:25:08.705831 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9b038345-8fed-4ada-844d-92ac4791d91b-dns-svc\") pod \"dnsmasq-dns-74dc88fc-mbkd6\" (UID: \"9b038345-8fed-4ada-844d-92ac4791d91b\") " pod="openstack/dnsmasq-dns-74dc88fc-mbkd6" Nov 29 07:25:08 crc kubenswrapper[4731]: I1129 07:25:08.705880 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljmgb\" (UniqueName: \"kubernetes.io/projected/9b038345-8fed-4ada-844d-92ac4791d91b-kube-api-access-ljmgb\") pod \"dnsmasq-dns-74dc88fc-mbkd6\" (UID: \"9b038345-8fed-4ada-844d-92ac4791d91b\") " pod="openstack/dnsmasq-dns-74dc88fc-mbkd6" Nov 29 07:25:08 crc kubenswrapper[4731]: I1129 07:25:08.705908 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b038345-8fed-4ada-844d-92ac4791d91b-config\") pod \"dnsmasq-dns-74dc88fc-mbkd6\" (UID: \"9b038345-8fed-4ada-844d-92ac4791d91b\") " pod="openstack/dnsmasq-dns-74dc88fc-mbkd6" Nov 29 07:25:08 crc kubenswrapper[4731]: I1129 07:25:08.705962 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9b038345-8fed-4ada-844d-92ac4791d91b-ovsdbserver-sb\") pod \"dnsmasq-dns-74dc88fc-mbkd6\" (UID: \"9b038345-8fed-4ada-844d-92ac4791d91b\") " pod="openstack/dnsmasq-dns-74dc88fc-mbkd6" Nov 29 07:25:08 crc kubenswrapper[4731]: I1129 07:25:08.706031 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/9b038345-8fed-4ada-844d-92ac4791d91b-ovsdbserver-nb\") pod \"dnsmasq-dns-74dc88fc-mbkd6\" (UID: \"9b038345-8fed-4ada-844d-92ac4791d91b\") " pod="openstack/dnsmasq-dns-74dc88fc-mbkd6" Nov 29 07:25:08 crc kubenswrapper[4731]: I1129 07:25:08.707098 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9b038345-8fed-4ada-844d-92ac4791d91b-ovsdbserver-nb\") pod \"dnsmasq-dns-74dc88fc-mbkd6\" (UID: \"9b038345-8fed-4ada-844d-92ac4791d91b\") " pod="openstack/dnsmasq-dns-74dc88fc-mbkd6" Nov 29 07:25:08 crc kubenswrapper[4731]: I1129 07:25:08.707641 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9b038345-8fed-4ada-844d-92ac4791d91b-dns-svc\") pod \"dnsmasq-dns-74dc88fc-mbkd6\" (UID: \"9b038345-8fed-4ada-844d-92ac4791d91b\") " pod="openstack/dnsmasq-dns-74dc88fc-mbkd6" Nov 29 07:25:08 crc kubenswrapper[4731]: I1129 07:25:08.708432 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b038345-8fed-4ada-844d-92ac4791d91b-config\") pod \"dnsmasq-dns-74dc88fc-mbkd6\" (UID: \"9b038345-8fed-4ada-844d-92ac4791d91b\") " pod="openstack/dnsmasq-dns-74dc88fc-mbkd6" Nov 29 07:25:08 crc kubenswrapper[4731]: I1129 07:25:08.709099 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9b038345-8fed-4ada-844d-92ac4791d91b-ovsdbserver-sb\") pod \"dnsmasq-dns-74dc88fc-mbkd6\" (UID: \"9b038345-8fed-4ada-844d-92ac4791d91b\") " pod="openstack/dnsmasq-dns-74dc88fc-mbkd6" Nov 29 07:25:08 crc kubenswrapper[4731]: I1129 07:25:08.759356 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljmgb\" (UniqueName: \"kubernetes.io/projected/9b038345-8fed-4ada-844d-92ac4791d91b-kube-api-access-ljmgb\") pod \"dnsmasq-dns-74dc88fc-mbkd6\" (UID: 
\"9b038345-8fed-4ada-844d-92ac4791d91b\") " pod="openstack/dnsmasq-dns-74dc88fc-mbkd6" Nov 29 07:25:08 crc kubenswrapper[4731]: I1129 07:25:08.769823 4731 generic.go:334] "Generic (PLEG): container finished" podID="94728e01-e829-4d10-9311-defe6cd10ff9" containerID="639c771777e2016fad42d31fe532b7bf9e1ee8e9b1092ee16ef8c069928fd77e" exitCode=0 Nov 29 07:25:08 crc kubenswrapper[4731]: I1129 07:25:08.769894 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-22e7-account-create-update-svpcx" event={"ID":"94728e01-e829-4d10-9311-defe6cd10ff9","Type":"ContainerDied","Data":"639c771777e2016fad42d31fe532b7bf9e1ee8e9b1092ee16ef8c069928fd77e"} Nov 29 07:25:08 crc kubenswrapper[4731]: I1129 07:25:08.772666 4731 generic.go:334] "Generic (PLEG): container finished" podID="5b04cdb0-e1e8-4807-8fd3-6f2086497c72" containerID="e1de5374a22c79286e042c9445444e7faca5f78e241ca9d72ffffacd09e0b5a0" exitCode=0 Nov 29 07:25:08 crc kubenswrapper[4731]: I1129 07:25:08.772728 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-wzpzv" event={"ID":"5b04cdb0-e1e8-4807-8fd3-6f2086497c72","Type":"ContainerDied","Data":"e1de5374a22c79286e042c9445444e7faca5f78e241ca9d72ffffacd09e0b5a0"} Nov 29 07:25:08 crc kubenswrapper[4731]: I1129 07:25:08.775040 4731 generic.go:334] "Generic (PLEG): container finished" podID="456fff3a-5ed5-4def-b25d-3923d97a3577" containerID="2f11ef3592a58828467d08211ca586d70142bdc44a7061304720261feb1c6891" exitCode=0 Nov 29 07:25:08 crc kubenswrapper[4731]: I1129 07:25:08.775092 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-ss292" event={"ID":"456fff3a-5ed5-4def-b25d-3923d97a3577","Type":"ContainerDied","Data":"2f11ef3592a58828467d08211ca586d70142bdc44a7061304720261feb1c6891"} Nov 29 07:25:08 crc kubenswrapper[4731]: I1129 07:25:08.789479 4731 generic.go:334] "Generic (PLEG): container finished" podID="450170e3-d7cb-4283-bae9-3350a8558f66" 
containerID="3e522f983f4993d28b71672251b90ac846f2009f9e801c28d90b1bb603272c5d" exitCode=0 Nov 29 07:25:08 crc kubenswrapper[4731]: I1129 07:25:08.789789 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-1fef-account-create-update-k9ddk" event={"ID":"450170e3-d7cb-4283-bae9-3350a8558f66","Type":"ContainerDied","Data":"3e522f983f4993d28b71672251b90ac846f2009f9e801c28d90b1bb603272c5d"} Nov 29 07:25:08 crc kubenswrapper[4731]: I1129 07:25:08.824431 4731 generic.go:334] "Generic (PLEG): container finished" podID="a9abe846-7302-4ea1-8423-bc1a2e81d051" containerID="3a9b264157ef5c375c44ced99437adea4f91d0bc7471d42c109d31ab6ce49779" exitCode=0 Nov 29 07:25:08 crc kubenswrapper[4731]: I1129 07:25:08.824619 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-ba51-account-create-update-drl7b" event={"ID":"a9abe846-7302-4ea1-8423-bc1a2e81d051","Type":"ContainerDied","Data":"3a9b264157ef5c375c44ced99437adea4f91d0bc7471d42c109d31ab6ce49779"} Nov 29 07:25:08 crc kubenswrapper[4731]: I1129 07:25:08.840086 4731 generic.go:334] "Generic (PLEG): container finished" podID="d75320ff-8458-4ed0-977c-46e972527687" containerID="18b3c42a964d1e2df816c6a241a0d9500ac5585327741e5290d50e3631b900bf" exitCode=0 Nov 29 07:25:08 crc kubenswrapper[4731]: I1129 07:25:08.840252 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-ktq2t" event={"ID":"d75320ff-8458-4ed0-977c-46e972527687","Type":"ContainerDied","Data":"18b3c42a964d1e2df816c6a241a0d9500ac5585327741e5290d50e3631b900bf"} Nov 29 07:25:08 crc kubenswrapper[4731]: I1129 07:25:08.940458 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74dc88fc-mbkd6" Nov 29 07:25:09 crc kubenswrapper[4731]: E1129 07:25:09.006428 4731 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda9abe846_7302_4ea1_8423_bc1a2e81d051.slice/crio-conmon-3a9b264157ef5c375c44ced99437adea4f91d0bc7471d42c109d31ab6ce49779.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod456fff3a_5ed5_4def_b25d_3923d97a3577.slice/crio-2f11ef3592a58828467d08211ca586d70142bdc44a7061304720261feb1c6891.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod456fff3a_5ed5_4def_b25d_3923d97a3577.slice/crio-conmon-2f11ef3592a58828467d08211ca586d70142bdc44a7061304720261feb1c6891.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda9abe846_7302_4ea1_8423_bc1a2e81d051.slice/crio-3a9b264157ef5c375c44ced99437adea4f91d0bc7471d42c109d31ab6ce49779.scope\": RecentStats: unable to find data in memory cache]" Nov 29 07:25:09 crc kubenswrapper[4731]: I1129 07:25:09.819443 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-mbkd6"] Nov 29 07:25:09 crc kubenswrapper[4731]: I1129 07:25:09.852541 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dc88fc-mbkd6" event={"ID":"9b038345-8fed-4ada-844d-92ac4791d91b","Type":"ContainerStarted","Data":"c3d27d83ebbe71d3b3139738476b061d2159fd0e8a91d73edc26f678d6e0e026"} Nov 29 07:25:09 crc kubenswrapper[4731]: I1129 07:25:09.855925 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"739c0608-5471-42a6-b062-4355cd1894a0","Type":"ContainerStarted","Data":"b3029908e1e5868bba3a897a2ccb47618a3293b7ca1cc3e7c8027696011ab051"} 
Nov 29 07:25:09 crc kubenswrapper[4731]: I1129 07:25:09.855988 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"739c0608-5471-42a6-b062-4355cd1894a0","Type":"ContainerStarted","Data":"92e2d6b4e459db25b425448bbd5308a2a35b2ae8d48f3ad9cc2b521363c0a8ab"} Nov 29 07:25:10 crc kubenswrapper[4731]: I1129 07:25:10.171053 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-ss292" Nov 29 07:25:10 crc kubenswrapper[4731]: I1129 07:25:10.236765 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ns97x\" (UniqueName: \"kubernetes.io/projected/456fff3a-5ed5-4def-b25d-3923d97a3577-kube-api-access-ns97x\") pod \"456fff3a-5ed5-4def-b25d-3923d97a3577\" (UID: \"456fff3a-5ed5-4def-b25d-3923d97a3577\") " Nov 29 07:25:10 crc kubenswrapper[4731]: I1129 07:25:10.236940 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/456fff3a-5ed5-4def-b25d-3923d97a3577-operator-scripts\") pod \"456fff3a-5ed5-4def-b25d-3923d97a3577\" (UID: \"456fff3a-5ed5-4def-b25d-3923d97a3577\") " Nov 29 07:25:10 crc kubenswrapper[4731]: I1129 07:25:10.239860 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/456fff3a-5ed5-4def-b25d-3923d97a3577-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "456fff3a-5ed5-4def-b25d-3923d97a3577" (UID: "456fff3a-5ed5-4def-b25d-3923d97a3577"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:25:10 crc kubenswrapper[4731]: I1129 07:25:10.246973 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/456fff3a-5ed5-4def-b25d-3923d97a3577-kube-api-access-ns97x" (OuterVolumeSpecName: "kube-api-access-ns97x") pod "456fff3a-5ed5-4def-b25d-3923d97a3577" (UID: "456fff3a-5ed5-4def-b25d-3923d97a3577"). InnerVolumeSpecName "kube-api-access-ns97x". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:25:10 crc kubenswrapper[4731]: I1129 07:25:10.286367 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-1fef-account-create-update-k9ddk" Nov 29 07:25:10 crc kubenswrapper[4731]: I1129 07:25:10.339604 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/450170e3-d7cb-4283-bae9-3350a8558f66-operator-scripts\") pod \"450170e3-d7cb-4283-bae9-3350a8558f66\" (UID: \"450170e3-d7cb-4283-bae9-3350a8558f66\") " Nov 29 07:25:10 crc kubenswrapper[4731]: I1129 07:25:10.339682 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fnswz\" (UniqueName: \"kubernetes.io/projected/450170e3-d7cb-4283-bae9-3350a8558f66-kube-api-access-fnswz\") pod \"450170e3-d7cb-4283-bae9-3350a8558f66\" (UID: \"450170e3-d7cb-4283-bae9-3350a8558f66\") " Nov 29 07:25:10 crc kubenswrapper[4731]: I1129 07:25:10.340121 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/450170e3-d7cb-4283-bae9-3350a8558f66-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "450170e3-d7cb-4283-bae9-3350a8558f66" (UID: "450170e3-d7cb-4283-bae9-3350a8558f66"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:25:10 crc kubenswrapper[4731]: I1129 07:25:10.340264 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ns97x\" (UniqueName: \"kubernetes.io/projected/456fff3a-5ed5-4def-b25d-3923d97a3577-kube-api-access-ns97x\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:10 crc kubenswrapper[4731]: I1129 07:25:10.340282 4731 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/456fff3a-5ed5-4def-b25d-3923d97a3577-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:10 crc kubenswrapper[4731]: I1129 07:25:10.340293 4731 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/450170e3-d7cb-4283-bae9-3350a8558f66-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:10 crc kubenswrapper[4731]: I1129 07:25:10.345986 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/450170e3-d7cb-4283-bae9-3350a8558f66-kube-api-access-fnswz" (OuterVolumeSpecName: "kube-api-access-fnswz") pod "450170e3-d7cb-4283-bae9-3350a8558f66" (UID: "450170e3-d7cb-4283-bae9-3350a8558f66"). InnerVolumeSpecName "kube-api-access-fnswz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:25:10 crc kubenswrapper[4731]: I1129 07:25:10.442085 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fnswz\" (UniqueName: \"kubernetes.io/projected/450170e3-d7cb-4283-bae9-3350a8558f66-kube-api-access-fnswz\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:10 crc kubenswrapper[4731]: I1129 07:25:10.638233 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-22e7-account-create-update-svpcx" Nov 29 07:25:10 crc kubenswrapper[4731]: I1129 07:25:10.646866 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-ba51-account-create-update-drl7b" Nov 29 07:25:10 crc kubenswrapper[4731]: I1129 07:25:10.656530 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-wzpzv" Nov 29 07:25:10 crc kubenswrapper[4731]: I1129 07:25:10.697278 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-ktq2t" Nov 29 07:25:10 crc kubenswrapper[4731]: I1129 07:25:10.758971 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-45576\" (UniqueName: \"kubernetes.io/projected/94728e01-e829-4d10-9311-defe6cd10ff9-kube-api-access-45576\") pod \"94728e01-e829-4d10-9311-defe6cd10ff9\" (UID: \"94728e01-e829-4d10-9311-defe6cd10ff9\") " Nov 29 07:25:10 crc kubenswrapper[4731]: I1129 07:25:10.759077 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94728e01-e829-4d10-9311-defe6cd10ff9-operator-scripts\") pod \"94728e01-e829-4d10-9311-defe6cd10ff9\" (UID: \"94728e01-e829-4d10-9311-defe6cd10ff9\") " Nov 29 07:25:10 crc kubenswrapper[4731]: I1129 07:25:10.759176 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8jvc2\" (UniqueName: \"kubernetes.io/projected/5b04cdb0-e1e8-4807-8fd3-6f2086497c72-kube-api-access-8jvc2\") pod \"5b04cdb0-e1e8-4807-8fd3-6f2086497c72\" (UID: \"5b04cdb0-e1e8-4807-8fd3-6f2086497c72\") " Nov 29 07:25:10 crc kubenswrapper[4731]: I1129 07:25:10.759233 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vfklv\" (UniqueName: \"kubernetes.io/projected/a9abe846-7302-4ea1-8423-bc1a2e81d051-kube-api-access-vfklv\") pod \"a9abe846-7302-4ea1-8423-bc1a2e81d051\" (UID: \"a9abe846-7302-4ea1-8423-bc1a2e81d051\") " Nov 29 07:25:10 crc kubenswrapper[4731]: I1129 07:25:10.759295 4731 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9abe846-7302-4ea1-8423-bc1a2e81d051-operator-scripts\") pod \"a9abe846-7302-4ea1-8423-bc1a2e81d051\" (UID: \"a9abe846-7302-4ea1-8423-bc1a2e81d051\") " Nov 29 07:25:10 crc kubenswrapper[4731]: I1129 07:25:10.759379 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b04cdb0-e1e8-4807-8fd3-6f2086497c72-operator-scripts\") pod \"5b04cdb0-e1e8-4807-8fd3-6f2086497c72\" (UID: \"5b04cdb0-e1e8-4807-8fd3-6f2086497c72\") " Nov 29 07:25:10 crc kubenswrapper[4731]: I1129 07:25:10.767752 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b04cdb0-e1e8-4807-8fd3-6f2086497c72-kube-api-access-8jvc2" (OuterVolumeSpecName: "kube-api-access-8jvc2") pod "5b04cdb0-e1e8-4807-8fd3-6f2086497c72" (UID: "5b04cdb0-e1e8-4807-8fd3-6f2086497c72"). InnerVolumeSpecName "kube-api-access-8jvc2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:25:10 crc kubenswrapper[4731]: I1129 07:25:10.768200 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94728e01-e829-4d10-9311-defe6cd10ff9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "94728e01-e829-4d10-9311-defe6cd10ff9" (UID: "94728e01-e829-4d10-9311-defe6cd10ff9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:25:10 crc kubenswrapper[4731]: I1129 07:25:10.768425 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9abe846-7302-4ea1-8423-bc1a2e81d051-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a9abe846-7302-4ea1-8423-bc1a2e81d051" (UID: "a9abe846-7302-4ea1-8423-bc1a2e81d051"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:25:10 crc kubenswrapper[4731]: I1129 07:25:10.770449 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b04cdb0-e1e8-4807-8fd3-6f2086497c72-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5b04cdb0-e1e8-4807-8fd3-6f2086497c72" (UID: "5b04cdb0-e1e8-4807-8fd3-6f2086497c72"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:25:10 crc kubenswrapper[4731]: I1129 07:25:10.772232 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94728e01-e829-4d10-9311-defe6cd10ff9-kube-api-access-45576" (OuterVolumeSpecName: "kube-api-access-45576") pod "94728e01-e829-4d10-9311-defe6cd10ff9" (UID: "94728e01-e829-4d10-9311-defe6cd10ff9"). InnerVolumeSpecName "kube-api-access-45576". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:25:10 crc kubenswrapper[4731]: I1129 07:25:10.777921 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9abe846-7302-4ea1-8423-bc1a2e81d051-kube-api-access-vfklv" (OuterVolumeSpecName: "kube-api-access-vfklv") pod "a9abe846-7302-4ea1-8423-bc1a2e81d051" (UID: "a9abe846-7302-4ea1-8423-bc1a2e81d051"). InnerVolumeSpecName "kube-api-access-vfklv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:25:10 crc kubenswrapper[4731]: I1129 07:25:10.861496 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g9dvk\" (UniqueName: \"kubernetes.io/projected/d75320ff-8458-4ed0-977c-46e972527687-kube-api-access-g9dvk\") pod \"d75320ff-8458-4ed0-977c-46e972527687\" (UID: \"d75320ff-8458-4ed0-977c-46e972527687\") " Nov 29 07:25:10 crc kubenswrapper[4731]: I1129 07:25:10.861627 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d75320ff-8458-4ed0-977c-46e972527687-operator-scripts\") pod \"d75320ff-8458-4ed0-977c-46e972527687\" (UID: \"d75320ff-8458-4ed0-977c-46e972527687\") " Nov 29 07:25:10 crc kubenswrapper[4731]: I1129 07:25:10.862415 4731 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b04cdb0-e1e8-4807-8fd3-6f2086497c72-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:10 crc kubenswrapper[4731]: I1129 07:25:10.862436 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-45576\" (UniqueName: \"kubernetes.io/projected/94728e01-e829-4d10-9311-defe6cd10ff9-kube-api-access-45576\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:10 crc kubenswrapper[4731]: I1129 07:25:10.862452 4731 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94728e01-e829-4d10-9311-defe6cd10ff9-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:10 crc kubenswrapper[4731]: I1129 07:25:10.862465 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8jvc2\" (UniqueName: \"kubernetes.io/projected/5b04cdb0-e1e8-4807-8fd3-6f2086497c72-kube-api-access-8jvc2\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:10 crc kubenswrapper[4731]: I1129 07:25:10.862492 4731 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-vfklv\" (UniqueName: \"kubernetes.io/projected/a9abe846-7302-4ea1-8423-bc1a2e81d051-kube-api-access-vfklv\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:10 crc kubenswrapper[4731]: I1129 07:25:10.862503 4731 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9abe846-7302-4ea1-8423-bc1a2e81d051-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:10 crc kubenswrapper[4731]: I1129 07:25:10.863524 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d75320ff-8458-4ed0-977c-46e972527687-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d75320ff-8458-4ed0-977c-46e972527687" (UID: "d75320ff-8458-4ed0-977c-46e972527687"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:25:10 crc kubenswrapper[4731]: I1129 07:25:10.866877 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d75320ff-8458-4ed0-977c-46e972527687-kube-api-access-g9dvk" (OuterVolumeSpecName: "kube-api-access-g9dvk") pod "d75320ff-8458-4ed0-977c-46e972527687" (UID: "d75320ff-8458-4ed0-977c-46e972527687"). InnerVolumeSpecName "kube-api-access-g9dvk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:25:10 crc kubenswrapper[4731]: I1129 07:25:10.875928 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-ba51-account-create-update-drl7b" event={"ID":"a9abe846-7302-4ea1-8423-bc1a2e81d051","Type":"ContainerDied","Data":"04ebb15bcb74036c5a947bf5c2a41d9d59cb0efaca7bfe53b19228d0a8e11adf"} Nov 29 07:25:10 crc kubenswrapper[4731]: I1129 07:25:10.875987 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="04ebb15bcb74036c5a947bf5c2a41d9d59cb0efaca7bfe53b19228d0a8e11adf" Nov 29 07:25:10 crc kubenswrapper[4731]: I1129 07:25:10.876085 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-ba51-account-create-update-drl7b" Nov 29 07:25:10 crc kubenswrapper[4731]: I1129 07:25:10.884131 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-ktq2t" Nov 29 07:25:10 crc kubenswrapper[4731]: I1129 07:25:10.884217 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-ktq2t" event={"ID":"d75320ff-8458-4ed0-977c-46e972527687","Type":"ContainerDied","Data":"5c16f19cd7b31e218eb7b4e82fac8f83ca486403b4c0d764785c47c93d316488"} Nov 29 07:25:10 crc kubenswrapper[4731]: I1129 07:25:10.884305 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c16f19cd7b31e218eb7b4e82fac8f83ca486403b4c0d764785c47c93d316488" Nov 29 07:25:10 crc kubenswrapper[4731]: I1129 07:25:10.888222 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-22e7-account-create-update-svpcx" event={"ID":"94728e01-e829-4d10-9311-defe6cd10ff9","Type":"ContainerDied","Data":"7aa1b72b92178af905e705d7d4f905ad652f09f011c691b052ef37ff55d8b417"} Nov 29 07:25:10 crc kubenswrapper[4731]: I1129 07:25:10.888276 4731 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="7aa1b72b92178af905e705d7d4f905ad652f09f011c691b052ef37ff55d8b417" Nov 29 07:25:10 crc kubenswrapper[4731]: I1129 07:25:10.888326 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-22e7-account-create-update-svpcx" Nov 29 07:25:10 crc kubenswrapper[4731]: I1129 07:25:10.895134 4731 generic.go:334] "Generic (PLEG): container finished" podID="9b038345-8fed-4ada-844d-92ac4791d91b" containerID="99f5eca0cb414102d69458943898717e4102ca300092bf41647555f62a1823dd" exitCode=0 Nov 29 07:25:10 crc kubenswrapper[4731]: I1129 07:25:10.895266 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dc88fc-mbkd6" event={"ID":"9b038345-8fed-4ada-844d-92ac4791d91b","Type":"ContainerDied","Data":"99f5eca0cb414102d69458943898717e4102ca300092bf41647555f62a1823dd"} Nov 29 07:25:10 crc kubenswrapper[4731]: I1129 07:25:10.908499 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"739c0608-5471-42a6-b062-4355cd1894a0","Type":"ContainerStarted","Data":"a8bc9e8bd5e624df6057fefe13842cce7aaa40b89d09fe087954b4ccf933c398"} Nov 29 07:25:10 crc kubenswrapper[4731]: I1129 07:25:10.908593 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"739c0608-5471-42a6-b062-4355cd1894a0","Type":"ContainerStarted","Data":"5117c6ec700465ff0d212525ea1e2764e00a38f9e17cb28eaff4c68c32c689b2"} Nov 29 07:25:10 crc kubenswrapper[4731]: I1129 07:25:10.919314 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-wzpzv" event={"ID":"5b04cdb0-e1e8-4807-8fd3-6f2086497c72","Type":"ContainerDied","Data":"6d85f5164ca5272fb268443bdbb9266785d51ed30276381d4d2dbf8adbada38c"} Nov 29 07:25:10 crc kubenswrapper[4731]: I1129 07:25:10.919345 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d85f5164ca5272fb268443bdbb9266785d51ed30276381d4d2dbf8adbada38c" Nov 29 07:25:10 crc 
kubenswrapper[4731]: I1129 07:25:10.919730 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-wzpzv" Nov 29 07:25:10 crc kubenswrapper[4731]: I1129 07:25:10.923482 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-ss292" event={"ID":"456fff3a-5ed5-4def-b25d-3923d97a3577","Type":"ContainerDied","Data":"33b80beff04178a27bff4d950e88bcc8988b0584468010d62d4e17562e93c3de"} Nov 29 07:25:10 crc kubenswrapper[4731]: I1129 07:25:10.923585 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="33b80beff04178a27bff4d950e88bcc8988b0584468010d62d4e17562e93c3de" Nov 29 07:25:10 crc kubenswrapper[4731]: I1129 07:25:10.923711 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-ss292" Nov 29 07:25:10 crc kubenswrapper[4731]: I1129 07:25:10.938412 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-1fef-account-create-update-k9ddk" event={"ID":"450170e3-d7cb-4283-bae9-3350a8558f66","Type":"ContainerDied","Data":"9437b0e44a6b1b11889e0dd82f0b0526c76b7b14f7b4be388d815d238e8357ea"} Nov 29 07:25:10 crc kubenswrapper[4731]: I1129 07:25:10.938462 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9437b0e44a6b1b11889e0dd82f0b0526c76b7b14f7b4be388d815d238e8357ea" Nov 29 07:25:10 crc kubenswrapper[4731]: I1129 07:25:10.938555 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-1fef-account-create-update-k9ddk" Nov 29 07:25:10 crc kubenswrapper[4731]: I1129 07:25:10.964635 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g9dvk\" (UniqueName: \"kubernetes.io/projected/d75320ff-8458-4ed0-977c-46e972527687-kube-api-access-g9dvk\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:10 crc kubenswrapper[4731]: I1129 07:25:10.964680 4731 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d75320ff-8458-4ed0-977c-46e972527687-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:15 crc kubenswrapper[4731]: I1129 07:25:15.992358 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dc88fc-mbkd6" event={"ID":"9b038345-8fed-4ada-844d-92ac4791d91b","Type":"ContainerStarted","Data":"48cc413121e404b001eea07afe41c03a2b9e1e0b7ade90461e953410372c09b1"} Nov 29 07:25:15 crc kubenswrapper[4731]: I1129 07:25:15.993267 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74dc88fc-mbkd6" Nov 29 07:25:16 crc kubenswrapper[4731]: I1129 07:25:16.030043 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-74dc88fc-mbkd6" podStartSLOduration=8.030017801 podStartE2EDuration="8.030017801s" podCreationTimestamp="2025-11-29 07:25:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:25:16.020860564 +0000 UTC m=+1154.911221677" watchObservedRunningTime="2025-11-29 07:25:16.030017801 +0000 UTC m=+1154.920378904" Nov 29 07:25:21 crc kubenswrapper[4731]: I1129 07:25:21.050958 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"739c0608-5471-42a6-b062-4355cd1894a0","Type":"ContainerStarted","Data":"2076cd2a26ba8c9b3e632019ae67bf42f03184272120c95cf44de369c3b817a6"} Nov 29 
07:25:21 crc kubenswrapper[4731]: I1129 07:25:21.051765 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"739c0608-5471-42a6-b062-4355cd1894a0","Type":"ContainerStarted","Data":"d2ca1db4b405d0127fa1d5c76d303f0845e8f75da0ebb26a698c5c19d3b89c07"} Nov 29 07:25:21 crc kubenswrapper[4731]: I1129 07:25:21.051793 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"739c0608-5471-42a6-b062-4355cd1894a0","Type":"ContainerStarted","Data":"6a24ce29743d5e5e455ecfb6a878b240d03a2244f54ca1afba8e281bd99292fc"} Nov 29 07:25:21 crc kubenswrapper[4731]: I1129 07:25:21.054187 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-nft5q" event={"ID":"dcdaaa2a-ccbf-4158-8c8a-d5836dbdd111","Type":"ContainerStarted","Data":"6589041cde6b21ef09af6738cc65ac22979f13b42abafd743cfd680e5ed860b9"} Nov 29 07:25:21 crc kubenswrapper[4731]: I1129 07:25:21.083300 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-nft5q" podStartSLOduration=3.212241822 podStartE2EDuration="16.083270409s" podCreationTimestamp="2025-11-29 07:25:05 +0000 UTC" firstStartedPulling="2025-11-29 07:25:07.102374737 +0000 UTC m=+1145.992735830" lastFinishedPulling="2025-11-29 07:25:19.973403274 +0000 UTC m=+1158.863764417" observedRunningTime="2025-11-29 07:25:21.076290306 +0000 UTC m=+1159.966651409" watchObservedRunningTime="2025-11-29 07:25:21.083270409 +0000 UTC m=+1159.973631512" Nov 29 07:25:22 crc kubenswrapper[4731]: I1129 07:25:22.071835 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"739c0608-5471-42a6-b062-4355cd1894a0","Type":"ContainerStarted","Data":"a523877076a8ff7beab7a547c4ae5ed25c77c5065c0c7e4637746860e05f581a"} Nov 29 07:25:22 crc kubenswrapper[4731]: I1129 07:25:22.071915 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"739c0608-5471-42a6-b062-4355cd1894a0","Type":"ContainerStarted","Data":"fd6a940fecd1a005f8644451f9be9749503e4ccee07d8efd4e6a3ab8202d88ee"} Nov 29 07:25:22 crc kubenswrapper[4731]: I1129 07:25:22.071934 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"739c0608-5471-42a6-b062-4355cd1894a0","Type":"ContainerStarted","Data":"92ea3527892cccce17078b2118a19079b99a6c30b5d767605efc731321f07726"} Nov 29 07:25:22 crc kubenswrapper[4731]: I1129 07:25:22.071949 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"739c0608-5471-42a6-b062-4355cd1894a0","Type":"ContainerStarted","Data":"d4e394d0998879c2f2479f43bcbea3c16d9673727c64f88b168a3e57557cba82"} Nov 29 07:25:22 crc kubenswrapper[4731]: I1129 07:25:22.125777 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=36.485915456 podStartE2EDuration="53.125749356s" podCreationTimestamp="2025-11-29 07:24:29 +0000 UTC" firstStartedPulling="2025-11-29 07:25:03.33310079 +0000 UTC m=+1142.223461893" lastFinishedPulling="2025-11-29 07:25:19.97293469 +0000 UTC m=+1158.863295793" observedRunningTime="2025-11-29 07:25:22.116184646 +0000 UTC m=+1161.006545739" watchObservedRunningTime="2025-11-29 07:25:22.125749356 +0000 UTC m=+1161.016110499" Nov 29 07:25:22 crc kubenswrapper[4731]: I1129 07:25:22.466954 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-mbkd6"] Nov 29 07:25:22 crc kubenswrapper[4731]: I1129 07:25:22.467326 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-74dc88fc-mbkd6" podUID="9b038345-8fed-4ada-844d-92ac4791d91b" containerName="dnsmasq-dns" containerID="cri-o://48cc413121e404b001eea07afe41c03a2b9e1e0b7ade90461e953410372c09b1" gracePeriod=10 Nov 29 07:25:22 crc kubenswrapper[4731]: I1129 07:25:22.469835 4731 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/dnsmasq-dns-74dc88fc-mbkd6" Nov 29 07:25:22 crc kubenswrapper[4731]: I1129 07:25:22.501398 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-rph6r"] Nov 29 07:25:22 crc kubenswrapper[4731]: E1129 07:25:22.502104 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9abe846-7302-4ea1-8423-bc1a2e81d051" containerName="mariadb-account-create-update" Nov 29 07:25:22 crc kubenswrapper[4731]: I1129 07:25:22.502137 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9abe846-7302-4ea1-8423-bc1a2e81d051" containerName="mariadb-account-create-update" Nov 29 07:25:22 crc kubenswrapper[4731]: E1129 07:25:22.502154 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="456fff3a-5ed5-4def-b25d-3923d97a3577" containerName="mariadb-database-create" Nov 29 07:25:22 crc kubenswrapper[4731]: I1129 07:25:22.502164 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="456fff3a-5ed5-4def-b25d-3923d97a3577" containerName="mariadb-database-create" Nov 29 07:25:22 crc kubenswrapper[4731]: E1129 07:25:22.502185 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d75320ff-8458-4ed0-977c-46e972527687" containerName="mariadb-database-create" Nov 29 07:25:22 crc kubenswrapper[4731]: I1129 07:25:22.502194 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="d75320ff-8458-4ed0-977c-46e972527687" containerName="mariadb-database-create" Nov 29 07:25:22 crc kubenswrapper[4731]: E1129 07:25:22.502213 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="450170e3-d7cb-4283-bae9-3350a8558f66" containerName="mariadb-account-create-update" Nov 29 07:25:22 crc kubenswrapper[4731]: I1129 07:25:22.502221 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="450170e3-d7cb-4283-bae9-3350a8558f66" containerName="mariadb-account-create-update" Nov 29 07:25:22 crc kubenswrapper[4731]: E1129 07:25:22.502234 4731 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b04cdb0-e1e8-4807-8fd3-6f2086497c72" containerName="mariadb-database-create" Nov 29 07:25:22 crc kubenswrapper[4731]: I1129 07:25:22.502241 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b04cdb0-e1e8-4807-8fd3-6f2086497c72" containerName="mariadb-database-create" Nov 29 07:25:22 crc kubenswrapper[4731]: E1129 07:25:22.502259 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94728e01-e829-4d10-9311-defe6cd10ff9" containerName="mariadb-account-create-update" Nov 29 07:25:22 crc kubenswrapper[4731]: I1129 07:25:22.502269 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="94728e01-e829-4d10-9311-defe6cd10ff9" containerName="mariadb-account-create-update" Nov 29 07:25:22 crc kubenswrapper[4731]: I1129 07:25:22.502535 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b04cdb0-e1e8-4807-8fd3-6f2086497c72" containerName="mariadb-database-create" Nov 29 07:25:22 crc kubenswrapper[4731]: I1129 07:25:22.502572 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="450170e3-d7cb-4283-bae9-3350a8558f66" containerName="mariadb-account-create-update" Nov 29 07:25:22 crc kubenswrapper[4731]: I1129 07:25:22.502605 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="94728e01-e829-4d10-9311-defe6cd10ff9" containerName="mariadb-account-create-update" Nov 29 07:25:22 crc kubenswrapper[4731]: I1129 07:25:22.502623 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="d75320ff-8458-4ed0-977c-46e972527687" containerName="mariadb-database-create" Nov 29 07:25:22 crc kubenswrapper[4731]: I1129 07:25:22.502642 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9abe846-7302-4ea1-8423-bc1a2e81d051" containerName="mariadb-account-create-update" Nov 29 07:25:22 crc kubenswrapper[4731]: I1129 07:25:22.502650 4731 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="456fff3a-5ed5-4def-b25d-3923d97a3577" containerName="mariadb-database-create" Nov 29 07:25:22 crc kubenswrapper[4731]: I1129 07:25:22.504073 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-rph6r" Nov 29 07:25:22 crc kubenswrapper[4731]: I1129 07:25:22.510101 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Nov 29 07:25:22 crc kubenswrapper[4731]: I1129 07:25:22.523669 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-rph6r"] Nov 29 07:25:22 crc kubenswrapper[4731]: I1129 07:25:22.531617 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6db0f464-980b-441d-aa98-fcddc7d4fd49-config\") pod \"dnsmasq-dns-5f59b8f679-rph6r\" (UID: \"6db0f464-980b-441d-aa98-fcddc7d4fd49\") " pod="openstack/dnsmasq-dns-5f59b8f679-rph6r" Nov 29 07:25:22 crc kubenswrapper[4731]: I1129 07:25:22.531706 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9wqm\" (UniqueName: \"kubernetes.io/projected/6db0f464-980b-441d-aa98-fcddc7d4fd49-kube-api-access-n9wqm\") pod \"dnsmasq-dns-5f59b8f679-rph6r\" (UID: \"6db0f464-980b-441d-aa98-fcddc7d4fd49\") " pod="openstack/dnsmasq-dns-5f59b8f679-rph6r" Nov 29 07:25:22 crc kubenswrapper[4731]: I1129 07:25:22.531732 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6db0f464-980b-441d-aa98-fcddc7d4fd49-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-rph6r\" (UID: \"6db0f464-980b-441d-aa98-fcddc7d4fd49\") " pod="openstack/dnsmasq-dns-5f59b8f679-rph6r" Nov 29 07:25:22 crc kubenswrapper[4731]: I1129 07:25:22.531761 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6db0f464-980b-441d-aa98-fcddc7d4fd49-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-rph6r\" (UID: \"6db0f464-980b-441d-aa98-fcddc7d4fd49\") " pod="openstack/dnsmasq-dns-5f59b8f679-rph6r" Nov 29 07:25:22 crc kubenswrapper[4731]: I1129 07:25:22.531781 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6db0f464-980b-441d-aa98-fcddc7d4fd49-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-rph6r\" (UID: \"6db0f464-980b-441d-aa98-fcddc7d4fd49\") " pod="openstack/dnsmasq-dns-5f59b8f679-rph6r" Nov 29 07:25:22 crc kubenswrapper[4731]: I1129 07:25:22.531848 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6db0f464-980b-441d-aa98-fcddc7d4fd49-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-rph6r\" (UID: \"6db0f464-980b-441d-aa98-fcddc7d4fd49\") " pod="openstack/dnsmasq-dns-5f59b8f679-rph6r" Nov 29 07:25:22 crc kubenswrapper[4731]: I1129 07:25:22.633425 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6db0f464-980b-441d-aa98-fcddc7d4fd49-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-rph6r\" (UID: \"6db0f464-980b-441d-aa98-fcddc7d4fd49\") " pod="openstack/dnsmasq-dns-5f59b8f679-rph6r" Nov 29 07:25:22 crc kubenswrapper[4731]: I1129 07:25:22.633965 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6db0f464-980b-441d-aa98-fcddc7d4fd49-config\") pod \"dnsmasq-dns-5f59b8f679-rph6r\" (UID: \"6db0f464-980b-441d-aa98-fcddc7d4fd49\") " pod="openstack/dnsmasq-dns-5f59b8f679-rph6r" Nov 29 07:25:22 crc kubenswrapper[4731]: I1129 07:25:22.634042 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9wqm\" (UniqueName: 
\"kubernetes.io/projected/6db0f464-980b-441d-aa98-fcddc7d4fd49-kube-api-access-n9wqm\") pod \"dnsmasq-dns-5f59b8f679-rph6r\" (UID: \"6db0f464-980b-441d-aa98-fcddc7d4fd49\") " pod="openstack/dnsmasq-dns-5f59b8f679-rph6r" Nov 29 07:25:22 crc kubenswrapper[4731]: I1129 07:25:22.634074 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6db0f464-980b-441d-aa98-fcddc7d4fd49-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-rph6r\" (UID: \"6db0f464-980b-441d-aa98-fcddc7d4fd49\") " pod="openstack/dnsmasq-dns-5f59b8f679-rph6r" Nov 29 07:25:22 crc kubenswrapper[4731]: I1129 07:25:22.634114 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6db0f464-980b-441d-aa98-fcddc7d4fd49-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-rph6r\" (UID: \"6db0f464-980b-441d-aa98-fcddc7d4fd49\") " pod="openstack/dnsmasq-dns-5f59b8f679-rph6r" Nov 29 07:25:22 crc kubenswrapper[4731]: I1129 07:25:22.634145 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6db0f464-980b-441d-aa98-fcddc7d4fd49-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-rph6r\" (UID: \"6db0f464-980b-441d-aa98-fcddc7d4fd49\") " pod="openstack/dnsmasq-dns-5f59b8f679-rph6r" Nov 29 07:25:22 crc kubenswrapper[4731]: I1129 07:25:22.635314 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6db0f464-980b-441d-aa98-fcddc7d4fd49-config\") pod \"dnsmasq-dns-5f59b8f679-rph6r\" (UID: \"6db0f464-980b-441d-aa98-fcddc7d4fd49\") " pod="openstack/dnsmasq-dns-5f59b8f679-rph6r" Nov 29 07:25:22 crc kubenswrapper[4731]: I1129 07:25:22.635541 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6db0f464-980b-441d-aa98-fcddc7d4fd49-ovsdbserver-nb\") 
pod \"dnsmasq-dns-5f59b8f679-rph6r\" (UID: \"6db0f464-980b-441d-aa98-fcddc7d4fd49\") " pod="openstack/dnsmasq-dns-5f59b8f679-rph6r" Nov 29 07:25:22 crc kubenswrapper[4731]: I1129 07:25:22.635681 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6db0f464-980b-441d-aa98-fcddc7d4fd49-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-rph6r\" (UID: \"6db0f464-980b-441d-aa98-fcddc7d4fd49\") " pod="openstack/dnsmasq-dns-5f59b8f679-rph6r" Nov 29 07:25:22 crc kubenswrapper[4731]: I1129 07:25:22.638453 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6db0f464-980b-441d-aa98-fcddc7d4fd49-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-rph6r\" (UID: \"6db0f464-980b-441d-aa98-fcddc7d4fd49\") " pod="openstack/dnsmasq-dns-5f59b8f679-rph6r" Nov 29 07:25:22 crc kubenswrapper[4731]: I1129 07:25:22.640914 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6db0f464-980b-441d-aa98-fcddc7d4fd49-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-rph6r\" (UID: \"6db0f464-980b-441d-aa98-fcddc7d4fd49\") " pod="openstack/dnsmasq-dns-5f59b8f679-rph6r" Nov 29 07:25:22 crc kubenswrapper[4731]: I1129 07:25:22.657911 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9wqm\" (UniqueName: \"kubernetes.io/projected/6db0f464-980b-441d-aa98-fcddc7d4fd49-kube-api-access-n9wqm\") pod \"dnsmasq-dns-5f59b8f679-rph6r\" (UID: \"6db0f464-980b-441d-aa98-fcddc7d4fd49\") " pod="openstack/dnsmasq-dns-5f59b8f679-rph6r" Nov 29 07:25:22 crc kubenswrapper[4731]: I1129 07:25:22.836585 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-rph6r" Nov 29 07:25:22 crc kubenswrapper[4731]: I1129 07:25:22.981219 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74dc88fc-mbkd6" Nov 29 07:25:23 crc kubenswrapper[4731]: I1129 07:25:23.043117 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9b038345-8fed-4ada-844d-92ac4791d91b-ovsdbserver-nb\") pod \"9b038345-8fed-4ada-844d-92ac4791d91b\" (UID: \"9b038345-8fed-4ada-844d-92ac4791d91b\") " Nov 29 07:25:23 crc kubenswrapper[4731]: I1129 07:25:23.043231 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b038345-8fed-4ada-844d-92ac4791d91b-config\") pod \"9b038345-8fed-4ada-844d-92ac4791d91b\" (UID: \"9b038345-8fed-4ada-844d-92ac4791d91b\") " Nov 29 07:25:23 crc kubenswrapper[4731]: I1129 07:25:23.043477 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9b038345-8fed-4ada-844d-92ac4791d91b-dns-svc\") pod \"9b038345-8fed-4ada-844d-92ac4791d91b\" (UID: \"9b038345-8fed-4ada-844d-92ac4791d91b\") " Nov 29 07:25:23 crc kubenswrapper[4731]: I1129 07:25:23.044102 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9b038345-8fed-4ada-844d-92ac4791d91b-ovsdbserver-sb\") pod \"9b038345-8fed-4ada-844d-92ac4791d91b\" (UID: \"9b038345-8fed-4ada-844d-92ac4791d91b\") " Nov 29 07:25:23 crc kubenswrapper[4731]: I1129 07:25:23.044186 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ljmgb\" (UniqueName: \"kubernetes.io/projected/9b038345-8fed-4ada-844d-92ac4791d91b-kube-api-access-ljmgb\") pod \"9b038345-8fed-4ada-844d-92ac4791d91b\" (UID: \"9b038345-8fed-4ada-844d-92ac4791d91b\") " Nov 29 07:25:23 crc kubenswrapper[4731]: I1129 07:25:23.060056 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/9b038345-8fed-4ada-844d-92ac4791d91b-kube-api-access-ljmgb" (OuterVolumeSpecName: "kube-api-access-ljmgb") pod "9b038345-8fed-4ada-844d-92ac4791d91b" (UID: "9b038345-8fed-4ada-844d-92ac4791d91b"). InnerVolumeSpecName "kube-api-access-ljmgb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:25:23 crc kubenswrapper[4731]: I1129 07:25:23.108626 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b038345-8fed-4ada-844d-92ac4791d91b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9b038345-8fed-4ada-844d-92ac4791d91b" (UID: "9b038345-8fed-4ada-844d-92ac4791d91b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:25:23 crc kubenswrapper[4731]: I1129 07:25:23.110288 4731 generic.go:334] "Generic (PLEG): container finished" podID="9b038345-8fed-4ada-844d-92ac4791d91b" containerID="48cc413121e404b001eea07afe41c03a2b9e1e0b7ade90461e953410372c09b1" exitCode=0 Nov 29 07:25:23 crc kubenswrapper[4731]: I1129 07:25:23.112243 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74dc88fc-mbkd6" Nov 29 07:25:23 crc kubenswrapper[4731]: I1129 07:25:23.112912 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dc88fc-mbkd6" event={"ID":"9b038345-8fed-4ada-844d-92ac4791d91b","Type":"ContainerDied","Data":"48cc413121e404b001eea07afe41c03a2b9e1e0b7ade90461e953410372c09b1"} Nov 29 07:25:23 crc kubenswrapper[4731]: I1129 07:25:23.112957 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dc88fc-mbkd6" event={"ID":"9b038345-8fed-4ada-844d-92ac4791d91b","Type":"ContainerDied","Data":"c3d27d83ebbe71d3b3139738476b061d2159fd0e8a91d73edc26f678d6e0e026"} Nov 29 07:25:23 crc kubenswrapper[4731]: I1129 07:25:23.112979 4731 scope.go:117] "RemoveContainer" containerID="48cc413121e404b001eea07afe41c03a2b9e1e0b7ade90461e953410372c09b1" Nov 29 07:25:23 crc kubenswrapper[4731]: I1129 07:25:23.117764 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b038345-8fed-4ada-844d-92ac4791d91b-config" (OuterVolumeSpecName: "config") pod "9b038345-8fed-4ada-844d-92ac4791d91b" (UID: "9b038345-8fed-4ada-844d-92ac4791d91b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:25:23 crc kubenswrapper[4731]: I1129 07:25:23.124562 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b038345-8fed-4ada-844d-92ac4791d91b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9b038345-8fed-4ada-844d-92ac4791d91b" (UID: "9b038345-8fed-4ada-844d-92ac4791d91b"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:25:23 crc kubenswrapper[4731]: I1129 07:25:23.142148 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b038345-8fed-4ada-844d-92ac4791d91b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9b038345-8fed-4ada-844d-92ac4791d91b" (UID: "9b038345-8fed-4ada-844d-92ac4791d91b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:25:23 crc kubenswrapper[4731]: I1129 07:25:23.152049 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ljmgb\" (UniqueName: \"kubernetes.io/projected/9b038345-8fed-4ada-844d-92ac4791d91b-kube-api-access-ljmgb\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:23 crc kubenswrapper[4731]: I1129 07:25:23.152094 4731 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9b038345-8fed-4ada-844d-92ac4791d91b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:23 crc kubenswrapper[4731]: I1129 07:25:23.152117 4731 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b038345-8fed-4ada-844d-92ac4791d91b-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:23 crc kubenswrapper[4731]: I1129 07:25:23.152131 4731 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9b038345-8fed-4ada-844d-92ac4791d91b-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:23 crc kubenswrapper[4731]: I1129 07:25:23.152142 4731 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9b038345-8fed-4ada-844d-92ac4791d91b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:23 crc kubenswrapper[4731]: I1129 07:25:23.157355 4731 scope.go:117] "RemoveContainer" 
containerID="99f5eca0cb414102d69458943898717e4102ca300092bf41647555f62a1823dd" Nov 29 07:25:23 crc kubenswrapper[4731]: I1129 07:25:23.187473 4731 scope.go:117] "RemoveContainer" containerID="48cc413121e404b001eea07afe41c03a2b9e1e0b7ade90461e953410372c09b1" Nov 29 07:25:23 crc kubenswrapper[4731]: E1129 07:25:23.189102 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48cc413121e404b001eea07afe41c03a2b9e1e0b7ade90461e953410372c09b1\": container with ID starting with 48cc413121e404b001eea07afe41c03a2b9e1e0b7ade90461e953410372c09b1 not found: ID does not exist" containerID="48cc413121e404b001eea07afe41c03a2b9e1e0b7ade90461e953410372c09b1" Nov 29 07:25:23 crc kubenswrapper[4731]: I1129 07:25:23.189180 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48cc413121e404b001eea07afe41c03a2b9e1e0b7ade90461e953410372c09b1"} err="failed to get container status \"48cc413121e404b001eea07afe41c03a2b9e1e0b7ade90461e953410372c09b1\": rpc error: code = NotFound desc = could not find container \"48cc413121e404b001eea07afe41c03a2b9e1e0b7ade90461e953410372c09b1\": container with ID starting with 48cc413121e404b001eea07afe41c03a2b9e1e0b7ade90461e953410372c09b1 not found: ID does not exist" Nov 29 07:25:23 crc kubenswrapper[4731]: I1129 07:25:23.189231 4731 scope.go:117] "RemoveContainer" containerID="99f5eca0cb414102d69458943898717e4102ca300092bf41647555f62a1823dd" Nov 29 07:25:23 crc kubenswrapper[4731]: E1129 07:25:23.189822 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99f5eca0cb414102d69458943898717e4102ca300092bf41647555f62a1823dd\": container with ID starting with 99f5eca0cb414102d69458943898717e4102ca300092bf41647555f62a1823dd not found: ID does not exist" containerID="99f5eca0cb414102d69458943898717e4102ca300092bf41647555f62a1823dd" Nov 29 07:25:23 crc 
kubenswrapper[4731]: I1129 07:25:23.189864 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99f5eca0cb414102d69458943898717e4102ca300092bf41647555f62a1823dd"} err="failed to get container status \"99f5eca0cb414102d69458943898717e4102ca300092bf41647555f62a1823dd\": rpc error: code = NotFound desc = could not find container \"99f5eca0cb414102d69458943898717e4102ca300092bf41647555f62a1823dd\": container with ID starting with 99f5eca0cb414102d69458943898717e4102ca300092bf41647555f62a1823dd not found: ID does not exist" Nov 29 07:25:23 crc kubenswrapper[4731]: I1129 07:25:23.341696 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-rph6r"] Nov 29 07:25:23 crc kubenswrapper[4731]: W1129 07:25:23.347552 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6db0f464_980b_441d_aa98_fcddc7d4fd49.slice/crio-649f8f960789a3c3e336ebc706e7a1dd660053245fa31f2eb0b3c907e3a50e39 WatchSource:0}: Error finding container 649f8f960789a3c3e336ebc706e7a1dd660053245fa31f2eb0b3c907e3a50e39: Status 404 returned error can't find the container with id 649f8f960789a3c3e336ebc706e7a1dd660053245fa31f2eb0b3c907e3a50e39 Nov 29 07:25:23 crc kubenswrapper[4731]: I1129 07:25:23.458997 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-mbkd6"] Nov 29 07:25:23 crc kubenswrapper[4731]: I1129 07:25:23.468047 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-mbkd6"] Nov 29 07:25:23 crc kubenswrapper[4731]: I1129 07:25:23.818497 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b038345-8fed-4ada-844d-92ac4791d91b" path="/var/lib/kubelet/pods/9b038345-8fed-4ada-844d-92ac4791d91b/volumes" Nov 29 07:25:24 crc kubenswrapper[4731]: I1129 07:25:24.121067 4731 generic.go:334] "Generic (PLEG): container finished" 
podID="6db0f464-980b-441d-aa98-fcddc7d4fd49" containerID="2d00b79e0b48dfaa20703e6519ad939cb910fdcb56e980753831664f13258057" exitCode=0 Nov 29 07:25:24 crc kubenswrapper[4731]: I1129 07:25:24.121179 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-rph6r" event={"ID":"6db0f464-980b-441d-aa98-fcddc7d4fd49","Type":"ContainerDied","Data":"2d00b79e0b48dfaa20703e6519ad939cb910fdcb56e980753831664f13258057"} Nov 29 07:25:24 crc kubenswrapper[4731]: I1129 07:25:24.121282 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-rph6r" event={"ID":"6db0f464-980b-441d-aa98-fcddc7d4fd49","Type":"ContainerStarted","Data":"649f8f960789a3c3e336ebc706e7a1dd660053245fa31f2eb0b3c907e3a50e39"} Nov 29 07:25:24 crc kubenswrapper[4731]: I1129 07:25:24.124047 4731 generic.go:334] "Generic (PLEG): container finished" podID="dcdaaa2a-ccbf-4158-8c8a-d5836dbdd111" containerID="6589041cde6b21ef09af6738cc65ac22979f13b42abafd743cfd680e5ed860b9" exitCode=0 Nov 29 07:25:24 crc kubenswrapper[4731]: I1129 07:25:24.124094 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-nft5q" event={"ID":"dcdaaa2a-ccbf-4158-8c8a-d5836dbdd111","Type":"ContainerDied","Data":"6589041cde6b21ef09af6738cc65ac22979f13b42abafd743cfd680e5ed860b9"} Nov 29 07:25:25 crc kubenswrapper[4731]: I1129 07:25:25.137768 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-rph6r" event={"ID":"6db0f464-980b-441d-aa98-fcddc7d4fd49","Type":"ContainerStarted","Data":"a09eb9953f8560b2bad4a5fbeecd4030ecaf3bbcf7c5f4207d0759b7b75e4ef5"} Nov 29 07:25:25 crc kubenswrapper[4731]: I1129 07:25:25.162734 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5f59b8f679-rph6r" podStartSLOduration=3.162719125 podStartE2EDuration="3.162719125s" podCreationTimestamp="2025-11-29 07:25:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:25:25.159166271 +0000 UTC m=+1164.049527384" watchObservedRunningTime="2025-11-29 07:25:25.162719125 +0000 UTC m=+1164.053080228" Nov 29 07:25:25 crc kubenswrapper[4731]: I1129 07:25:25.490832 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-nft5q" Nov 29 07:25:25 crc kubenswrapper[4731]: I1129 07:25:25.605246 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcdaaa2a-ccbf-4158-8c8a-d5836dbdd111-config-data\") pod \"dcdaaa2a-ccbf-4158-8c8a-d5836dbdd111\" (UID: \"dcdaaa2a-ccbf-4158-8c8a-d5836dbdd111\") " Nov 29 07:25:25 crc kubenswrapper[4731]: I1129 07:25:25.605395 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4j8sh\" (UniqueName: \"kubernetes.io/projected/dcdaaa2a-ccbf-4158-8c8a-d5836dbdd111-kube-api-access-4j8sh\") pod \"dcdaaa2a-ccbf-4158-8c8a-d5836dbdd111\" (UID: \"dcdaaa2a-ccbf-4158-8c8a-d5836dbdd111\") " Nov 29 07:25:25 crc kubenswrapper[4731]: I1129 07:25:25.605510 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcdaaa2a-ccbf-4158-8c8a-d5836dbdd111-combined-ca-bundle\") pod \"dcdaaa2a-ccbf-4158-8c8a-d5836dbdd111\" (UID: \"dcdaaa2a-ccbf-4158-8c8a-d5836dbdd111\") " Nov 29 07:25:25 crc kubenswrapper[4731]: I1129 07:25:25.624814 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcdaaa2a-ccbf-4158-8c8a-d5836dbdd111-kube-api-access-4j8sh" (OuterVolumeSpecName: "kube-api-access-4j8sh") pod "dcdaaa2a-ccbf-4158-8c8a-d5836dbdd111" (UID: "dcdaaa2a-ccbf-4158-8c8a-d5836dbdd111"). InnerVolumeSpecName "kube-api-access-4j8sh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:25:25 crc kubenswrapper[4731]: I1129 07:25:25.638837 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcdaaa2a-ccbf-4158-8c8a-d5836dbdd111-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dcdaaa2a-ccbf-4158-8c8a-d5836dbdd111" (UID: "dcdaaa2a-ccbf-4158-8c8a-d5836dbdd111"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:25:25 crc kubenswrapper[4731]: I1129 07:25:25.664865 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcdaaa2a-ccbf-4158-8c8a-d5836dbdd111-config-data" (OuterVolumeSpecName: "config-data") pod "dcdaaa2a-ccbf-4158-8c8a-d5836dbdd111" (UID: "dcdaaa2a-ccbf-4158-8c8a-d5836dbdd111"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:25:25 crc kubenswrapper[4731]: I1129 07:25:25.707449 4731 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcdaaa2a-ccbf-4158-8c8a-d5836dbdd111-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:25 crc kubenswrapper[4731]: I1129 07:25:25.707500 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4j8sh\" (UniqueName: \"kubernetes.io/projected/dcdaaa2a-ccbf-4158-8c8a-d5836dbdd111-kube-api-access-4j8sh\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:25 crc kubenswrapper[4731]: I1129 07:25:25.707517 4731 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcdaaa2a-ccbf-4158-8c8a-d5836dbdd111-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:26 crc kubenswrapper[4731]: I1129 07:25:26.151853 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-nft5q" Nov 29 07:25:26 crc kubenswrapper[4731]: I1129 07:25:26.152828 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-nft5q" event={"ID":"dcdaaa2a-ccbf-4158-8c8a-d5836dbdd111","Type":"ContainerDied","Data":"a0fec04919f7387e22f79c3c4ec4745e9591014c2175e6cc6f943d0fa608a854"} Nov 29 07:25:26 crc kubenswrapper[4731]: I1129 07:25:26.152874 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a0fec04919f7387e22f79c3c4ec4745e9591014c2175e6cc6f943d0fa608a854" Nov 29 07:25:26 crc kubenswrapper[4731]: I1129 07:25:26.152896 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5f59b8f679-rph6r" Nov 29 07:25:26 crc kubenswrapper[4731]: I1129 07:25:26.474290 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-kb6sz"] Nov 29 07:25:26 crc kubenswrapper[4731]: E1129 07:25:26.475098 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b038345-8fed-4ada-844d-92ac4791d91b" containerName="init" Nov 29 07:25:26 crc kubenswrapper[4731]: I1129 07:25:26.475118 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b038345-8fed-4ada-844d-92ac4791d91b" containerName="init" Nov 29 07:25:26 crc kubenswrapper[4731]: E1129 07:25:26.475139 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b038345-8fed-4ada-844d-92ac4791d91b" containerName="dnsmasq-dns" Nov 29 07:25:26 crc kubenswrapper[4731]: I1129 07:25:26.475146 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b038345-8fed-4ada-844d-92ac4791d91b" containerName="dnsmasq-dns" Nov 29 07:25:26 crc kubenswrapper[4731]: E1129 07:25:26.475163 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcdaaa2a-ccbf-4158-8c8a-d5836dbdd111" containerName="keystone-db-sync" Nov 29 07:25:26 crc kubenswrapper[4731]: I1129 07:25:26.475170 4731 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="dcdaaa2a-ccbf-4158-8c8a-d5836dbdd111" containerName="keystone-db-sync" Nov 29 07:25:26 crc kubenswrapper[4731]: I1129 07:25:26.475430 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcdaaa2a-ccbf-4158-8c8a-d5836dbdd111" containerName="keystone-db-sync" Nov 29 07:25:26 crc kubenswrapper[4731]: I1129 07:25:26.475447 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b038345-8fed-4ada-844d-92ac4791d91b" containerName="dnsmasq-dns" Nov 29 07:25:26 crc kubenswrapper[4731]: I1129 07:25:26.476064 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-kb6sz" Nov 29 07:25:26 crc kubenswrapper[4731]: I1129 07:25:26.480190 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 29 07:25:26 crc kubenswrapper[4731]: I1129 07:25:26.480564 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 29 07:25:26 crc kubenswrapper[4731]: I1129 07:25:26.480731 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 29 07:25:26 crc kubenswrapper[4731]: I1129 07:25:26.480976 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 29 07:25:26 crc kubenswrapper[4731]: I1129 07:25:26.481117 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-4wnsc" Nov 29 07:25:26 crc kubenswrapper[4731]: I1129 07:25:26.509223 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-kb6sz"] Nov 29 07:25:26 crc kubenswrapper[4731]: I1129 07:25:26.575525 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-rph6r"] Nov 29 07:25:26 crc kubenswrapper[4731]: I1129 07:25:26.611073 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-6crwl"] Nov 29 07:25:26 crc 
kubenswrapper[4731]: I1129 07:25:26.612998 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-6crwl" Nov 29 07:25:26 crc kubenswrapper[4731]: I1129 07:25:26.623393 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/38f8e9cf-be31-447d-9e2f-0efad4bc3703-fernet-keys\") pod \"keystone-bootstrap-kb6sz\" (UID: \"38f8e9cf-be31-447d-9e2f-0efad4bc3703\") " pod="openstack/keystone-bootstrap-kb6sz" Nov 29 07:25:26 crc kubenswrapper[4731]: I1129 07:25:26.623464 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38f8e9cf-be31-447d-9e2f-0efad4bc3703-config-data\") pod \"keystone-bootstrap-kb6sz\" (UID: \"38f8e9cf-be31-447d-9e2f-0efad4bc3703\") " pod="openstack/keystone-bootstrap-kb6sz" Nov 29 07:25:26 crc kubenswrapper[4731]: I1129 07:25:26.623496 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38f8e9cf-be31-447d-9e2f-0efad4bc3703-combined-ca-bundle\") pod \"keystone-bootstrap-kb6sz\" (UID: \"38f8e9cf-be31-447d-9e2f-0efad4bc3703\") " pod="openstack/keystone-bootstrap-kb6sz" Nov 29 07:25:26 crc kubenswrapper[4731]: I1129 07:25:26.623535 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/38f8e9cf-be31-447d-9e2f-0efad4bc3703-credential-keys\") pod \"keystone-bootstrap-kb6sz\" (UID: \"38f8e9cf-be31-447d-9e2f-0efad4bc3703\") " pod="openstack/keystone-bootstrap-kb6sz" Nov 29 07:25:26 crc kubenswrapper[4731]: I1129 07:25:26.623591 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/38f8e9cf-be31-447d-9e2f-0efad4bc3703-scripts\") pod \"keystone-bootstrap-kb6sz\" (UID: \"38f8e9cf-be31-447d-9e2f-0efad4bc3703\") " pod="openstack/keystone-bootstrap-kb6sz" Nov 29 07:25:26 crc kubenswrapper[4731]: I1129 07:25:26.623635 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8j9fz\" (UniqueName: \"kubernetes.io/projected/38f8e9cf-be31-447d-9e2f-0efad4bc3703-kube-api-access-8j9fz\") pod \"keystone-bootstrap-kb6sz\" (UID: \"38f8e9cf-be31-447d-9e2f-0efad4bc3703\") " pod="openstack/keystone-bootstrap-kb6sz" Nov 29 07:25:26 crc kubenswrapper[4731]: I1129 07:25:26.625333 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-6crwl"] Nov 29 07:25:26 crc kubenswrapper[4731]: I1129 07:25:26.713202 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-zcx9z"] Nov 29 07:25:26 crc kubenswrapper[4731]: I1129 07:25:26.714715 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-zcx9z" Nov 29 07:25:26 crc kubenswrapper[4731]: I1129 07:25:26.718235 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 29 07:25:26 crc kubenswrapper[4731]: I1129 07:25:26.718481 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-9dbfp" Nov 29 07:25:26 crc kubenswrapper[4731]: I1129 07:25:26.719304 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 29 07:25:26 crc kubenswrapper[4731]: I1129 07:25:26.725403 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d06ee632-6fed-4e8b-a8e1-db2f0d542f97-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-6crwl\" (UID: \"d06ee632-6fed-4e8b-a8e1-db2f0d542f97\") " pod="openstack/dnsmasq-dns-bbf5cc879-6crwl" Nov 29 07:25:26 crc kubenswrapper[4731]: I1129 07:25:26.725527 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d06ee632-6fed-4e8b-a8e1-db2f0d542f97-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-6crwl\" (UID: \"d06ee632-6fed-4e8b-a8e1-db2f0d542f97\") " pod="openstack/dnsmasq-dns-bbf5cc879-6crwl" Nov 29 07:25:26 crc kubenswrapper[4731]: I1129 07:25:26.725619 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d06ee632-6fed-4e8b-a8e1-db2f0d542f97-config\") pod \"dnsmasq-dns-bbf5cc879-6crwl\" (UID: \"d06ee632-6fed-4e8b-a8e1-db2f0d542f97\") " pod="openstack/dnsmasq-dns-bbf5cc879-6crwl" Nov 29 07:25:26 crc kubenswrapper[4731]: I1129 07:25:26.725699 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/38f8e9cf-be31-447d-9e2f-0efad4bc3703-config-data\") pod \"keystone-bootstrap-kb6sz\" (UID: \"38f8e9cf-be31-447d-9e2f-0efad4bc3703\") " pod="openstack/keystone-bootstrap-kb6sz" Nov 29 07:25:26 crc kubenswrapper[4731]: I1129 07:25:26.725753 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38f8e9cf-be31-447d-9e2f-0efad4bc3703-combined-ca-bundle\") pod \"keystone-bootstrap-kb6sz\" (UID: \"38f8e9cf-be31-447d-9e2f-0efad4bc3703\") " pod="openstack/keystone-bootstrap-kb6sz" Nov 29 07:25:26 crc kubenswrapper[4731]: I1129 07:25:26.725797 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/38f8e9cf-be31-447d-9e2f-0efad4bc3703-credential-keys\") pod \"keystone-bootstrap-kb6sz\" (UID: \"38f8e9cf-be31-447d-9e2f-0efad4bc3703\") " pod="openstack/keystone-bootstrap-kb6sz" Nov 29 07:25:26 crc kubenswrapper[4731]: I1129 07:25:26.725863 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38f8e9cf-be31-447d-9e2f-0efad4bc3703-scripts\") pod \"keystone-bootstrap-kb6sz\" (UID: \"38f8e9cf-be31-447d-9e2f-0efad4bc3703\") " pod="openstack/keystone-bootstrap-kb6sz" Nov 29 07:25:26 crc kubenswrapper[4731]: I1129 07:25:26.725954 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8j9fz\" (UniqueName: \"kubernetes.io/projected/38f8e9cf-be31-447d-9e2f-0efad4bc3703-kube-api-access-8j9fz\") pod \"keystone-bootstrap-kb6sz\" (UID: \"38f8e9cf-be31-447d-9e2f-0efad4bc3703\") " pod="openstack/keystone-bootstrap-kb6sz" Nov 29 07:25:26 crc kubenswrapper[4731]: I1129 07:25:26.726001 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d06ee632-6fed-4e8b-a8e1-db2f0d542f97-dns-svc\") pod 
\"dnsmasq-dns-bbf5cc879-6crwl\" (UID: \"d06ee632-6fed-4e8b-a8e1-db2f0d542f97\") " pod="openstack/dnsmasq-dns-bbf5cc879-6crwl" Nov 29 07:25:26 crc kubenswrapper[4731]: I1129 07:25:26.726091 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d06ee632-6fed-4e8b-a8e1-db2f0d542f97-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-6crwl\" (UID: \"d06ee632-6fed-4e8b-a8e1-db2f0d542f97\") " pod="openstack/dnsmasq-dns-bbf5cc879-6crwl" Nov 29 07:25:26 crc kubenswrapper[4731]: I1129 07:25:26.726140 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8h7g\" (UniqueName: \"kubernetes.io/projected/d06ee632-6fed-4e8b-a8e1-db2f0d542f97-kube-api-access-b8h7g\") pod \"dnsmasq-dns-bbf5cc879-6crwl\" (UID: \"d06ee632-6fed-4e8b-a8e1-db2f0d542f97\") " pod="openstack/dnsmasq-dns-bbf5cc879-6crwl" Nov 29 07:25:26 crc kubenswrapper[4731]: I1129 07:25:26.726220 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/38f8e9cf-be31-447d-9e2f-0efad4bc3703-fernet-keys\") pod \"keystone-bootstrap-kb6sz\" (UID: \"38f8e9cf-be31-447d-9e2f-0efad4bc3703\") " pod="openstack/keystone-bootstrap-kb6sz" Nov 29 07:25:26 crc kubenswrapper[4731]: I1129 07:25:26.740729 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/38f8e9cf-be31-447d-9e2f-0efad4bc3703-fernet-keys\") pod \"keystone-bootstrap-kb6sz\" (UID: \"38f8e9cf-be31-447d-9e2f-0efad4bc3703\") " pod="openstack/keystone-bootstrap-kb6sz" Nov 29 07:25:26 crc kubenswrapper[4731]: I1129 07:25:26.744754 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-zcx9z"] Nov 29 07:25:26 crc kubenswrapper[4731]: I1129 07:25:26.756138 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/38f8e9cf-be31-447d-9e2f-0efad4bc3703-config-data\") pod \"keystone-bootstrap-kb6sz\" (UID: \"38f8e9cf-be31-447d-9e2f-0efad4bc3703\") " pod="openstack/keystone-bootstrap-kb6sz" Nov 29 07:25:26 crc kubenswrapper[4731]: I1129 07:25:26.761080 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38f8e9cf-be31-447d-9e2f-0efad4bc3703-combined-ca-bundle\") pod \"keystone-bootstrap-kb6sz\" (UID: \"38f8e9cf-be31-447d-9e2f-0efad4bc3703\") " pod="openstack/keystone-bootstrap-kb6sz" Nov 29 07:25:26 crc kubenswrapper[4731]: I1129 07:25:26.764793 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/38f8e9cf-be31-447d-9e2f-0efad4bc3703-credential-keys\") pod \"keystone-bootstrap-kb6sz\" (UID: \"38f8e9cf-be31-447d-9e2f-0efad4bc3703\") " pod="openstack/keystone-bootstrap-kb6sz" Nov 29 07:25:26 crc kubenswrapper[4731]: I1129 07:25:26.768389 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38f8e9cf-be31-447d-9e2f-0efad4bc3703-scripts\") pod \"keystone-bootstrap-kb6sz\" (UID: \"38f8e9cf-be31-447d-9e2f-0efad4bc3703\") " pod="openstack/keystone-bootstrap-kb6sz" Nov 29 07:25:26 crc kubenswrapper[4731]: I1129 07:25:26.773912 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8j9fz\" (UniqueName: \"kubernetes.io/projected/38f8e9cf-be31-447d-9e2f-0efad4bc3703-kube-api-access-8j9fz\") pod \"keystone-bootstrap-kb6sz\" (UID: \"38f8e9cf-be31-447d-9e2f-0efad4bc3703\") " pod="openstack/keystone-bootstrap-kb6sz" Nov 29 07:25:26 crc kubenswrapper[4731]: I1129 07:25:26.811188 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-kb6sz" Nov 29 07:25:26 crc kubenswrapper[4731]: I1129 07:25:26.831655 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d06ee632-6fed-4e8b-a8e1-db2f0d542f97-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-6crwl\" (UID: \"d06ee632-6fed-4e8b-a8e1-db2f0d542f97\") " pod="openstack/dnsmasq-dns-bbf5cc879-6crwl" Nov 29 07:25:26 crc kubenswrapper[4731]: I1129 07:25:26.831741 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9af027cc-cbd4-4f3a-ad25-2ef5b126d590-config-data\") pod \"cinder-db-sync-zcx9z\" (UID: \"9af027cc-cbd4-4f3a-ad25-2ef5b126d590\") " pod="openstack/cinder-db-sync-zcx9z" Nov 29 07:25:26 crc kubenswrapper[4731]: I1129 07:25:26.831780 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d06ee632-6fed-4e8b-a8e1-db2f0d542f97-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-6crwl\" (UID: \"d06ee632-6fed-4e8b-a8e1-db2f0d542f97\") " pod="openstack/dnsmasq-dns-bbf5cc879-6crwl" Nov 29 07:25:26 crc kubenswrapper[4731]: I1129 07:25:26.831814 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8h7g\" (UniqueName: \"kubernetes.io/projected/d06ee632-6fed-4e8b-a8e1-db2f0d542f97-kube-api-access-b8h7g\") pod \"dnsmasq-dns-bbf5cc879-6crwl\" (UID: \"d06ee632-6fed-4e8b-a8e1-db2f0d542f97\") " pod="openstack/dnsmasq-dns-bbf5cc879-6crwl" Nov 29 07:25:26 crc kubenswrapper[4731]: I1129 07:25:26.831864 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qh9fs\" (UniqueName: \"kubernetes.io/projected/9af027cc-cbd4-4f3a-ad25-2ef5b126d590-kube-api-access-qh9fs\") pod \"cinder-db-sync-zcx9z\" (UID: \"9af027cc-cbd4-4f3a-ad25-2ef5b126d590\") " 
pod="openstack/cinder-db-sync-zcx9z" Nov 29 07:25:26 crc kubenswrapper[4731]: I1129 07:25:26.831895 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d06ee632-6fed-4e8b-a8e1-db2f0d542f97-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-6crwl\" (UID: \"d06ee632-6fed-4e8b-a8e1-db2f0d542f97\") " pod="openstack/dnsmasq-dns-bbf5cc879-6crwl" Nov 29 07:25:26 crc kubenswrapper[4731]: I1129 07:25:26.831955 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d06ee632-6fed-4e8b-a8e1-db2f0d542f97-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-6crwl\" (UID: \"d06ee632-6fed-4e8b-a8e1-db2f0d542f97\") " pod="openstack/dnsmasq-dns-bbf5cc879-6crwl" Nov 29 07:25:26 crc kubenswrapper[4731]: I1129 07:25:26.831985 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d06ee632-6fed-4e8b-a8e1-db2f0d542f97-config\") pod \"dnsmasq-dns-bbf5cc879-6crwl\" (UID: \"d06ee632-6fed-4e8b-a8e1-db2f0d542f97\") " pod="openstack/dnsmasq-dns-bbf5cc879-6crwl" Nov 29 07:25:26 crc kubenswrapper[4731]: I1129 07:25:26.832064 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9af027cc-cbd4-4f3a-ad25-2ef5b126d590-combined-ca-bundle\") pod \"cinder-db-sync-zcx9z\" (UID: \"9af027cc-cbd4-4f3a-ad25-2ef5b126d590\") " pod="openstack/cinder-db-sync-zcx9z" Nov 29 07:25:26 crc kubenswrapper[4731]: I1129 07:25:26.832093 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9af027cc-cbd4-4f3a-ad25-2ef5b126d590-db-sync-config-data\") pod \"cinder-db-sync-zcx9z\" (UID: \"9af027cc-cbd4-4f3a-ad25-2ef5b126d590\") " pod="openstack/cinder-db-sync-zcx9z" Nov 29 
07:25:26 crc kubenswrapper[4731]: I1129 07:25:26.832119 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9af027cc-cbd4-4f3a-ad25-2ef5b126d590-scripts\") pod \"cinder-db-sync-zcx9z\" (UID: \"9af027cc-cbd4-4f3a-ad25-2ef5b126d590\") " pod="openstack/cinder-db-sync-zcx9z" Nov 29 07:25:26 crc kubenswrapper[4731]: I1129 07:25:26.832213 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9af027cc-cbd4-4f3a-ad25-2ef5b126d590-etc-machine-id\") pod \"cinder-db-sync-zcx9z\" (UID: \"9af027cc-cbd4-4f3a-ad25-2ef5b126d590\") " pod="openstack/cinder-db-sync-zcx9z" Nov 29 07:25:26 crc kubenswrapper[4731]: I1129 07:25:26.833442 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d06ee632-6fed-4e8b-a8e1-db2f0d542f97-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-6crwl\" (UID: \"d06ee632-6fed-4e8b-a8e1-db2f0d542f97\") " pod="openstack/dnsmasq-dns-bbf5cc879-6crwl" Nov 29 07:25:26 crc kubenswrapper[4731]: I1129 07:25:26.834039 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d06ee632-6fed-4e8b-a8e1-db2f0d542f97-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-6crwl\" (UID: \"d06ee632-6fed-4e8b-a8e1-db2f0d542f97\") " pod="openstack/dnsmasq-dns-bbf5cc879-6crwl" Nov 29 07:25:26 crc kubenswrapper[4731]: I1129 07:25:26.834700 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d06ee632-6fed-4e8b-a8e1-db2f0d542f97-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-6crwl\" (UID: \"d06ee632-6fed-4e8b-a8e1-db2f0d542f97\") " pod="openstack/dnsmasq-dns-bbf5cc879-6crwl" Nov 29 07:25:26 crc kubenswrapper[4731]: I1129 07:25:26.835727 4731 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d06ee632-6fed-4e8b-a8e1-db2f0d542f97-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-6crwl\" (UID: \"d06ee632-6fed-4e8b-a8e1-db2f0d542f97\") " pod="openstack/dnsmasq-dns-bbf5cc879-6crwl" Nov 29 07:25:26 crc kubenswrapper[4731]: I1129 07:25:26.839995 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d06ee632-6fed-4e8b-a8e1-db2f0d542f97-config\") pod \"dnsmasq-dns-bbf5cc879-6crwl\" (UID: \"d06ee632-6fed-4e8b-a8e1-db2f0d542f97\") " pod="openstack/dnsmasq-dns-bbf5cc879-6crwl" Nov 29 07:25:26 crc kubenswrapper[4731]: I1129 07:25:26.874790 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-676b8dc849-9dqb8"] Nov 29 07:25:26 crc kubenswrapper[4731]: I1129 07:25:26.876369 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-676b8dc849-9dqb8" Nov 29 07:25:26 crc kubenswrapper[4731]: I1129 07:25:26.888299 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Nov 29 07:25:26 crc kubenswrapper[4731]: I1129 07:25:26.888521 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-xzlfq" Nov 29 07:25:26 crc kubenswrapper[4731]: I1129 07:25:26.888804 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Nov 29 07:25:26 crc kubenswrapper[4731]: I1129 07:25:26.889148 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Nov 29 07:25:26 crc kubenswrapper[4731]: I1129 07:25:26.900639 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8h7g\" (UniqueName: \"kubernetes.io/projected/d06ee632-6fed-4e8b-a8e1-db2f0d542f97-kube-api-access-b8h7g\") pod \"dnsmasq-dns-bbf5cc879-6crwl\" (UID: \"d06ee632-6fed-4e8b-a8e1-db2f0d542f97\") " 
pod="openstack/dnsmasq-dns-bbf5cc879-6crwl" Nov 29 07:25:26 crc kubenswrapper[4731]: I1129 07:25:26.921379 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-qjjnr"] Nov 29 07:25:26 crc kubenswrapper[4731]: I1129 07:25:26.922769 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-qjjnr" Nov 29 07:25:26 crc kubenswrapper[4731]: I1129 07:25:26.934603 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-xmjfp" Nov 29 07:25:26 crc kubenswrapper[4731]: I1129 07:25:26.934957 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Nov 29 07:25:26 crc kubenswrapper[4731]: I1129 07:25:26.935819 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-6crwl" Nov 29 07:25:26 crc kubenswrapper[4731]: I1129 07:25:26.936022 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9af027cc-cbd4-4f3a-ad25-2ef5b126d590-combined-ca-bundle\") pod \"cinder-db-sync-zcx9z\" (UID: \"9af027cc-cbd4-4f3a-ad25-2ef5b126d590\") " pod="openstack/cinder-db-sync-zcx9z" Nov 29 07:25:26 crc kubenswrapper[4731]: I1129 07:25:26.936066 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9af027cc-cbd4-4f3a-ad25-2ef5b126d590-db-sync-config-data\") pod \"cinder-db-sync-zcx9z\" (UID: \"9af027cc-cbd4-4f3a-ad25-2ef5b126d590\") " pod="openstack/cinder-db-sync-zcx9z" Nov 29 07:25:26 crc kubenswrapper[4731]: I1129 07:25:26.936097 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9af027cc-cbd4-4f3a-ad25-2ef5b126d590-scripts\") pod \"cinder-db-sync-zcx9z\" (UID: \"9af027cc-cbd4-4f3a-ad25-2ef5b126d590\") " 
pod="openstack/cinder-db-sync-zcx9z" Nov 29 07:25:26 crc kubenswrapper[4731]: I1129 07:25:26.936125 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9af027cc-cbd4-4f3a-ad25-2ef5b126d590-etc-machine-id\") pod \"cinder-db-sync-zcx9z\" (UID: \"9af027cc-cbd4-4f3a-ad25-2ef5b126d590\") " pod="openstack/cinder-db-sync-zcx9z" Nov 29 07:25:26 crc kubenswrapper[4731]: I1129 07:25:26.936180 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9af027cc-cbd4-4f3a-ad25-2ef5b126d590-config-data\") pod \"cinder-db-sync-zcx9z\" (UID: \"9af027cc-cbd4-4f3a-ad25-2ef5b126d590\") " pod="openstack/cinder-db-sync-zcx9z" Nov 29 07:25:26 crc kubenswrapper[4731]: I1129 07:25:26.936241 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qh9fs\" (UniqueName: \"kubernetes.io/projected/9af027cc-cbd4-4f3a-ad25-2ef5b126d590-kube-api-access-qh9fs\") pod \"cinder-db-sync-zcx9z\" (UID: \"9af027cc-cbd4-4f3a-ad25-2ef5b126d590\") " pod="openstack/cinder-db-sync-zcx9z" Nov 29 07:25:26 crc kubenswrapper[4731]: I1129 07:25:26.937265 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9af027cc-cbd4-4f3a-ad25-2ef5b126d590-etc-machine-id\") pod \"cinder-db-sync-zcx9z\" (UID: \"9af027cc-cbd4-4f3a-ad25-2ef5b126d590\") " pod="openstack/cinder-db-sync-zcx9z" Nov 29 07:25:26 crc kubenswrapper[4731]: I1129 07:25:26.968180 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9af027cc-cbd4-4f3a-ad25-2ef5b126d590-combined-ca-bundle\") pod \"cinder-db-sync-zcx9z\" (UID: \"9af027cc-cbd4-4f3a-ad25-2ef5b126d590\") " pod="openstack/cinder-db-sync-zcx9z" Nov 29 07:25:26 crc kubenswrapper[4731]: I1129 07:25:26.973407 4731 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9af027cc-cbd4-4f3a-ad25-2ef5b126d590-db-sync-config-data\") pod \"cinder-db-sync-zcx9z\" (UID: \"9af027cc-cbd4-4f3a-ad25-2ef5b126d590\") " pod="openstack/cinder-db-sync-zcx9z" Nov 29 07:25:26 crc kubenswrapper[4731]: I1129 07:25:26.973564 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Nov 29 07:25:26 crc kubenswrapper[4731]: I1129 07:25:26.974377 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9af027cc-cbd4-4f3a-ad25-2ef5b126d590-scripts\") pod \"cinder-db-sync-zcx9z\" (UID: \"9af027cc-cbd4-4f3a-ad25-2ef5b126d590\") " pod="openstack/cinder-db-sync-zcx9z" Nov 29 07:25:26 crc kubenswrapper[4731]: I1129 07:25:26.979456 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9af027cc-cbd4-4f3a-ad25-2ef5b126d590-config-data\") pod \"cinder-db-sync-zcx9z\" (UID: \"9af027cc-cbd4-4f3a-ad25-2ef5b126d590\") " pod="openstack/cinder-db-sync-zcx9z" Nov 29 07:25:26 crc kubenswrapper[4731]: I1129 07:25:26.991337 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qh9fs\" (UniqueName: \"kubernetes.io/projected/9af027cc-cbd4-4f3a-ad25-2ef5b126d590-kube-api-access-qh9fs\") pod \"cinder-db-sync-zcx9z\" (UID: \"9af027cc-cbd4-4f3a-ad25-2ef5b126d590\") " pod="openstack/cinder-db-sync-zcx9z" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.042884 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-676b8dc849-9dqb8"] Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.052145 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b7e2529e-00f9-4933-a8f3-4fdc9f8c498f-horizon-secret-key\") pod \"horizon-676b8dc849-9dqb8\" 
(UID: \"b7e2529e-00f9-4933-a8f3-4fdc9f8c498f\") " pod="openstack/horizon-676b8dc849-9dqb8" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.052245 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b7e2529e-00f9-4933-a8f3-4fdc9f8c498f-scripts\") pod \"horizon-676b8dc849-9dqb8\" (UID: \"b7e2529e-00f9-4933-a8f3-4fdc9f8c498f\") " pod="openstack/horizon-676b8dc849-9dqb8" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.052274 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2d843330-ffae-4bc9-a8b3-c2df891a1aae-config\") pod \"neutron-db-sync-qjjnr\" (UID: \"2d843330-ffae-4bc9-a8b3-c2df891a1aae\") " pod="openstack/neutron-db-sync-qjjnr" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.052300 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8xbf\" (UniqueName: \"kubernetes.io/projected/2d843330-ffae-4bc9-a8b3-c2df891a1aae-kube-api-access-b8xbf\") pod \"neutron-db-sync-qjjnr\" (UID: \"2d843330-ffae-4bc9-a8b3-c2df891a1aae\") " pod="openstack/neutron-db-sync-qjjnr" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.052339 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d843330-ffae-4bc9-a8b3-c2df891a1aae-combined-ca-bundle\") pod \"neutron-db-sync-qjjnr\" (UID: \"2d843330-ffae-4bc9-a8b3-c2df891a1aae\") " pod="openstack/neutron-db-sync-qjjnr" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.052381 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b7e2529e-00f9-4933-a8f3-4fdc9f8c498f-config-data\") pod \"horizon-676b8dc849-9dqb8\" (UID: 
\"b7e2529e-00f9-4933-a8f3-4fdc9f8c498f\") " pod="openstack/horizon-676b8dc849-9dqb8" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.052404 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b7e2529e-00f9-4933-a8f3-4fdc9f8c498f-logs\") pod \"horizon-676b8dc849-9dqb8\" (UID: \"b7e2529e-00f9-4933-a8f3-4fdc9f8c498f\") " pod="openstack/horizon-676b8dc849-9dqb8" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.052426 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntf4g\" (UniqueName: \"kubernetes.io/projected/b7e2529e-00f9-4933-a8f3-4fdc9f8c498f-kube-api-access-ntf4g\") pod \"horizon-676b8dc849-9dqb8\" (UID: \"b7e2529e-00f9-4933-a8f3-4fdc9f8c498f\") " pod="openstack/horizon-676b8dc849-9dqb8" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.116420 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-qjjnr"] Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.131172 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-zcx9z" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.162697 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b7e2529e-00f9-4933-a8f3-4fdc9f8c498f-config-data\") pod \"horizon-676b8dc849-9dqb8\" (UID: \"b7e2529e-00f9-4933-a8f3-4fdc9f8c498f\") " pod="openstack/horizon-676b8dc849-9dqb8" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.162794 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b7e2529e-00f9-4933-a8f3-4fdc9f8c498f-logs\") pod \"horizon-676b8dc849-9dqb8\" (UID: \"b7e2529e-00f9-4933-a8f3-4fdc9f8c498f\") " pod="openstack/horizon-676b8dc849-9dqb8" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.162830 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntf4g\" (UniqueName: \"kubernetes.io/projected/b7e2529e-00f9-4933-a8f3-4fdc9f8c498f-kube-api-access-ntf4g\") pod \"horizon-676b8dc849-9dqb8\" (UID: \"b7e2529e-00f9-4933-a8f3-4fdc9f8c498f\") " pod="openstack/horizon-676b8dc849-9dqb8" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.162927 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b7e2529e-00f9-4933-a8f3-4fdc9f8c498f-horizon-secret-key\") pod \"horizon-676b8dc849-9dqb8\" (UID: \"b7e2529e-00f9-4933-a8f3-4fdc9f8c498f\") " pod="openstack/horizon-676b8dc849-9dqb8" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.162993 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b7e2529e-00f9-4933-a8f3-4fdc9f8c498f-scripts\") pod \"horizon-676b8dc849-9dqb8\" (UID: \"b7e2529e-00f9-4933-a8f3-4fdc9f8c498f\") " pod="openstack/horizon-676b8dc849-9dqb8" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 
07:25:27.163020 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2d843330-ffae-4bc9-a8b3-c2df891a1aae-config\") pod \"neutron-db-sync-qjjnr\" (UID: \"2d843330-ffae-4bc9-a8b3-c2df891a1aae\") " pod="openstack/neutron-db-sync-qjjnr" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.163047 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8xbf\" (UniqueName: \"kubernetes.io/projected/2d843330-ffae-4bc9-a8b3-c2df891a1aae-kube-api-access-b8xbf\") pod \"neutron-db-sync-qjjnr\" (UID: \"2d843330-ffae-4bc9-a8b3-c2df891a1aae\") " pod="openstack/neutron-db-sync-qjjnr" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.163097 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d843330-ffae-4bc9-a8b3-c2df891a1aae-combined-ca-bundle\") pod \"neutron-db-sync-qjjnr\" (UID: \"2d843330-ffae-4bc9-a8b3-c2df891a1aae\") " pod="openstack/neutron-db-sync-qjjnr" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.179655 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b7e2529e-00f9-4933-a8f3-4fdc9f8c498f-config-data\") pod \"horizon-676b8dc849-9dqb8\" (UID: \"b7e2529e-00f9-4933-a8f3-4fdc9f8c498f\") " pod="openstack/horizon-676b8dc849-9dqb8" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.180305 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b7e2529e-00f9-4933-a8f3-4fdc9f8c498f-scripts\") pod \"horizon-676b8dc849-9dqb8\" (UID: \"b7e2529e-00f9-4933-a8f3-4fdc9f8c498f\") " pod="openstack/horizon-676b8dc849-9dqb8" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.187873 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/b7e2529e-00f9-4933-a8f3-4fdc9f8c498f-logs\") pod \"horizon-676b8dc849-9dqb8\" (UID: \"b7e2529e-00f9-4933-a8f3-4fdc9f8c498f\") " pod="openstack/horizon-676b8dc849-9dqb8" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.204535 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.212394 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.224330 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8xbf\" (UniqueName: \"kubernetes.io/projected/2d843330-ffae-4bc9-a8b3-c2df891a1aae-kube-api-access-b8xbf\") pod \"neutron-db-sync-qjjnr\" (UID: \"2d843330-ffae-4bc9-a8b3-c2df891a1aae\") " pod="openstack/neutron-db-sync-qjjnr" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.231678 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d843330-ffae-4bc9-a8b3-c2df891a1aae-combined-ca-bundle\") pod \"neutron-db-sync-qjjnr\" (UID: \"2d843330-ffae-4bc9-a8b3-c2df891a1aae\") " pod="openstack/neutron-db-sync-qjjnr" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.232205 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.234111 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.234415 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/2d843330-ffae-4bc9-a8b3-c2df891a1aae-config\") pod \"neutron-db-sync-qjjnr\" (UID: \"2d843330-ffae-4bc9-a8b3-c2df891a1aae\") " pod="openstack/neutron-db-sync-qjjnr" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 
07:25:27.259274 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntf4g\" (UniqueName: \"kubernetes.io/projected/b7e2529e-00f9-4933-a8f3-4fdc9f8c498f-kube-api-access-ntf4g\") pod \"horizon-676b8dc849-9dqb8\" (UID: \"b7e2529e-00f9-4933-a8f3-4fdc9f8c498f\") " pod="openstack/horizon-676b8dc849-9dqb8" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.260811 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b7e2529e-00f9-4933-a8f3-4fdc9f8c498f-horizon-secret-key\") pod \"horizon-676b8dc849-9dqb8\" (UID: \"b7e2529e-00f9-4933-a8f3-4fdc9f8c498f\") " pod="openstack/horizon-676b8dc849-9dqb8" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.282173 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.307412 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-fbk9s"] Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.311209 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-fbk9s" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.327847 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.328191 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-txw5r" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.346066 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-676b8dc849-9dqb8" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.374664 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93f84d51-daf8-4c30-ba2c-e5d8aff3432c-config-data\") pod \"ceilometer-0\" (UID: \"93f84d51-daf8-4c30-ba2c-e5d8aff3432c\") " pod="openstack/ceilometer-0" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.374711 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/93f84d51-daf8-4c30-ba2c-e5d8aff3432c-run-httpd\") pod \"ceilometer-0\" (UID: \"93f84d51-daf8-4c30-ba2c-e5d8aff3432c\") " pod="openstack/ceilometer-0" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.374764 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhhpr\" (UniqueName: \"kubernetes.io/projected/93f84d51-daf8-4c30-ba2c-e5d8aff3432c-kube-api-access-qhhpr\") pod \"ceilometer-0\" (UID: \"93f84d51-daf8-4c30-ba2c-e5d8aff3432c\") " pod="openstack/ceilometer-0" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.374808 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/93f84d51-daf8-4c30-ba2c-e5d8aff3432c-log-httpd\") pod \"ceilometer-0\" (UID: \"93f84d51-daf8-4c30-ba2c-e5d8aff3432c\") " pod="openstack/ceilometer-0" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.374842 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93f84d51-daf8-4c30-ba2c-e5d8aff3432c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"93f84d51-daf8-4c30-ba2c-e5d8aff3432c\") " pod="openstack/ceilometer-0" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.374869 
4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/93f84d51-daf8-4c30-ba2c-e5d8aff3432c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"93f84d51-daf8-4c30-ba2c-e5d8aff3432c\") " pod="openstack/ceilometer-0" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.374974 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93f84d51-daf8-4c30-ba2c-e5d8aff3432c-scripts\") pod \"ceilometer-0\" (UID: \"93f84d51-daf8-4c30-ba2c-e5d8aff3432c\") " pod="openstack/ceilometer-0" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.377327 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-fbk9s"] Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.416621 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-qjjnr" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.451841 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-6crwl"] Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.479482 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/93f84d51-daf8-4c30-ba2c-e5d8aff3432c-log-httpd\") pod \"ceilometer-0\" (UID: \"93f84d51-daf8-4c30-ba2c-e5d8aff3432c\") " pod="openstack/ceilometer-0" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.479725 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93f84d51-daf8-4c30-ba2c-e5d8aff3432c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"93f84d51-daf8-4c30-ba2c-e5d8aff3432c\") " pod="openstack/ceilometer-0" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.479841 4731 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/93f84d51-daf8-4c30-ba2c-e5d8aff3432c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"93f84d51-daf8-4c30-ba2c-e5d8aff3432c\") " pod="openstack/ceilometer-0" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.483809 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a4589d89-a761-4510-bd4c-55a6a3e620c4-db-sync-config-data\") pod \"barbican-db-sync-fbk9s\" (UID: \"a4589d89-a761-4510-bd4c-55a6a3e620c4\") " pod="openstack/barbican-db-sync-fbk9s" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.484106 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93f84d51-daf8-4c30-ba2c-e5d8aff3432c-scripts\") pod \"ceilometer-0\" (UID: \"93f84d51-daf8-4c30-ba2c-e5d8aff3432c\") " pod="openstack/ceilometer-0" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.484299 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93f84d51-daf8-4c30-ba2c-e5d8aff3432c-config-data\") pod \"ceilometer-0\" (UID: \"93f84d51-daf8-4c30-ba2c-e5d8aff3432c\") " pod="openstack/ceilometer-0" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.484429 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/93f84d51-daf8-4c30-ba2c-e5d8aff3432c-run-httpd\") pod \"ceilometer-0\" (UID: \"93f84d51-daf8-4c30-ba2c-e5d8aff3432c\") " pod="openstack/ceilometer-0" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.484767 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4589d89-a761-4510-bd4c-55a6a3e620c4-combined-ca-bundle\") pod 
\"barbican-db-sync-fbk9s\" (UID: \"a4589d89-a761-4510-bd4c-55a6a3e620c4\") " pod="openstack/barbican-db-sync-fbk9s" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.484902 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhhpr\" (UniqueName: \"kubernetes.io/projected/93f84d51-daf8-4c30-ba2c-e5d8aff3432c-kube-api-access-qhhpr\") pod \"ceilometer-0\" (UID: \"93f84d51-daf8-4c30-ba2c-e5d8aff3432c\") " pod="openstack/ceilometer-0" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.492725 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dk5q5\" (UniqueName: \"kubernetes.io/projected/a4589d89-a761-4510-bd4c-55a6a3e620c4-kube-api-access-dk5q5\") pod \"barbican-db-sync-fbk9s\" (UID: \"a4589d89-a761-4510-bd4c-55a6a3e620c4\") " pod="openstack/barbican-db-sync-fbk9s" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.498817 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/93f84d51-daf8-4c30-ba2c-e5d8aff3432c-log-httpd\") pod \"ceilometer-0\" (UID: \"93f84d51-daf8-4c30-ba2c-e5d8aff3432c\") " pod="openstack/ceilometer-0" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.509203 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93f84d51-daf8-4c30-ba2c-e5d8aff3432c-config-data\") pod \"ceilometer-0\" (UID: \"93f84d51-daf8-4c30-ba2c-e5d8aff3432c\") " pod="openstack/ceilometer-0" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.513448 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/93f84d51-daf8-4c30-ba2c-e5d8aff3432c-run-httpd\") pod \"ceilometer-0\" (UID: \"93f84d51-daf8-4c30-ba2c-e5d8aff3432c\") " pod="openstack/ceilometer-0" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.520479 4731 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93f84d51-daf8-4c30-ba2c-e5d8aff3432c-scripts\") pod \"ceilometer-0\" (UID: \"93f84d51-daf8-4c30-ba2c-e5d8aff3432c\") " pod="openstack/ceilometer-0" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.523652 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/93f84d51-daf8-4c30-ba2c-e5d8aff3432c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"93f84d51-daf8-4c30-ba2c-e5d8aff3432c\") " pod="openstack/ceilometer-0" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.526518 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93f84d51-daf8-4c30-ba2c-e5d8aff3432c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"93f84d51-daf8-4c30-ba2c-e5d8aff3432c\") " pod="openstack/ceilometer-0" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.528659 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7c4c555987-wx7mp"] Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.530760 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7c4c555987-wx7mp" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.538918 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhhpr\" (UniqueName: \"kubernetes.io/projected/93f84d51-daf8-4c30-ba2c-e5d8aff3432c-kube-api-access-qhhpr\") pod \"ceilometer-0\" (UID: \"93f84d51-daf8-4c30-ba2c-e5d8aff3432c\") " pod="openstack/ceilometer-0" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.558780 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-x6bxr"] Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.560195 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-x6bxr" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.571880 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.575640 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.575894 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-mpvkt" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.601956 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4589d89-a761-4510-bd4c-55a6a3e620c4-combined-ca-bundle\") pod \"barbican-db-sync-fbk9s\" (UID: \"a4589d89-a761-4510-bd4c-55a6a3e620c4\") " pod="openstack/barbican-db-sync-fbk9s" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.602009 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dk5q5\" (UniqueName: \"kubernetes.io/projected/a4589d89-a761-4510-bd4c-55a6a3e620c4-kube-api-access-dk5q5\") pod \"barbican-db-sync-fbk9s\" (UID: \"a4589d89-a761-4510-bd4c-55a6a3e620c4\") " pod="openstack/barbican-db-sync-fbk9s" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.602082 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a4589d89-a761-4510-bd4c-55a6a3e620c4-db-sync-config-data\") pod \"barbican-db-sync-fbk9s\" (UID: \"a4589d89-a761-4510-bd4c-55a6a3e620c4\") " pod="openstack/barbican-db-sync-fbk9s" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.612828 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.622591 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4589d89-a761-4510-bd4c-55a6a3e620c4-combined-ca-bundle\") pod \"barbican-db-sync-fbk9s\" (UID: \"a4589d89-a761-4510-bd4c-55a6a3e620c4\") " pod="openstack/barbican-db-sync-fbk9s" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.623046 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a4589d89-a761-4510-bd4c-55a6a3e620c4-db-sync-config-data\") pod \"barbican-db-sync-fbk9s\" (UID: \"a4589d89-a761-4510-bd4c-55a6a3e620c4\") " pod="openstack/barbican-db-sync-fbk9s" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.645666 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7c4c555987-wx7mp"] Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.652818 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dk5q5\" (UniqueName: \"kubernetes.io/projected/a4589d89-a761-4510-bd4c-55a6a3e620c4-kube-api-access-dk5q5\") pod \"barbican-db-sync-fbk9s\" (UID: \"a4589d89-a761-4510-bd4c-55a6a3e620c4\") " pod="openstack/barbican-db-sync-fbk9s" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.680134 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-fbk9s" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.699997 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-x6bxr"] Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.703237 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13bcd648-c6e2-4b6e-a660-da2f47f09a06-config-data\") pod \"placement-db-sync-x6bxr\" (UID: \"13bcd648-c6e2-4b6e-a660-da2f47f09a06\") " pod="openstack/placement-db-sync-x6bxr" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.703272 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hd5r7\" (UniqueName: \"kubernetes.io/projected/13bcd648-c6e2-4b6e-a660-da2f47f09a06-kube-api-access-hd5r7\") pod \"placement-db-sync-x6bxr\" (UID: \"13bcd648-c6e2-4b6e-a660-da2f47f09a06\") " pod="openstack/placement-db-sync-x6bxr" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.703306 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/13bcd648-c6e2-4b6e-a660-da2f47f09a06-logs\") pod \"placement-db-sync-x6bxr\" (UID: \"13bcd648-c6e2-4b6e-a660-da2f47f09a06\") " pod="openstack/placement-db-sync-x6bxr" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.703332 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtrhq\" (UniqueName: \"kubernetes.io/projected/1f57a857-cd62-458f-9a5f-1451ff9d5628-kube-api-access-rtrhq\") pod \"horizon-7c4c555987-wx7mp\" (UID: \"1f57a857-cd62-458f-9a5f-1451ff9d5628\") " pod="openstack/horizon-7c4c555987-wx7mp" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.703394 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/13bcd648-c6e2-4b6e-a660-da2f47f09a06-scripts\") pod \"placement-db-sync-x6bxr\" (UID: \"13bcd648-c6e2-4b6e-a660-da2f47f09a06\") " pod="openstack/placement-db-sync-x6bxr" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.703442 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1f57a857-cd62-458f-9a5f-1451ff9d5628-horizon-secret-key\") pod \"horizon-7c4c555987-wx7mp\" (UID: \"1f57a857-cd62-458f-9a5f-1451ff9d5628\") " pod="openstack/horizon-7c4c555987-wx7mp" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.703468 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13bcd648-c6e2-4b6e-a660-da2f47f09a06-combined-ca-bundle\") pod \"placement-db-sync-x6bxr\" (UID: \"13bcd648-c6e2-4b6e-a660-da2f47f09a06\") " pod="openstack/placement-db-sync-x6bxr" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.703487 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1f57a857-cd62-458f-9a5f-1451ff9d5628-config-data\") pod \"horizon-7c4c555987-wx7mp\" (UID: \"1f57a857-cd62-458f-9a5f-1451ff9d5628\") " pod="openstack/horizon-7c4c555987-wx7mp" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.703521 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f57a857-cd62-458f-9a5f-1451ff9d5628-logs\") pod \"horizon-7c4c555987-wx7mp\" (UID: \"1f57a857-cd62-458f-9a5f-1451ff9d5628\") " pod="openstack/horizon-7c4c555987-wx7mp" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.703540 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/1f57a857-cd62-458f-9a5f-1451ff9d5628-scripts\") pod \"horizon-7c4c555987-wx7mp\" (UID: \"1f57a857-cd62-458f-9a5f-1451ff9d5628\") " pod="openstack/horizon-7c4c555987-wx7mp" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.762037 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-hvz7k"] Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.764001 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-hvz7k" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.804831 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhgtk\" (UniqueName: \"kubernetes.io/projected/895a9751-f534-47b7-8e60-f10a608dd46e-kube-api-access-mhgtk\") pod \"dnsmasq-dns-56df8fb6b7-hvz7k\" (UID: \"895a9751-f534-47b7-8e60-f10a608dd46e\") " pod="openstack/dnsmasq-dns-56df8fb6b7-hvz7k" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.804921 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/13bcd648-c6e2-4b6e-a660-da2f47f09a06-scripts\") pod \"placement-db-sync-x6bxr\" (UID: \"13bcd648-c6e2-4b6e-a660-da2f47f09a06\") " pod="openstack/placement-db-sync-x6bxr" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.804953 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/895a9751-f534-47b7-8e60-f10a608dd46e-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-hvz7k\" (UID: \"895a9751-f534-47b7-8e60-f10a608dd46e\") " pod="openstack/dnsmasq-dns-56df8fb6b7-hvz7k" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.804982 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/895a9751-f534-47b7-8e60-f10a608dd46e-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-hvz7k\" (UID: \"895a9751-f534-47b7-8e60-f10a608dd46e\") " pod="openstack/dnsmasq-dns-56df8fb6b7-hvz7k" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.805055 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1f57a857-cd62-458f-9a5f-1451ff9d5628-horizon-secret-key\") pod \"horizon-7c4c555987-wx7mp\" (UID: \"1f57a857-cd62-458f-9a5f-1451ff9d5628\") " pod="openstack/horizon-7c4c555987-wx7mp" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.805088 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/895a9751-f534-47b7-8e60-f10a608dd46e-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-hvz7k\" (UID: \"895a9751-f534-47b7-8e60-f10a608dd46e\") " pod="openstack/dnsmasq-dns-56df8fb6b7-hvz7k" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.805118 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13bcd648-c6e2-4b6e-a660-da2f47f09a06-combined-ca-bundle\") pod \"placement-db-sync-x6bxr\" (UID: \"13bcd648-c6e2-4b6e-a660-da2f47f09a06\") " pod="openstack/placement-db-sync-x6bxr" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.805146 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1f57a857-cd62-458f-9a5f-1451ff9d5628-config-data\") pod \"horizon-7c4c555987-wx7mp\" (UID: \"1f57a857-cd62-458f-9a5f-1451ff9d5628\") " pod="openstack/horizon-7c4c555987-wx7mp" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.805192 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f57a857-cd62-458f-9a5f-1451ff9d5628-logs\") pod 
\"horizon-7c4c555987-wx7mp\" (UID: \"1f57a857-cd62-458f-9a5f-1451ff9d5628\") " pod="openstack/horizon-7c4c555987-wx7mp" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.805220 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1f57a857-cd62-458f-9a5f-1451ff9d5628-scripts\") pod \"horizon-7c4c555987-wx7mp\" (UID: \"1f57a857-cd62-458f-9a5f-1451ff9d5628\") " pod="openstack/horizon-7c4c555987-wx7mp" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.805256 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13bcd648-c6e2-4b6e-a660-da2f47f09a06-config-data\") pod \"placement-db-sync-x6bxr\" (UID: \"13bcd648-c6e2-4b6e-a660-da2f47f09a06\") " pod="openstack/placement-db-sync-x6bxr" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.805276 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hd5r7\" (UniqueName: \"kubernetes.io/projected/13bcd648-c6e2-4b6e-a660-da2f47f09a06-kube-api-access-hd5r7\") pod \"placement-db-sync-x6bxr\" (UID: \"13bcd648-c6e2-4b6e-a660-da2f47f09a06\") " pod="openstack/placement-db-sync-x6bxr" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.805304 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/13bcd648-c6e2-4b6e-a660-da2f47f09a06-logs\") pod \"placement-db-sync-x6bxr\" (UID: \"13bcd648-c6e2-4b6e-a660-da2f47f09a06\") " pod="openstack/placement-db-sync-x6bxr" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.805336 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtrhq\" (UniqueName: \"kubernetes.io/projected/1f57a857-cd62-458f-9a5f-1451ff9d5628-kube-api-access-rtrhq\") pod \"horizon-7c4c555987-wx7mp\" (UID: \"1f57a857-cd62-458f-9a5f-1451ff9d5628\") " 
pod="openstack/horizon-7c4c555987-wx7mp" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.805385 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/895a9751-f534-47b7-8e60-f10a608dd46e-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-hvz7k\" (UID: \"895a9751-f534-47b7-8e60-f10a608dd46e\") " pod="openstack/dnsmasq-dns-56df8fb6b7-hvz7k" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.805422 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/895a9751-f534-47b7-8e60-f10a608dd46e-config\") pod \"dnsmasq-dns-56df8fb6b7-hvz7k\" (UID: \"895a9751-f534-47b7-8e60-f10a608dd46e\") " pod="openstack/dnsmasq-dns-56df8fb6b7-hvz7k" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.811259 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.812705 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f57a857-cd62-458f-9a5f-1451ff9d5628-logs\") pod \"horizon-7c4c555987-wx7mp\" (UID: \"1f57a857-cd62-458f-9a5f-1451ff9d5628\") " pod="openstack/horizon-7c4c555987-wx7mp" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.813430 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.814653 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1f57a857-cd62-458f-9a5f-1451ff9d5628-scripts\") pod \"horizon-7c4c555987-wx7mp\" (UID: \"1f57a857-cd62-458f-9a5f-1451ff9d5628\") " pod="openstack/horizon-7c4c555987-wx7mp" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.815442 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/13bcd648-c6e2-4b6e-a660-da2f47f09a06-logs\") pod \"placement-db-sync-x6bxr\" (UID: \"13bcd648-c6e2-4b6e-a660-da2f47f09a06\") " pod="openstack/placement-db-sync-x6bxr" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.818119 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1f57a857-cd62-458f-9a5f-1451ff9d5628-config-data\") pod \"horizon-7c4c555987-wx7mp\" (UID: \"1f57a857-cd62-458f-9a5f-1451ff9d5628\") " pod="openstack/horizon-7c4c555987-wx7mp" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.826465 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-n6x8c" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.826746 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.826902 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.827070 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.840321 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/13bcd648-c6e2-4b6e-a660-da2f47f09a06-scripts\") pod \"placement-db-sync-x6bxr\" (UID: \"13bcd648-c6e2-4b6e-a660-da2f47f09a06\") " pod="openstack/placement-db-sync-x6bxr" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.846275 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1f57a857-cd62-458f-9a5f-1451ff9d5628-horizon-secret-key\") pod \"horizon-7c4c555987-wx7mp\" (UID: \"1f57a857-cd62-458f-9a5f-1451ff9d5628\") " pod="openstack/horizon-7c4c555987-wx7mp" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.851488 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13bcd648-c6e2-4b6e-a660-da2f47f09a06-config-data\") pod \"placement-db-sync-x6bxr\" (UID: \"13bcd648-c6e2-4b6e-a660-da2f47f09a06\") " pod="openstack/placement-db-sync-x6bxr" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.852024 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13bcd648-c6e2-4b6e-a660-da2f47f09a06-combined-ca-bundle\") pod \"placement-db-sync-x6bxr\" (UID: \"13bcd648-c6e2-4b6e-a660-da2f47f09a06\") " pod="openstack/placement-db-sync-x6bxr" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.870976 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-hvz7k"] Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.906803 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/895a9751-f534-47b7-8e60-f10a608dd46e-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-hvz7k\" (UID: \"895a9751-f534-47b7-8e60-f10a608dd46e\") " pod="openstack/dnsmasq-dns-56df8fb6b7-hvz7k" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.906857 4731 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/895a9751-f534-47b7-8e60-f10a608dd46e-config\") pod \"dnsmasq-dns-56df8fb6b7-hvz7k\" (UID: \"895a9751-f534-47b7-8e60-f10a608dd46e\") " pod="openstack/dnsmasq-dns-56df8fb6b7-hvz7k" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.906883 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dabc1a57-987c-452a-bb15-b26368e6cab2-config-data\") pod \"glance-default-external-api-0\" (UID: \"dabc1a57-987c-452a-bb15-b26368e6cab2\") " pod="openstack/glance-default-external-api-0" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.906913 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhgtk\" (UniqueName: \"kubernetes.io/projected/895a9751-f534-47b7-8e60-f10a608dd46e-kube-api-access-mhgtk\") pod \"dnsmasq-dns-56df8fb6b7-hvz7k\" (UID: \"895a9751-f534-47b7-8e60-f10a608dd46e\") " pod="openstack/dnsmasq-dns-56df8fb6b7-hvz7k" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.906932 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dabc1a57-987c-452a-bb15-b26368e6cab2-scripts\") pod \"glance-default-external-api-0\" (UID: \"dabc1a57-987c-452a-bb15-b26368e6cab2\") " pod="openstack/glance-default-external-api-0" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.906958 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/895a9751-f534-47b7-8e60-f10a608dd46e-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-hvz7k\" (UID: \"895a9751-f534-47b7-8e60-f10a608dd46e\") " pod="openstack/dnsmasq-dns-56df8fb6b7-hvz7k" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.906980 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/895a9751-f534-47b7-8e60-f10a608dd46e-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-hvz7k\" (UID: \"895a9751-f534-47b7-8e60-f10a608dd46e\") " pod="openstack/dnsmasq-dns-56df8fb6b7-hvz7k" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.907021 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dabc1a57-987c-452a-bb15-b26368e6cab2-logs\") pod \"glance-default-external-api-0\" (UID: \"dabc1a57-987c-452a-bb15-b26368e6cab2\") " pod="openstack/glance-default-external-api-0" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.907064 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/895a9751-f534-47b7-8e60-f10a608dd46e-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-hvz7k\" (UID: \"895a9751-f534-47b7-8e60-f10a608dd46e\") " pod="openstack/dnsmasq-dns-56df8fb6b7-hvz7k" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.907130 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"dabc1a57-987c-452a-bb15-b26368e6cab2\") " pod="openstack/glance-default-external-api-0" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.907165 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n27fm\" (UniqueName: \"kubernetes.io/projected/dabc1a57-987c-452a-bb15-b26368e6cab2-kube-api-access-n27fm\") pod \"glance-default-external-api-0\" (UID: \"dabc1a57-987c-452a-bb15-b26368e6cab2\") " pod="openstack/glance-default-external-api-0" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.907193 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/dabc1a57-987c-452a-bb15-b26368e6cab2-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"dabc1a57-987c-452a-bb15-b26368e6cab2\") " pod="openstack/glance-default-external-api-0" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.907227 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dabc1a57-987c-452a-bb15-b26368e6cab2-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"dabc1a57-987c-452a-bb15-b26368e6cab2\") " pod="openstack/glance-default-external-api-0" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.907250 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dabc1a57-987c-452a-bb15-b26368e6cab2-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"dabc1a57-987c-452a-bb15-b26368e6cab2\") " pod="openstack/glance-default-external-api-0" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.908512 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hd5r7\" (UniqueName: \"kubernetes.io/projected/13bcd648-c6e2-4b6e-a660-da2f47f09a06-kube-api-access-hd5r7\") pod \"placement-db-sync-x6bxr\" (UID: \"13bcd648-c6e2-4b6e-a660-da2f47f09a06\") " pod="openstack/placement-db-sync-x6bxr" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.910270 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtrhq\" (UniqueName: \"kubernetes.io/projected/1f57a857-cd62-458f-9a5f-1451ff9d5628-kube-api-access-rtrhq\") pod \"horizon-7c4c555987-wx7mp\" (UID: \"1f57a857-cd62-458f-9a5f-1451ff9d5628\") " pod="openstack/horizon-7c4c555987-wx7mp" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.920552 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/895a9751-f534-47b7-8e60-f10a608dd46e-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-hvz7k\" (UID: \"895a9751-f534-47b7-8e60-f10a608dd46e\") " pod="openstack/dnsmasq-dns-56df8fb6b7-hvz7k" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.920612 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/895a9751-f534-47b7-8e60-f10a608dd46e-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-hvz7k\" (UID: \"895a9751-f534-47b7-8e60-f10a608dd46e\") " pod="openstack/dnsmasq-dns-56df8fb6b7-hvz7k" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.926997 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/895a9751-f534-47b7-8e60-f10a608dd46e-config\") pod \"dnsmasq-dns-56df8fb6b7-hvz7k\" (UID: \"895a9751-f534-47b7-8e60-f10a608dd46e\") " pod="openstack/dnsmasq-dns-56df8fb6b7-hvz7k" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.927440 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/895a9751-f534-47b7-8e60-f10a608dd46e-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-hvz7k\" (UID: \"895a9751-f534-47b7-8e60-f10a608dd46e\") " pod="openstack/dnsmasq-dns-56df8fb6b7-hvz7k" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.927616 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/895a9751-f534-47b7-8e60-f10a608dd46e-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-hvz7k\" (UID: \"895a9751-f534-47b7-8e60-f10a608dd46e\") " pod="openstack/dnsmasq-dns-56df8fb6b7-hvz7k" Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.994585 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 29 07:25:27 crc kubenswrapper[4731]: I1129 07:25:27.996168 4731 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-mhgtk\" (UniqueName: \"kubernetes.io/projected/895a9751-f534-47b7-8e60-f10a608dd46e-kube-api-access-mhgtk\") pod \"dnsmasq-dns-56df8fb6b7-hvz7k\" (UID: \"895a9751-f534-47b7-8e60-f10a608dd46e\") " pod="openstack/dnsmasq-dns-56df8fb6b7-hvz7k" Nov 29 07:25:28 crc kubenswrapper[4731]: I1129 07:25:28.009053 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"dabc1a57-987c-452a-bb15-b26368e6cab2\") " pod="openstack/glance-default-external-api-0" Nov 29 07:25:28 crc kubenswrapper[4731]: I1129 07:25:28.009099 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n27fm\" (UniqueName: \"kubernetes.io/projected/dabc1a57-987c-452a-bb15-b26368e6cab2-kube-api-access-n27fm\") pod \"glance-default-external-api-0\" (UID: \"dabc1a57-987c-452a-bb15-b26368e6cab2\") " pod="openstack/glance-default-external-api-0" Nov 29 07:25:28 crc kubenswrapper[4731]: I1129 07:25:28.009128 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dabc1a57-987c-452a-bb15-b26368e6cab2-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"dabc1a57-987c-452a-bb15-b26368e6cab2\") " pod="openstack/glance-default-external-api-0" Nov 29 07:25:28 crc kubenswrapper[4731]: I1129 07:25:28.009160 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dabc1a57-987c-452a-bb15-b26368e6cab2-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"dabc1a57-987c-452a-bb15-b26368e6cab2\") " pod="openstack/glance-default-external-api-0" Nov 29 07:25:28 crc kubenswrapper[4731]: I1129 07:25:28.009181 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" 
(UniqueName: \"kubernetes.io/empty-dir/dabc1a57-987c-452a-bb15-b26368e6cab2-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"dabc1a57-987c-452a-bb15-b26368e6cab2\") " pod="openstack/glance-default-external-api-0" Nov 29 07:25:28 crc kubenswrapper[4731]: I1129 07:25:28.009255 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dabc1a57-987c-452a-bb15-b26368e6cab2-config-data\") pod \"glance-default-external-api-0\" (UID: \"dabc1a57-987c-452a-bb15-b26368e6cab2\") " pod="openstack/glance-default-external-api-0" Nov 29 07:25:28 crc kubenswrapper[4731]: I1129 07:25:28.009282 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dabc1a57-987c-452a-bb15-b26368e6cab2-scripts\") pod \"glance-default-external-api-0\" (UID: \"dabc1a57-987c-452a-bb15-b26368e6cab2\") " pod="openstack/glance-default-external-api-0" Nov 29 07:25:28 crc kubenswrapper[4731]: I1129 07:25:28.009328 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dabc1a57-987c-452a-bb15-b26368e6cab2-logs\") pod \"glance-default-external-api-0\" (UID: \"dabc1a57-987c-452a-bb15-b26368e6cab2\") " pod="openstack/glance-default-external-api-0" Nov 29 07:25:28 crc kubenswrapper[4731]: I1129 07:25:28.010110 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dabc1a57-987c-452a-bb15-b26368e6cab2-logs\") pod \"glance-default-external-api-0\" (UID: \"dabc1a57-987c-452a-bb15-b26368e6cab2\") " pod="openstack/glance-default-external-api-0" Nov 29 07:25:28 crc kubenswrapper[4731]: I1129 07:25:28.011161 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dabc1a57-987c-452a-bb15-b26368e6cab2-httpd-run\") pod \"glance-default-external-api-0\" (UID: 
\"dabc1a57-987c-452a-bb15-b26368e6cab2\") " pod="openstack/glance-default-external-api-0" Nov 29 07:25:28 crc kubenswrapper[4731]: I1129 07:25:28.012523 4731 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"dabc1a57-987c-452a-bb15-b26368e6cab2\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-external-api-0" Nov 29 07:25:28 crc kubenswrapper[4731]: I1129 07:25:28.054945 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dabc1a57-987c-452a-bb15-b26368e6cab2-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"dabc1a57-987c-452a-bb15-b26368e6cab2\") " pod="openstack/glance-default-external-api-0" Nov 29 07:25:28 crc kubenswrapper[4731]: I1129 07:25:28.055034 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 29 07:25:28 crc kubenswrapper[4731]: I1129 07:25:28.061539 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dabc1a57-987c-452a-bb15-b26368e6cab2-scripts\") pod \"glance-default-external-api-0\" (UID: \"dabc1a57-987c-452a-bb15-b26368e6cab2\") " pod="openstack/glance-default-external-api-0" Nov 29 07:25:28 crc kubenswrapper[4731]: I1129 07:25:28.062290 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 29 07:25:28 crc kubenswrapper[4731]: I1129 07:25:28.071546 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 29 07:25:28 crc kubenswrapper[4731]: I1129 07:25:28.072136 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n27fm\" (UniqueName: \"kubernetes.io/projected/dabc1a57-987c-452a-bb15-b26368e6cab2-kube-api-access-n27fm\") pod \"glance-default-external-api-0\" (UID: \"dabc1a57-987c-452a-bb15-b26368e6cab2\") " pod="openstack/glance-default-external-api-0" Nov 29 07:25:28 crc kubenswrapper[4731]: I1129 07:25:28.072304 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 29 07:25:28 crc kubenswrapper[4731]: I1129 07:25:28.079472 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 29 07:25:28 crc kubenswrapper[4731]: I1129 07:25:28.081212 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"dabc1a57-987c-452a-bb15-b26368e6cab2\") " pod="openstack/glance-default-external-api-0" Nov 29 07:25:28 crc kubenswrapper[4731]: I1129 07:25:28.087696 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dabc1a57-987c-452a-bb15-b26368e6cab2-config-data\") pod \"glance-default-external-api-0\" (UID: \"dabc1a57-987c-452a-bb15-b26368e6cab2\") " pod="openstack/glance-default-external-api-0" Nov 29 07:25:28 crc kubenswrapper[4731]: I1129 07:25:28.092389 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dabc1a57-987c-452a-bb15-b26368e6cab2-public-tls-certs\") pod 
\"glance-default-external-api-0\" (UID: \"dabc1a57-987c-452a-bb15-b26368e6cab2\") " pod="openstack/glance-default-external-api-0" Nov 29 07:25:28 crc kubenswrapper[4731]: I1129 07:25:28.193626 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-kb6sz"] Nov 29 07:25:28 crc kubenswrapper[4731]: I1129 07:25:28.194423 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7c4c555987-wx7mp" Nov 29 07:25:28 crc kubenswrapper[4731]: I1129 07:25:28.203008 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-x6bxr" Nov 29 07:25:28 crc kubenswrapper[4731]: I1129 07:25:28.215670 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f-logs\") pod \"glance-default-internal-api-0\" (UID: \"f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:25:28 crc kubenswrapper[4731]: I1129 07:25:28.215731 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:25:28 crc kubenswrapper[4731]: I1129 07:25:28.215760 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:25:28 crc kubenswrapper[4731]: I1129 07:25:28.215836 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:25:28 crc kubenswrapper[4731]: I1129 07:25:28.215862 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:25:28 crc kubenswrapper[4731]: I1129 07:25:28.215889 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:25:28 crc kubenswrapper[4731]: I1129 07:25:28.215925 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcw6z\" (UniqueName: \"kubernetes.io/projected/f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f-kube-api-access-tcw6z\") pod \"glance-default-internal-api-0\" (UID: \"f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:25:28 crc kubenswrapper[4731]: I1129 07:25:28.216009 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:25:28 crc kubenswrapper[4731]: I1129 07:25:28.272417 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-db-sync-zcx9z" event={"ID":"9af027cc-cbd4-4f3a-ad25-2ef5b126d590","Type":"ContainerStarted","Data":"3d88a9039cb5d56b1c701e6b7cef4e3898d10233c100b9cc94ca8f66f68cb19c"} Nov 29 07:25:28 crc kubenswrapper[4731]: I1129 07:25:28.275129 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-kb6sz" event={"ID":"38f8e9cf-be31-447d-9e2f-0efad4bc3703","Type":"ContainerStarted","Data":"259082b715a29211f8f9408a3e5123b67c7c6fdc87db56f7cf151e2caf20ff02"} Nov 29 07:25:28 crc kubenswrapper[4731]: I1129 07:25:28.275870 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5f59b8f679-rph6r" podUID="6db0f464-980b-441d-aa98-fcddc7d4fd49" containerName="dnsmasq-dns" containerID="cri-o://a09eb9953f8560b2bad4a5fbeecd4030ecaf3bbcf7c5f4207d0759b7b75e4ef5" gracePeriod=10 Nov 29 07:25:28 crc kubenswrapper[4731]: I1129 07:25:28.320511 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f-logs\") pod \"glance-default-internal-api-0\" (UID: \"f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:25:28 crc kubenswrapper[4731]: I1129 07:25:28.320625 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:25:28 crc kubenswrapper[4731]: I1129 07:25:28.320663 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f\") " pod="openstack/glance-default-internal-api-0" 
Nov 29 07:25:28 crc kubenswrapper[4731]: I1129 07:25:28.320776 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:25:28 crc kubenswrapper[4731]: I1129 07:25:28.320807 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:25:28 crc kubenswrapper[4731]: I1129 07:25:28.320847 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:25:28 crc kubenswrapper[4731]: I1129 07:25:28.321042 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tcw6z\" (UniqueName: \"kubernetes.io/projected/f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f-kube-api-access-tcw6z\") pod \"glance-default-internal-api-0\" (UID: \"f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:25:28 crc kubenswrapper[4731]: I1129 07:25:28.321893 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:25:28 crc kubenswrapper[4731]: I1129 07:25:28.325505 4731 
operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-internal-api-0" Nov 29 07:25:28 crc kubenswrapper[4731]: I1129 07:25:28.326069 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f-logs\") pod \"glance-default-internal-api-0\" (UID: \"f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:25:28 crc kubenswrapper[4731]: I1129 07:25:28.328632 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-zcx9z"] Nov 29 07:25:28 crc kubenswrapper[4731]: I1129 07:25:28.330374 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:25:28 crc kubenswrapper[4731]: I1129 07:25:28.331007 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:25:28 crc kubenswrapper[4731]: I1129 07:25:28.331102 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f\") " pod="openstack/glance-default-internal-api-0" Nov 29 
07:25:28 crc kubenswrapper[4731]: I1129 07:25:28.352454 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tcw6z\" (UniqueName: \"kubernetes.io/projected/f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f-kube-api-access-tcw6z\") pod \"glance-default-internal-api-0\" (UID: \"f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:25:28 crc kubenswrapper[4731]: I1129 07:25:28.356798 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:25:28 crc kubenswrapper[4731]: I1129 07:25:28.362874 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:25:28 crc kubenswrapper[4731]: I1129 07:25:28.363942 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-hvz7k" Nov 29 07:25:28 crc kubenswrapper[4731]: I1129 07:25:28.430483 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:25:28 crc kubenswrapper[4731]: I1129 07:25:28.466168 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 29 07:25:28 crc kubenswrapper[4731]: I1129 07:25:28.488668 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 29 07:25:28 crc kubenswrapper[4731]: I1129 07:25:28.746932 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-qjjnr"] Nov 29 07:25:28 crc kubenswrapper[4731]: I1129 07:25:28.770173 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-676b8dc849-9dqb8"] Nov 29 07:25:29 crc kubenswrapper[4731]: I1129 07:25:29.151291 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-fbk9s"] Nov 29 07:25:29 crc kubenswrapper[4731]: I1129 07:25:29.184687 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-6crwl"] Nov 29 07:25:29 crc kubenswrapper[4731]: I1129 07:25:29.209644 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:25:29 crc kubenswrapper[4731]: I1129 07:25:29.219880 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-rph6r" Nov 29 07:25:29 crc kubenswrapper[4731]: W1129 07:25:29.242714 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd06ee632_6fed_4e8b_a8e1_db2f0d542f97.slice/crio-8cb7aaeb042b7dd4bb6e77d3384ca3aa2c5509e97f00b15b9b92749c98e2159b WatchSource:0}: Error finding container 8cb7aaeb042b7dd4bb6e77d3384ca3aa2c5509e97f00b15b9b92749c98e2159b: Status 404 returned error can't find the container with id 8cb7aaeb042b7dd4bb6e77d3384ca3aa2c5509e97f00b15b9b92749c98e2159b Nov 29 07:25:29 crc kubenswrapper[4731]: W1129 07:25:29.250221 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod93f84d51_daf8_4c30_ba2c_e5d8aff3432c.slice/crio-67ddea95ca4c467d78bbe82656f436a23274e9ce491484f0cca64bb254d3ceb9 WatchSource:0}: Error finding container 
67ddea95ca4c467d78bbe82656f436a23274e9ce491484f0cca64bb254d3ceb9: Status 404 returned error can't find the container with id 67ddea95ca4c467d78bbe82656f436a23274e9ce491484f0cca64bb254d3ceb9 Nov 29 07:25:29 crc kubenswrapper[4731]: I1129 07:25:29.306839 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-676b8dc849-9dqb8" event={"ID":"b7e2529e-00f9-4933-a8f3-4fdc9f8c498f","Type":"ContainerStarted","Data":"c5d5f11cf7c7d40f65476ebc829b3617bd989fd19db46f5947a9aa057c807875"} Nov 29 07:25:29 crc kubenswrapper[4731]: I1129 07:25:29.320980 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-fbk9s" event={"ID":"a4589d89-a761-4510-bd4c-55a6a3e620c4","Type":"ContainerStarted","Data":"61c4b4d82607b50c5ed4236f757d643e8a941cf75f02d4770f893cff31661f79"} Nov 29 07:25:29 crc kubenswrapper[4731]: I1129 07:25:29.324672 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"93f84d51-daf8-4c30-ba2c-e5d8aff3432c","Type":"ContainerStarted","Data":"67ddea95ca4c467d78bbe82656f436a23274e9ce491484f0cca64bb254d3ceb9"} Nov 29 07:25:29 crc kubenswrapper[4731]: I1129 07:25:29.330885 4731 generic.go:334] "Generic (PLEG): container finished" podID="6db0f464-980b-441d-aa98-fcddc7d4fd49" containerID="a09eb9953f8560b2bad4a5fbeecd4030ecaf3bbcf7c5f4207d0759b7b75e4ef5" exitCode=0 Nov 29 07:25:29 crc kubenswrapper[4731]: I1129 07:25:29.331014 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-rph6r" event={"ID":"6db0f464-980b-441d-aa98-fcddc7d4fd49","Type":"ContainerDied","Data":"a09eb9953f8560b2bad4a5fbeecd4030ecaf3bbcf7c5f4207d0759b7b75e4ef5"} Nov 29 07:25:29 crc kubenswrapper[4731]: I1129 07:25:29.331078 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-rph6r" event={"ID":"6db0f464-980b-441d-aa98-fcddc7d4fd49","Type":"ContainerDied","Data":"649f8f960789a3c3e336ebc706e7a1dd660053245fa31f2eb0b3c907e3a50e39"} Nov 29 
07:25:29 crc kubenswrapper[4731]: I1129 07:25:29.331098 4731 scope.go:117] "RemoveContainer" containerID="a09eb9953f8560b2bad4a5fbeecd4030ecaf3bbcf7c5f4207d0759b7b75e4ef5" Nov 29 07:25:29 crc kubenswrapper[4731]: I1129 07:25:29.330984 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-rph6r" Nov 29 07:25:29 crc kubenswrapper[4731]: I1129 07:25:29.333831 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-kb6sz" event={"ID":"38f8e9cf-be31-447d-9e2f-0efad4bc3703","Type":"ContainerStarted","Data":"d6c338e540d22684df8ae1e7ddc644d39bff3e11e8f01edf5d0aca9da74af4e0"} Nov 29 07:25:29 crc kubenswrapper[4731]: I1129 07:25:29.338050 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-6crwl" event={"ID":"d06ee632-6fed-4e8b-a8e1-db2f0d542f97","Type":"ContainerStarted","Data":"8cb7aaeb042b7dd4bb6e77d3384ca3aa2c5509e97f00b15b9b92749c98e2159b"} Nov 29 07:25:29 crc kubenswrapper[4731]: I1129 07:25:29.340304 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-qjjnr" event={"ID":"2d843330-ffae-4bc9-a8b3-c2df891a1aae","Type":"ContainerStarted","Data":"35baaa7729762d17b4d7d6f2de4d3968e88ea07e8ec8701ab4a49abef88ae6f3"} Nov 29 07:25:29 crc kubenswrapper[4731]: I1129 07:25:29.340333 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-qjjnr" event={"ID":"2d843330-ffae-4bc9-a8b3-c2df891a1aae","Type":"ContainerStarted","Data":"5dd0997503a6a7b4b9561a35602e3d122669eb1e4c78578724bc4a3f8110fe5d"} Nov 29 07:25:29 crc kubenswrapper[4731]: I1129 07:25:29.370306 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-kb6sz" podStartSLOduration=3.370271673 podStartE2EDuration="3.370271673s" podCreationTimestamp="2025-11-29 07:25:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2025-11-29 07:25:29.363835875 +0000 UTC m=+1168.254196988" watchObservedRunningTime="2025-11-29 07:25:29.370271673 +0000 UTC m=+1168.260632776" Nov 29 07:25:29 crc kubenswrapper[4731]: I1129 07:25:29.382732 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n9wqm\" (UniqueName: \"kubernetes.io/projected/6db0f464-980b-441d-aa98-fcddc7d4fd49-kube-api-access-n9wqm\") pod \"6db0f464-980b-441d-aa98-fcddc7d4fd49\" (UID: \"6db0f464-980b-441d-aa98-fcddc7d4fd49\") " Nov 29 07:25:29 crc kubenswrapper[4731]: I1129 07:25:29.382950 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6db0f464-980b-441d-aa98-fcddc7d4fd49-dns-swift-storage-0\") pod \"6db0f464-980b-441d-aa98-fcddc7d4fd49\" (UID: \"6db0f464-980b-441d-aa98-fcddc7d4fd49\") " Nov 29 07:25:29 crc kubenswrapper[4731]: I1129 07:25:29.383032 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6db0f464-980b-441d-aa98-fcddc7d4fd49-ovsdbserver-nb\") pod \"6db0f464-980b-441d-aa98-fcddc7d4fd49\" (UID: \"6db0f464-980b-441d-aa98-fcddc7d4fd49\") " Nov 29 07:25:29 crc kubenswrapper[4731]: I1129 07:25:29.383085 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6db0f464-980b-441d-aa98-fcddc7d4fd49-ovsdbserver-sb\") pod \"6db0f464-980b-441d-aa98-fcddc7d4fd49\" (UID: \"6db0f464-980b-441d-aa98-fcddc7d4fd49\") " Nov 29 07:25:29 crc kubenswrapper[4731]: I1129 07:25:29.383182 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6db0f464-980b-441d-aa98-fcddc7d4fd49-config\") pod \"6db0f464-980b-441d-aa98-fcddc7d4fd49\" (UID: \"6db0f464-980b-441d-aa98-fcddc7d4fd49\") " Nov 29 07:25:29 crc kubenswrapper[4731]: I1129 
07:25:29.383398 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6db0f464-980b-441d-aa98-fcddc7d4fd49-dns-svc\") pod \"6db0f464-980b-441d-aa98-fcddc7d4fd49\" (UID: \"6db0f464-980b-441d-aa98-fcddc7d4fd49\") " Nov 29 07:25:29 crc kubenswrapper[4731]: I1129 07:25:29.386695 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-qjjnr" podStartSLOduration=3.386669542 podStartE2EDuration="3.386669542s" podCreationTimestamp="2025-11-29 07:25:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:25:29.384187009 +0000 UTC m=+1168.274548112" watchObservedRunningTime="2025-11-29 07:25:29.386669542 +0000 UTC m=+1168.277030645" Nov 29 07:25:29 crc kubenswrapper[4731]: I1129 07:25:29.404737 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6db0f464-980b-441d-aa98-fcddc7d4fd49-kube-api-access-n9wqm" (OuterVolumeSpecName: "kube-api-access-n9wqm") pod "6db0f464-980b-441d-aa98-fcddc7d4fd49" (UID: "6db0f464-980b-441d-aa98-fcddc7d4fd49"). InnerVolumeSpecName "kube-api-access-n9wqm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:25:29 crc kubenswrapper[4731]: I1129 07:25:29.431128 4731 scope.go:117] "RemoveContainer" containerID="2d00b79e0b48dfaa20703e6519ad939cb910fdcb56e980753831664f13258057" Nov 29 07:25:29 crc kubenswrapper[4731]: I1129 07:25:29.459913 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6db0f464-980b-441d-aa98-fcddc7d4fd49-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6db0f464-980b-441d-aa98-fcddc7d4fd49" (UID: "6db0f464-980b-441d-aa98-fcddc7d4fd49"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:25:29 crc kubenswrapper[4731]: I1129 07:25:29.502553 4731 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6db0f464-980b-441d-aa98-fcddc7d4fd49-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:29 crc kubenswrapper[4731]: I1129 07:25:29.507730 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n9wqm\" (UniqueName: \"kubernetes.io/projected/6db0f464-980b-441d-aa98-fcddc7d4fd49-kube-api-access-n9wqm\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:29 crc kubenswrapper[4731]: I1129 07:25:29.509067 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6db0f464-980b-441d-aa98-fcddc7d4fd49-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6db0f464-980b-441d-aa98-fcddc7d4fd49" (UID: "6db0f464-980b-441d-aa98-fcddc7d4fd49"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:25:29 crc kubenswrapper[4731]: I1129 07:25:29.511150 4731 scope.go:117] "RemoveContainer" containerID="a09eb9953f8560b2bad4a5fbeecd4030ecaf3bbcf7c5f4207d0759b7b75e4ef5" Nov 29 07:25:29 crc kubenswrapper[4731]: E1129 07:25:29.511978 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a09eb9953f8560b2bad4a5fbeecd4030ecaf3bbcf7c5f4207d0759b7b75e4ef5\": container with ID starting with a09eb9953f8560b2bad4a5fbeecd4030ecaf3bbcf7c5f4207d0759b7b75e4ef5 not found: ID does not exist" containerID="a09eb9953f8560b2bad4a5fbeecd4030ecaf3bbcf7c5f4207d0759b7b75e4ef5" Nov 29 07:25:29 crc kubenswrapper[4731]: I1129 07:25:29.512037 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a09eb9953f8560b2bad4a5fbeecd4030ecaf3bbcf7c5f4207d0759b7b75e4ef5"} err="failed to get container status 
\"a09eb9953f8560b2bad4a5fbeecd4030ecaf3bbcf7c5f4207d0759b7b75e4ef5\": rpc error: code = NotFound desc = could not find container \"a09eb9953f8560b2bad4a5fbeecd4030ecaf3bbcf7c5f4207d0759b7b75e4ef5\": container with ID starting with a09eb9953f8560b2bad4a5fbeecd4030ecaf3bbcf7c5f4207d0759b7b75e4ef5 not found: ID does not exist" Nov 29 07:25:29 crc kubenswrapper[4731]: I1129 07:25:29.512071 4731 scope.go:117] "RemoveContainer" containerID="2d00b79e0b48dfaa20703e6519ad939cb910fdcb56e980753831664f13258057" Nov 29 07:25:29 crc kubenswrapper[4731]: E1129 07:25:29.512812 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d00b79e0b48dfaa20703e6519ad939cb910fdcb56e980753831664f13258057\": container with ID starting with 2d00b79e0b48dfaa20703e6519ad939cb910fdcb56e980753831664f13258057 not found: ID does not exist" containerID="2d00b79e0b48dfaa20703e6519ad939cb910fdcb56e980753831664f13258057" Nov 29 07:25:29 crc kubenswrapper[4731]: I1129 07:25:29.512874 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d00b79e0b48dfaa20703e6519ad939cb910fdcb56e980753831664f13258057"} err="failed to get container status \"2d00b79e0b48dfaa20703e6519ad939cb910fdcb56e980753831664f13258057\": rpc error: code = NotFound desc = could not find container \"2d00b79e0b48dfaa20703e6519ad939cb910fdcb56e980753831664f13258057\": container with ID starting with 2d00b79e0b48dfaa20703e6519ad939cb910fdcb56e980753831664f13258057 not found: ID does not exist" Nov 29 07:25:29 crc kubenswrapper[4731]: I1129 07:25:29.542106 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6db0f464-980b-441d-aa98-fcddc7d4fd49-config" (OuterVolumeSpecName: "config") pod "6db0f464-980b-441d-aa98-fcddc7d4fd49" (UID: "6db0f464-980b-441d-aa98-fcddc7d4fd49"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:25:29 crc kubenswrapper[4731]: I1129 07:25:29.595652 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-hvz7k"] Nov 29 07:25:29 crc kubenswrapper[4731]: I1129 07:25:29.595673 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6db0f464-980b-441d-aa98-fcddc7d4fd49-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "6db0f464-980b-441d-aa98-fcddc7d4fd49" (UID: "6db0f464-980b-441d-aa98-fcddc7d4fd49"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:25:29 crc kubenswrapper[4731]: I1129 07:25:29.610075 4731 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6db0f464-980b-441d-aa98-fcddc7d4fd49-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:29 crc kubenswrapper[4731]: I1129 07:25:29.610113 4731 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6db0f464-980b-441d-aa98-fcddc7d4fd49-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:29 crc kubenswrapper[4731]: I1129 07:25:29.610126 4731 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6db0f464-980b-441d-aa98-fcddc7d4fd49-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:29 crc kubenswrapper[4731]: I1129 07:25:29.636658 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7c4c555987-wx7mp"] Nov 29 07:25:29 crc kubenswrapper[4731]: I1129 07:25:29.688329 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-x6bxr"] Nov 29 07:25:29 crc kubenswrapper[4731]: I1129 07:25:29.702348 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 29 07:25:29 crc kubenswrapper[4731]: 
I1129 07:25:29.716271 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-676b8dc849-9dqb8"] Nov 29 07:25:29 crc kubenswrapper[4731]: I1129 07:25:29.796337 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-8b8b69b5-p8jtq"] Nov 29 07:25:29 crc kubenswrapper[4731]: E1129 07:25:29.817156 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6db0f464-980b-441d-aa98-fcddc7d4fd49" containerName="init" Nov 29 07:25:29 crc kubenswrapper[4731]: I1129 07:25:29.817195 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="6db0f464-980b-441d-aa98-fcddc7d4fd49" containerName="init" Nov 29 07:25:29 crc kubenswrapper[4731]: E1129 07:25:29.817281 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6db0f464-980b-441d-aa98-fcddc7d4fd49" containerName="dnsmasq-dns" Nov 29 07:25:29 crc kubenswrapper[4731]: I1129 07:25:29.817291 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="6db0f464-980b-441d-aa98-fcddc7d4fd49" containerName="dnsmasq-dns" Nov 29 07:25:29 crc kubenswrapper[4731]: I1129 07:25:29.818399 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="6db0f464-980b-441d-aa98-fcddc7d4fd49" containerName="dnsmasq-dns" Nov 29 07:25:29 crc kubenswrapper[4731]: I1129 07:25:29.837386 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-8b8b69b5-p8jtq" Nov 29 07:25:29 crc kubenswrapper[4731]: W1129 07:25:29.871636 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod13bcd648_c6e2_4b6e_a660_da2f47f09a06.slice/crio-9f963e923128ed454e90c2ad79112d4e1cff356ffce67859ea06af0195952097 WatchSource:0}: Error finding container 9f963e923128ed454e90c2ad79112d4e1cff356ffce67859ea06af0195952097: Status 404 returned error can't find the container with id 9f963e923128ed454e90c2ad79112d4e1cff356ffce67859ea06af0195952097 Nov 29 07:25:29 crc kubenswrapper[4731]: I1129 07:25:29.878288 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6db0f464-980b-441d-aa98-fcddc7d4fd49-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6db0f464-980b-441d-aa98-fcddc7d4fd49" (UID: "6db0f464-980b-441d-aa98-fcddc7d4fd49"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:25:30 crc kubenswrapper[4731]: I1129 07:25:30.009227 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4169afaa-8657-4e8c-bac2-fd640f9ed116-horizon-secret-key\") pod \"horizon-8b8b69b5-p8jtq\" (UID: \"4169afaa-8657-4e8c-bac2-fd640f9ed116\") " pod="openstack/horizon-8b8b69b5-p8jtq" Nov 29 07:25:30 crc kubenswrapper[4731]: I1129 07:25:30.009483 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4169afaa-8657-4e8c-bac2-fd640f9ed116-config-data\") pod \"horizon-8b8b69b5-p8jtq\" (UID: \"4169afaa-8657-4e8c-bac2-fd640f9ed116\") " pod="openstack/horizon-8b8b69b5-p8jtq" Nov 29 07:25:30 crc kubenswrapper[4731]: I1129 07:25:30.009684 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4169afaa-8657-4e8c-bac2-fd640f9ed116-scripts\") pod \"horizon-8b8b69b5-p8jtq\" (UID: \"4169afaa-8657-4e8c-bac2-fd640f9ed116\") " pod="openstack/horizon-8b8b69b5-p8jtq" Nov 29 07:25:30 crc kubenswrapper[4731]: I1129 07:25:30.009842 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qslbx\" (UniqueName: \"kubernetes.io/projected/4169afaa-8657-4e8c-bac2-fd640f9ed116-kube-api-access-qslbx\") pod \"horizon-8b8b69b5-p8jtq\" (UID: \"4169afaa-8657-4e8c-bac2-fd640f9ed116\") " pod="openstack/horizon-8b8b69b5-p8jtq" Nov 29 07:25:30 crc kubenswrapper[4731]: I1129 07:25:30.009901 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4169afaa-8657-4e8c-bac2-fd640f9ed116-logs\") pod \"horizon-8b8b69b5-p8jtq\" (UID: \"4169afaa-8657-4e8c-bac2-fd640f9ed116\") " pod="openstack/horizon-8b8b69b5-p8jtq" Nov 29 07:25:30 crc kubenswrapper[4731]: I1129 07:25:30.010293 4731 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6db0f464-980b-441d-aa98-fcddc7d4fd49-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:30 crc kubenswrapper[4731]: I1129 07:25:30.112694 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qslbx\" (UniqueName: \"kubernetes.io/projected/4169afaa-8657-4e8c-bac2-fd640f9ed116-kube-api-access-qslbx\") pod \"horizon-8b8b69b5-p8jtq\" (UID: \"4169afaa-8657-4e8c-bac2-fd640f9ed116\") " pod="openstack/horizon-8b8b69b5-p8jtq" Nov 29 07:25:30 crc kubenswrapper[4731]: I1129 07:25:30.113444 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4169afaa-8657-4e8c-bac2-fd640f9ed116-logs\") pod \"horizon-8b8b69b5-p8jtq\" (UID: \"4169afaa-8657-4e8c-bac2-fd640f9ed116\") " 
pod="openstack/horizon-8b8b69b5-p8jtq" Nov 29 07:25:30 crc kubenswrapper[4731]: I1129 07:25:30.113512 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4169afaa-8657-4e8c-bac2-fd640f9ed116-horizon-secret-key\") pod \"horizon-8b8b69b5-p8jtq\" (UID: \"4169afaa-8657-4e8c-bac2-fd640f9ed116\") " pod="openstack/horizon-8b8b69b5-p8jtq" Nov 29 07:25:30 crc kubenswrapper[4731]: I1129 07:25:30.113618 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4169afaa-8657-4e8c-bac2-fd640f9ed116-config-data\") pod \"horizon-8b8b69b5-p8jtq\" (UID: \"4169afaa-8657-4e8c-bac2-fd640f9ed116\") " pod="openstack/horizon-8b8b69b5-p8jtq" Nov 29 07:25:30 crc kubenswrapper[4731]: I1129 07:25:30.113690 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4169afaa-8657-4e8c-bac2-fd640f9ed116-scripts\") pod \"horizon-8b8b69b5-p8jtq\" (UID: \"4169afaa-8657-4e8c-bac2-fd640f9ed116\") " pod="openstack/horizon-8b8b69b5-p8jtq" Nov 29 07:25:30 crc kubenswrapper[4731]: I1129 07:25:30.115248 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4169afaa-8657-4e8c-bac2-fd640f9ed116-logs\") pod \"horizon-8b8b69b5-p8jtq\" (UID: \"4169afaa-8657-4e8c-bac2-fd640f9ed116\") " pod="openstack/horizon-8b8b69b5-p8jtq" Nov 29 07:25:30 crc kubenswrapper[4731]: I1129 07:25:30.117848 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4169afaa-8657-4e8c-bac2-fd640f9ed116-scripts\") pod \"horizon-8b8b69b5-p8jtq\" (UID: \"4169afaa-8657-4e8c-bac2-fd640f9ed116\") " pod="openstack/horizon-8b8b69b5-p8jtq" Nov 29 07:25:30 crc kubenswrapper[4731]: I1129 07:25:30.126970 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4169afaa-8657-4e8c-bac2-fd640f9ed116-horizon-secret-key\") pod \"horizon-8b8b69b5-p8jtq\" (UID: \"4169afaa-8657-4e8c-bac2-fd640f9ed116\") " pod="openstack/horizon-8b8b69b5-p8jtq" Nov 29 07:25:30 crc kubenswrapper[4731]: I1129 07:25:30.128422 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4169afaa-8657-4e8c-bac2-fd640f9ed116-config-data\") pod \"horizon-8b8b69b5-p8jtq\" (UID: \"4169afaa-8657-4e8c-bac2-fd640f9ed116\") " pod="openstack/horizon-8b8b69b5-p8jtq" Nov 29 07:25:30 crc kubenswrapper[4731]: I1129 07:25:30.141219 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-8b8b69b5-p8jtq"] Nov 29 07:25:30 crc kubenswrapper[4731]: I1129 07:25:30.141291 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 29 07:25:30 crc kubenswrapper[4731]: I1129 07:25:30.141427 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 29 07:25:30 crc kubenswrapper[4731]: I1129 07:25:30.141442 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:25:30 crc kubenswrapper[4731]: I1129 07:25:30.141462 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 29 07:25:30 crc kubenswrapper[4731]: I1129 07:25:30.173467 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qslbx\" (UniqueName: \"kubernetes.io/projected/4169afaa-8657-4e8c-bac2-fd640f9ed116-kube-api-access-qslbx\") pod \"horizon-8b8b69b5-p8jtq\" (UID: \"4169afaa-8657-4e8c-bac2-fd640f9ed116\") " pod="openstack/horizon-8b8b69b5-p8jtq" Nov 29 07:25:30 crc kubenswrapper[4731]: I1129 07:25:30.182804 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-8b8b69b5-p8jtq" Nov 29 07:25:30 crc kubenswrapper[4731]: I1129 07:25:30.205719 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-rph6r"] Nov 29 07:25:30 crc kubenswrapper[4731]: I1129 07:25:30.221147 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-rph6r"] Nov 29 07:25:30 crc kubenswrapper[4731]: I1129 07:25:30.510994 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f","Type":"ContainerStarted","Data":"d1edfc5259d3c6d11de248be0dca39c96070cfe36e6f0ea52db3b70c7ac76b6a"} Nov 29 07:25:30 crc kubenswrapper[4731]: I1129 07:25:30.529764 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7c4c555987-wx7mp" event={"ID":"1f57a857-cd62-458f-9a5f-1451ff9d5628","Type":"ContainerStarted","Data":"beb72452e189a402f4e8f6a636a19eab75bf10eebfb0bd0a0e16dfa809640071"} Nov 29 07:25:30 crc kubenswrapper[4731]: I1129 07:25:30.532921 4731 generic.go:334] "Generic (PLEG): container finished" podID="d06ee632-6fed-4e8b-a8e1-db2f0d542f97" containerID="28ed214e795c362d28125d7e999e26898ae1c3c96ec3e63131bb5dbd74c4d18b" exitCode=0 Nov 29 07:25:30 crc kubenswrapper[4731]: I1129 07:25:30.532982 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-6crwl" event={"ID":"d06ee632-6fed-4e8b-a8e1-db2f0d542f97","Type":"ContainerDied","Data":"28ed214e795c362d28125d7e999e26898ae1c3c96ec3e63131bb5dbd74c4d18b"} Nov 29 07:25:30 crc kubenswrapper[4731]: I1129 07:25:30.542083 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-hvz7k" event={"ID":"895a9751-f534-47b7-8e60-f10a608dd46e","Type":"ContainerStarted","Data":"4d939dfa0b549cb5f486f0331c2da87cfe3c72a05046c9fa4250c851d96fb539"} Nov 29 07:25:30 crc kubenswrapper[4731]: I1129 07:25:30.563133 4731 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"dabc1a57-987c-452a-bb15-b26368e6cab2","Type":"ContainerStarted","Data":"abcacabf27ed11d77ef7b6c7517bd2b3c96c6b73191775b25aea804ebf93fbb9"} Nov 29 07:25:30 crc kubenswrapper[4731]: I1129 07:25:30.587664 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-x6bxr" event={"ID":"13bcd648-c6e2-4b6e-a660-da2f47f09a06","Type":"ContainerStarted","Data":"9f963e923128ed454e90c2ad79112d4e1cff356ffce67859ea06af0195952097"} Nov 29 07:25:30 crc kubenswrapper[4731]: I1129 07:25:30.911937 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-8b8b69b5-p8jtq"] Nov 29 07:25:30 crc kubenswrapper[4731]: W1129 07:25:30.938472 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4169afaa_8657_4e8c_bac2_fd640f9ed116.slice/crio-426694ab19ceecd7dec38bef157963596ad56076fb7a3d53b274070cbb4b1e97 WatchSource:0}: Error finding container 426694ab19ceecd7dec38bef157963596ad56076fb7a3d53b274070cbb4b1e97: Status 404 returned error can't find the container with id 426694ab19ceecd7dec38bef157963596ad56076fb7a3d53b274070cbb4b1e97 Nov 29 07:25:31 crc kubenswrapper[4731]: I1129 07:25:31.217855 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-6crwl" Nov 29 07:25:31 crc kubenswrapper[4731]: I1129 07:25:31.280954 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d06ee632-6fed-4e8b-a8e1-db2f0d542f97-dns-svc\") pod \"d06ee632-6fed-4e8b-a8e1-db2f0d542f97\" (UID: \"d06ee632-6fed-4e8b-a8e1-db2f0d542f97\") " Nov 29 07:25:31 crc kubenswrapper[4731]: I1129 07:25:31.281189 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d06ee632-6fed-4e8b-a8e1-db2f0d542f97-ovsdbserver-nb\") pod \"d06ee632-6fed-4e8b-a8e1-db2f0d542f97\" (UID: \"d06ee632-6fed-4e8b-a8e1-db2f0d542f97\") " Nov 29 07:25:31 crc kubenswrapper[4731]: I1129 07:25:31.281325 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d06ee632-6fed-4e8b-a8e1-db2f0d542f97-ovsdbserver-sb\") pod \"d06ee632-6fed-4e8b-a8e1-db2f0d542f97\" (UID: \"d06ee632-6fed-4e8b-a8e1-db2f0d542f97\") " Nov 29 07:25:31 crc kubenswrapper[4731]: I1129 07:25:31.281367 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d06ee632-6fed-4e8b-a8e1-db2f0d542f97-dns-swift-storage-0\") pod \"d06ee632-6fed-4e8b-a8e1-db2f0d542f97\" (UID: \"d06ee632-6fed-4e8b-a8e1-db2f0d542f97\") " Nov 29 07:25:31 crc kubenswrapper[4731]: I1129 07:25:31.281421 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b8h7g\" (UniqueName: \"kubernetes.io/projected/d06ee632-6fed-4e8b-a8e1-db2f0d542f97-kube-api-access-b8h7g\") pod \"d06ee632-6fed-4e8b-a8e1-db2f0d542f97\" (UID: \"d06ee632-6fed-4e8b-a8e1-db2f0d542f97\") " Nov 29 07:25:31 crc kubenswrapper[4731]: I1129 07:25:31.281550 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/d06ee632-6fed-4e8b-a8e1-db2f0d542f97-config\") pod \"d06ee632-6fed-4e8b-a8e1-db2f0d542f97\" (UID: \"d06ee632-6fed-4e8b-a8e1-db2f0d542f97\") " Nov 29 07:25:31 crc kubenswrapper[4731]: I1129 07:25:31.298508 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d06ee632-6fed-4e8b-a8e1-db2f0d542f97-kube-api-access-b8h7g" (OuterVolumeSpecName: "kube-api-access-b8h7g") pod "d06ee632-6fed-4e8b-a8e1-db2f0d542f97" (UID: "d06ee632-6fed-4e8b-a8e1-db2f0d542f97"). InnerVolumeSpecName "kube-api-access-b8h7g". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:25:31 crc kubenswrapper[4731]: I1129 07:25:31.327950 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d06ee632-6fed-4e8b-a8e1-db2f0d542f97-config" (OuterVolumeSpecName: "config") pod "d06ee632-6fed-4e8b-a8e1-db2f0d542f97" (UID: "d06ee632-6fed-4e8b-a8e1-db2f0d542f97"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:25:31 crc kubenswrapper[4731]: I1129 07:25:31.334315 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d06ee632-6fed-4e8b-a8e1-db2f0d542f97-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d06ee632-6fed-4e8b-a8e1-db2f0d542f97" (UID: "d06ee632-6fed-4e8b-a8e1-db2f0d542f97"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:25:31 crc kubenswrapper[4731]: I1129 07:25:31.344535 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d06ee632-6fed-4e8b-a8e1-db2f0d542f97-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d06ee632-6fed-4e8b-a8e1-db2f0d542f97" (UID: "d06ee632-6fed-4e8b-a8e1-db2f0d542f97"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:25:31 crc kubenswrapper[4731]: I1129 07:25:31.345816 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d06ee632-6fed-4e8b-a8e1-db2f0d542f97-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d06ee632-6fed-4e8b-a8e1-db2f0d542f97" (UID: "d06ee632-6fed-4e8b-a8e1-db2f0d542f97"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:25:31 crc kubenswrapper[4731]: I1129 07:25:31.347652 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d06ee632-6fed-4e8b-a8e1-db2f0d542f97-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d06ee632-6fed-4e8b-a8e1-db2f0d542f97" (UID: "d06ee632-6fed-4e8b-a8e1-db2f0d542f97"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:25:31 crc kubenswrapper[4731]: I1129 07:25:31.384272 4731 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d06ee632-6fed-4e8b-a8e1-db2f0d542f97-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:31 crc kubenswrapper[4731]: I1129 07:25:31.384317 4731 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d06ee632-6fed-4e8b-a8e1-db2f0d542f97-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:31 crc kubenswrapper[4731]: I1129 07:25:31.384334 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b8h7g\" (UniqueName: \"kubernetes.io/projected/d06ee632-6fed-4e8b-a8e1-db2f0d542f97-kube-api-access-b8h7g\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:31 crc kubenswrapper[4731]: I1129 07:25:31.384352 4731 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d06ee632-6fed-4e8b-a8e1-db2f0d542f97-config\") on node \"crc\" 
DevicePath \"\"" Nov 29 07:25:31 crc kubenswrapper[4731]: I1129 07:25:31.384364 4731 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d06ee632-6fed-4e8b-a8e1-db2f0d542f97-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:31 crc kubenswrapper[4731]: I1129 07:25:31.384377 4731 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d06ee632-6fed-4e8b-a8e1-db2f0d542f97-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:31 crc kubenswrapper[4731]: I1129 07:25:31.619086 4731 generic.go:334] "Generic (PLEG): container finished" podID="895a9751-f534-47b7-8e60-f10a608dd46e" containerID="2e96fa5d1c2d853dec4e655f3e30d33b9946b56e0446032f1e5eb5eaf12a104e" exitCode=0 Nov 29 07:25:31 crc kubenswrapper[4731]: I1129 07:25:31.619277 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-hvz7k" event={"ID":"895a9751-f534-47b7-8e60-f10a608dd46e","Type":"ContainerDied","Data":"2e96fa5d1c2d853dec4e655f3e30d33b9946b56e0446032f1e5eb5eaf12a104e"} Nov 29 07:25:31 crc kubenswrapper[4731]: I1129 07:25:31.624114 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"dabc1a57-987c-452a-bb15-b26368e6cab2","Type":"ContainerStarted","Data":"e9e2f9fd3156819c49c80a4f68ae47e4624468f3d69dd7c55e38812ad1cdb4a6"} Nov 29 07:25:31 crc kubenswrapper[4731]: I1129 07:25:31.633212 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f","Type":"ContainerStarted","Data":"fe85d4e9a4ac80e776b6b3382275735ff1a8e2a5ffe928e4e6d968dc93108793"} Nov 29 07:25:31 crc kubenswrapper[4731]: I1129 07:25:31.655557 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-8b8b69b5-p8jtq" 
event={"ID":"4169afaa-8657-4e8c-bac2-fd640f9ed116","Type":"ContainerStarted","Data":"426694ab19ceecd7dec38bef157963596ad56076fb7a3d53b274070cbb4b1e97"} Nov 29 07:25:31 crc kubenswrapper[4731]: I1129 07:25:31.665313 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-6crwl" event={"ID":"d06ee632-6fed-4e8b-a8e1-db2f0d542f97","Type":"ContainerDied","Data":"8cb7aaeb042b7dd4bb6e77d3384ca3aa2c5509e97f00b15b9b92749c98e2159b"} Nov 29 07:25:31 crc kubenswrapper[4731]: I1129 07:25:31.665630 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-6crwl" Nov 29 07:25:31 crc kubenswrapper[4731]: I1129 07:25:31.665684 4731 scope.go:117] "RemoveContainer" containerID="28ed214e795c362d28125d7e999e26898ae1c3c96ec3e63131bb5dbd74c4d18b" Nov 29 07:25:31 crc kubenswrapper[4731]: I1129 07:25:31.786756 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-6crwl"] Nov 29 07:25:31 crc kubenswrapper[4731]: I1129 07:25:31.882555 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6db0f464-980b-441d-aa98-fcddc7d4fd49" path="/var/lib/kubelet/pods/6db0f464-980b-441d-aa98-fcddc7d4fd49/volumes" Nov 29 07:25:31 crc kubenswrapper[4731]: I1129 07:25:31.884293 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-6crwl"] Nov 29 07:25:32 crc kubenswrapper[4731]: I1129 07:25:32.687419 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-hvz7k" event={"ID":"895a9751-f534-47b7-8e60-f10a608dd46e","Type":"ContainerStarted","Data":"fe4d2c837150a4ae0517636ae9cbc6c2e19f532dc3107d1a96b1f6ed4ee240ea"} Nov 29 07:25:32 crc kubenswrapper[4731]: I1129 07:25:32.688044 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-56df8fb6b7-hvz7k" Nov 29 07:25:32 crc kubenswrapper[4731]: I1129 07:25:32.715912 4731 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-56df8fb6b7-hvz7k" podStartSLOduration=5.715891506 podStartE2EDuration="5.715891506s" podCreationTimestamp="2025-11-29 07:25:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:25:32.710096827 +0000 UTC m=+1171.600457930" watchObservedRunningTime="2025-11-29 07:25:32.715891506 +0000 UTC m=+1171.606252599" Nov 29 07:25:33 crc kubenswrapper[4731]: I1129 07:25:33.722156 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"dabc1a57-987c-452a-bb15-b26368e6cab2","Type":"ContainerStarted","Data":"c1b26afdb352e68a7b6dc66491d18c720326fa8eddfd67339f39e26398994e79"} Nov 29 07:25:33 crc kubenswrapper[4731]: I1129 07:25:33.722873 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="dabc1a57-987c-452a-bb15-b26368e6cab2" containerName="glance-log" containerID="cri-o://e9e2f9fd3156819c49c80a4f68ae47e4624468f3d69dd7c55e38812ad1cdb4a6" gracePeriod=30 Nov 29 07:25:33 crc kubenswrapper[4731]: I1129 07:25:33.723132 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="dabc1a57-987c-452a-bb15-b26368e6cab2" containerName="glance-httpd" containerID="cri-o://c1b26afdb352e68a7b6dc66491d18c720326fa8eddfd67339f39e26398994e79" gracePeriod=30 Nov 29 07:25:33 crc kubenswrapper[4731]: I1129 07:25:33.727811 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f" containerName="glance-log" containerID="cri-o://fe85d4e9a4ac80e776b6b3382275735ff1a8e2a5ffe928e4e6d968dc93108793" gracePeriod=30 Nov 29 07:25:33 crc kubenswrapper[4731]: I1129 07:25:33.728065 4731 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f" containerName="glance-httpd" containerID="cri-o://cd3d1d4d3d977a785c235b96341e9713c77ff855fc1998a86804ef4ea93e3c2b" gracePeriod=30 Nov 29 07:25:33 crc kubenswrapper[4731]: I1129 07:25:33.728162 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f","Type":"ContainerStarted","Data":"cd3d1d4d3d977a785c235b96341e9713c77ff855fc1998a86804ef4ea93e3c2b"} Nov 29 07:25:33 crc kubenswrapper[4731]: I1129 07:25:33.795379 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.795338994 podStartE2EDuration="6.795338994s" podCreationTimestamp="2025-11-29 07:25:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:25:33.77124788 +0000 UTC m=+1172.661608983" watchObservedRunningTime="2025-11-29 07:25:33.795338994 +0000 UTC m=+1172.685700097" Nov 29 07:25:33 crc kubenswrapper[4731]: I1129 07:25:33.830547 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.830510151 podStartE2EDuration="6.830510151s" podCreationTimestamp="2025-11-29 07:25:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:25:33.82019767 +0000 UTC m=+1172.710558793" watchObservedRunningTime="2025-11-29 07:25:33.830510151 +0000 UTC m=+1172.720871244" Nov 29 07:25:33 crc kubenswrapper[4731]: I1129 07:25:33.847585 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d06ee632-6fed-4e8b-a8e1-db2f0d542f97" path="/var/lib/kubelet/pods/d06ee632-6fed-4e8b-a8e1-db2f0d542f97/volumes" Nov 29 07:25:34 crc 
kubenswrapper[4731]: I1129 07:25:34.751262 4731 generic.go:334] "Generic (PLEG): container finished" podID="f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f" containerID="cd3d1d4d3d977a785c235b96341e9713c77ff855fc1998a86804ef4ea93e3c2b" exitCode=143 Nov 29 07:25:34 crc kubenswrapper[4731]: I1129 07:25:34.751749 4731 generic.go:334] "Generic (PLEG): container finished" podID="f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f" containerID="fe85d4e9a4ac80e776b6b3382275735ff1a8e2a5ffe928e4e6d968dc93108793" exitCode=143 Nov 29 07:25:34 crc kubenswrapper[4731]: I1129 07:25:34.751676 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f","Type":"ContainerDied","Data":"cd3d1d4d3d977a785c235b96341e9713c77ff855fc1998a86804ef4ea93e3c2b"} Nov 29 07:25:34 crc kubenswrapper[4731]: I1129 07:25:34.751880 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f","Type":"ContainerDied","Data":"fe85d4e9a4ac80e776b6b3382275735ff1a8e2a5ffe928e4e6d968dc93108793"} Nov 29 07:25:34 crc kubenswrapper[4731]: I1129 07:25:34.756871 4731 generic.go:334] "Generic (PLEG): container finished" podID="dabc1a57-987c-452a-bb15-b26368e6cab2" containerID="c1b26afdb352e68a7b6dc66491d18c720326fa8eddfd67339f39e26398994e79" exitCode=143 Nov 29 07:25:34 crc kubenswrapper[4731]: I1129 07:25:34.756912 4731 generic.go:334] "Generic (PLEG): container finished" podID="dabc1a57-987c-452a-bb15-b26368e6cab2" containerID="e9e2f9fd3156819c49c80a4f68ae47e4624468f3d69dd7c55e38812ad1cdb4a6" exitCode=143 Nov 29 07:25:34 crc kubenswrapper[4731]: I1129 07:25:34.756945 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"dabc1a57-987c-452a-bb15-b26368e6cab2","Type":"ContainerDied","Data":"c1b26afdb352e68a7b6dc66491d18c720326fa8eddfd67339f39e26398994e79"} Nov 29 07:25:34 crc kubenswrapper[4731]: I1129 
07:25:34.756990 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"dabc1a57-987c-452a-bb15-b26368e6cab2","Type":"ContainerDied","Data":"e9e2f9fd3156819c49c80a4f68ae47e4624468f3d69dd7c55e38812ad1cdb4a6"} Nov 29 07:25:35 crc kubenswrapper[4731]: I1129 07:25:35.805301 4731 generic.go:334] "Generic (PLEG): container finished" podID="38f8e9cf-be31-447d-9e2f-0efad4bc3703" containerID="d6c338e540d22684df8ae1e7ddc644d39bff3e11e8f01edf5d0aca9da74af4e0" exitCode=0 Nov 29 07:25:35 crc kubenswrapper[4731]: I1129 07:25:35.805391 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-kb6sz" event={"ID":"38f8e9cf-be31-447d-9e2f-0efad4bc3703","Type":"ContainerDied","Data":"d6c338e540d22684df8ae1e7ddc644d39bff3e11e8f01edf5d0aca9da74af4e0"} Nov 29 07:25:35 crc kubenswrapper[4731]: I1129 07:25:35.858374 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7c4c555987-wx7mp"] Nov 29 07:25:35 crc kubenswrapper[4731]: I1129 07:25:35.909503 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-84cd78f644-7wncn"] Nov 29 07:25:35 crc kubenswrapper[4731]: E1129 07:25:35.910090 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d06ee632-6fed-4e8b-a8e1-db2f0d542f97" containerName="init" Nov 29 07:25:35 crc kubenswrapper[4731]: I1129 07:25:35.910121 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="d06ee632-6fed-4e8b-a8e1-db2f0d542f97" containerName="init" Nov 29 07:25:35 crc kubenswrapper[4731]: I1129 07:25:35.910318 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="d06ee632-6fed-4e8b-a8e1-db2f0d542f97" containerName="init" Nov 29 07:25:35 crc kubenswrapper[4731]: I1129 07:25:35.911503 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-84cd78f644-7wncn" Nov 29 07:25:35 crc kubenswrapper[4731]: I1129 07:25:35.913416 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Nov 29 07:25:35 crc kubenswrapper[4731]: I1129 07:25:35.933442 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-84cd78f644-7wncn"] Nov 29 07:25:35 crc kubenswrapper[4731]: I1129 07:25:35.991042 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-8b8b69b5-p8jtq"] Nov 29 07:25:36 crc kubenswrapper[4731]: I1129 07:25:36.027647 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5fcdbcfb48-gmbcm"] Nov 29 07:25:36 crc kubenswrapper[4731]: I1129 07:25:36.029119 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/bf7f6cfb-9c72-4be9-9177-cd14712e1c1e-horizon-secret-key\") pod \"horizon-84cd78f644-7wncn\" (UID: \"bf7f6cfb-9c72-4be9-9177-cd14712e1c1e\") " pod="openstack/horizon-84cd78f644-7wncn" Nov 29 07:25:36 crc kubenswrapper[4731]: I1129 07:25:36.029178 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf7f6cfb-9c72-4be9-9177-cd14712e1c1e-logs\") pod \"horizon-84cd78f644-7wncn\" (UID: \"bf7f6cfb-9c72-4be9-9177-cd14712e1c1e\") " pod="openstack/horizon-84cd78f644-7wncn" Nov 29 07:25:36 crc kubenswrapper[4731]: I1129 07:25:36.029223 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf7f6cfb-9c72-4be9-9177-cd14712e1c1e-combined-ca-bundle\") pod \"horizon-84cd78f644-7wncn\" (UID: \"bf7f6cfb-9c72-4be9-9177-cd14712e1c1e\") " pod="openstack/horizon-84cd78f644-7wncn" Nov 29 07:25:36 crc kubenswrapper[4731]: I1129 07:25:36.029251 4731 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf7f6cfb-9c72-4be9-9177-cd14712e1c1e-horizon-tls-certs\") pod \"horizon-84cd78f644-7wncn\" (UID: \"bf7f6cfb-9c72-4be9-9177-cd14712e1c1e\") " pod="openstack/horizon-84cd78f644-7wncn" Nov 29 07:25:36 crc kubenswrapper[4731]: I1129 07:25:36.029309 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bf7f6cfb-9c72-4be9-9177-cd14712e1c1e-config-data\") pod \"horizon-84cd78f644-7wncn\" (UID: \"bf7f6cfb-9c72-4be9-9177-cd14712e1c1e\") " pod="openstack/horizon-84cd78f644-7wncn" Nov 29 07:25:36 crc kubenswrapper[4731]: I1129 07:25:36.029327 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bf7f6cfb-9c72-4be9-9177-cd14712e1c1e-scripts\") pod \"horizon-84cd78f644-7wncn\" (UID: \"bf7f6cfb-9c72-4be9-9177-cd14712e1c1e\") " pod="openstack/horizon-84cd78f644-7wncn" Nov 29 07:25:36 crc kubenswrapper[4731]: I1129 07:25:36.029350 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rkvw\" (UniqueName: \"kubernetes.io/projected/bf7f6cfb-9c72-4be9-9177-cd14712e1c1e-kube-api-access-5rkvw\") pod \"horizon-84cd78f644-7wncn\" (UID: \"bf7f6cfb-9c72-4be9-9177-cd14712e1c1e\") " pod="openstack/horizon-84cd78f644-7wncn" Nov 29 07:25:36 crc kubenswrapper[4731]: I1129 07:25:36.029620 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5fcdbcfb48-gmbcm" Nov 29 07:25:36 crc kubenswrapper[4731]: I1129 07:25:36.066736 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5fcdbcfb48-gmbcm"] Nov 29 07:25:36 crc kubenswrapper[4731]: I1129 07:25:36.135999 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3afcf821-ab23-4e13-96e7-2b178314bece-horizon-secret-key\") pod \"horizon-5fcdbcfb48-gmbcm\" (UID: \"3afcf821-ab23-4e13-96e7-2b178314bece\") " pod="openstack/horizon-5fcdbcfb48-gmbcm" Nov 29 07:25:36 crc kubenswrapper[4731]: I1129 07:25:36.136097 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3afcf821-ab23-4e13-96e7-2b178314bece-scripts\") pod \"horizon-5fcdbcfb48-gmbcm\" (UID: \"3afcf821-ab23-4e13-96e7-2b178314bece\") " pod="openstack/horizon-5fcdbcfb48-gmbcm" Nov 29 07:25:36 crc kubenswrapper[4731]: I1129 07:25:36.136201 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/bf7f6cfb-9c72-4be9-9177-cd14712e1c1e-horizon-secret-key\") pod \"horizon-84cd78f644-7wncn\" (UID: \"bf7f6cfb-9c72-4be9-9177-cd14712e1c1e\") " pod="openstack/horizon-84cd78f644-7wncn" Nov 29 07:25:36 crc kubenswrapper[4731]: I1129 07:25:36.136240 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf7f6cfb-9c72-4be9-9177-cd14712e1c1e-logs\") pod \"horizon-84cd78f644-7wncn\" (UID: \"bf7f6cfb-9c72-4be9-9177-cd14712e1c1e\") " pod="openstack/horizon-84cd78f644-7wncn" Nov 29 07:25:36 crc kubenswrapper[4731]: I1129 07:25:36.136490 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/bf7f6cfb-9c72-4be9-9177-cd14712e1c1e-combined-ca-bundle\") pod \"horizon-84cd78f644-7wncn\" (UID: \"bf7f6cfb-9c72-4be9-9177-cd14712e1c1e\") " pod="openstack/horizon-84cd78f644-7wncn" Nov 29 07:25:36 crc kubenswrapper[4731]: I1129 07:25:36.136593 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf7f6cfb-9c72-4be9-9177-cd14712e1c1e-horizon-tls-certs\") pod \"horizon-84cd78f644-7wncn\" (UID: \"bf7f6cfb-9c72-4be9-9177-cd14712e1c1e\") " pod="openstack/horizon-84cd78f644-7wncn" Nov 29 07:25:36 crc kubenswrapper[4731]: I1129 07:25:36.136626 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/3afcf821-ab23-4e13-96e7-2b178314bece-horizon-tls-certs\") pod \"horizon-5fcdbcfb48-gmbcm\" (UID: \"3afcf821-ab23-4e13-96e7-2b178314bece\") " pod="openstack/horizon-5fcdbcfb48-gmbcm" Nov 29 07:25:36 crc kubenswrapper[4731]: I1129 07:25:36.136664 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3afcf821-ab23-4e13-96e7-2b178314bece-combined-ca-bundle\") pod \"horizon-5fcdbcfb48-gmbcm\" (UID: \"3afcf821-ab23-4e13-96e7-2b178314bece\") " pod="openstack/horizon-5fcdbcfb48-gmbcm" Nov 29 07:25:36 crc kubenswrapper[4731]: I1129 07:25:36.136704 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3afcf821-ab23-4e13-96e7-2b178314bece-logs\") pod \"horizon-5fcdbcfb48-gmbcm\" (UID: \"3afcf821-ab23-4e13-96e7-2b178314bece\") " pod="openstack/horizon-5fcdbcfb48-gmbcm" Nov 29 07:25:36 crc kubenswrapper[4731]: I1129 07:25:36.136761 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/3afcf821-ab23-4e13-96e7-2b178314bece-config-data\") pod \"horizon-5fcdbcfb48-gmbcm\" (UID: \"3afcf821-ab23-4e13-96e7-2b178314bece\") " pod="openstack/horizon-5fcdbcfb48-gmbcm" Nov 29 07:25:36 crc kubenswrapper[4731]: I1129 07:25:36.136817 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nc6l\" (UniqueName: \"kubernetes.io/projected/3afcf821-ab23-4e13-96e7-2b178314bece-kube-api-access-6nc6l\") pod \"horizon-5fcdbcfb48-gmbcm\" (UID: \"3afcf821-ab23-4e13-96e7-2b178314bece\") " pod="openstack/horizon-5fcdbcfb48-gmbcm" Nov 29 07:25:36 crc kubenswrapper[4731]: I1129 07:25:36.136908 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bf7f6cfb-9c72-4be9-9177-cd14712e1c1e-config-data\") pod \"horizon-84cd78f644-7wncn\" (UID: \"bf7f6cfb-9c72-4be9-9177-cd14712e1c1e\") " pod="openstack/horizon-84cd78f644-7wncn" Nov 29 07:25:36 crc kubenswrapper[4731]: I1129 07:25:36.136935 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bf7f6cfb-9c72-4be9-9177-cd14712e1c1e-scripts\") pod \"horizon-84cd78f644-7wncn\" (UID: \"bf7f6cfb-9c72-4be9-9177-cd14712e1c1e\") " pod="openstack/horizon-84cd78f644-7wncn" Nov 29 07:25:36 crc kubenswrapper[4731]: I1129 07:25:36.136968 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rkvw\" (UniqueName: \"kubernetes.io/projected/bf7f6cfb-9c72-4be9-9177-cd14712e1c1e-kube-api-access-5rkvw\") pod \"horizon-84cd78f644-7wncn\" (UID: \"bf7f6cfb-9c72-4be9-9177-cd14712e1c1e\") " pod="openstack/horizon-84cd78f644-7wncn" Nov 29 07:25:36 crc kubenswrapper[4731]: I1129 07:25:36.137346 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf7f6cfb-9c72-4be9-9177-cd14712e1c1e-logs\") pod 
\"horizon-84cd78f644-7wncn\" (UID: \"bf7f6cfb-9c72-4be9-9177-cd14712e1c1e\") " pod="openstack/horizon-84cd78f644-7wncn" Nov 29 07:25:36 crc kubenswrapper[4731]: I1129 07:25:36.139299 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bf7f6cfb-9c72-4be9-9177-cd14712e1c1e-config-data\") pod \"horizon-84cd78f644-7wncn\" (UID: \"bf7f6cfb-9c72-4be9-9177-cd14712e1c1e\") " pod="openstack/horizon-84cd78f644-7wncn" Nov 29 07:25:36 crc kubenswrapper[4731]: I1129 07:25:36.146134 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/bf7f6cfb-9c72-4be9-9177-cd14712e1c1e-horizon-secret-key\") pod \"horizon-84cd78f644-7wncn\" (UID: \"bf7f6cfb-9c72-4be9-9177-cd14712e1c1e\") " pod="openstack/horizon-84cd78f644-7wncn" Nov 29 07:25:36 crc kubenswrapper[4731]: I1129 07:25:36.146516 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf7f6cfb-9c72-4be9-9177-cd14712e1c1e-combined-ca-bundle\") pod \"horizon-84cd78f644-7wncn\" (UID: \"bf7f6cfb-9c72-4be9-9177-cd14712e1c1e\") " pod="openstack/horizon-84cd78f644-7wncn" Nov 29 07:25:36 crc kubenswrapper[4731]: I1129 07:25:36.147764 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bf7f6cfb-9c72-4be9-9177-cd14712e1c1e-scripts\") pod \"horizon-84cd78f644-7wncn\" (UID: \"bf7f6cfb-9c72-4be9-9177-cd14712e1c1e\") " pod="openstack/horizon-84cd78f644-7wncn" Nov 29 07:25:36 crc kubenswrapper[4731]: I1129 07:25:36.148531 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf7f6cfb-9c72-4be9-9177-cd14712e1c1e-horizon-tls-certs\") pod \"horizon-84cd78f644-7wncn\" (UID: \"bf7f6cfb-9c72-4be9-9177-cd14712e1c1e\") " pod="openstack/horizon-84cd78f644-7wncn" Nov 29 07:25:36 crc 
kubenswrapper[4731]: I1129 07:25:36.162046 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rkvw\" (UniqueName: \"kubernetes.io/projected/bf7f6cfb-9c72-4be9-9177-cd14712e1c1e-kube-api-access-5rkvw\") pod \"horizon-84cd78f644-7wncn\" (UID: \"bf7f6cfb-9c72-4be9-9177-cd14712e1c1e\") " pod="openstack/horizon-84cd78f644-7wncn" Nov 29 07:25:36 crc kubenswrapper[4731]: I1129 07:25:36.241121 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/3afcf821-ab23-4e13-96e7-2b178314bece-horizon-tls-certs\") pod \"horizon-5fcdbcfb48-gmbcm\" (UID: \"3afcf821-ab23-4e13-96e7-2b178314bece\") " pod="openstack/horizon-5fcdbcfb48-gmbcm" Nov 29 07:25:36 crc kubenswrapper[4731]: I1129 07:25:36.241174 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3afcf821-ab23-4e13-96e7-2b178314bece-combined-ca-bundle\") pod \"horizon-5fcdbcfb48-gmbcm\" (UID: \"3afcf821-ab23-4e13-96e7-2b178314bece\") " pod="openstack/horizon-5fcdbcfb48-gmbcm" Nov 29 07:25:36 crc kubenswrapper[4731]: I1129 07:25:36.241209 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3afcf821-ab23-4e13-96e7-2b178314bece-logs\") pod \"horizon-5fcdbcfb48-gmbcm\" (UID: \"3afcf821-ab23-4e13-96e7-2b178314bece\") " pod="openstack/horizon-5fcdbcfb48-gmbcm" Nov 29 07:25:36 crc kubenswrapper[4731]: I1129 07:25:36.241251 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3afcf821-ab23-4e13-96e7-2b178314bece-config-data\") pod \"horizon-5fcdbcfb48-gmbcm\" (UID: \"3afcf821-ab23-4e13-96e7-2b178314bece\") " pod="openstack/horizon-5fcdbcfb48-gmbcm" Nov 29 07:25:36 crc kubenswrapper[4731]: I1129 07:25:36.241291 4731 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-6nc6l\" (UniqueName: \"kubernetes.io/projected/3afcf821-ab23-4e13-96e7-2b178314bece-kube-api-access-6nc6l\") pod \"horizon-5fcdbcfb48-gmbcm\" (UID: \"3afcf821-ab23-4e13-96e7-2b178314bece\") " pod="openstack/horizon-5fcdbcfb48-gmbcm" Nov 29 07:25:36 crc kubenswrapper[4731]: I1129 07:25:36.241600 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3afcf821-ab23-4e13-96e7-2b178314bece-horizon-secret-key\") pod \"horizon-5fcdbcfb48-gmbcm\" (UID: \"3afcf821-ab23-4e13-96e7-2b178314bece\") " pod="openstack/horizon-5fcdbcfb48-gmbcm" Nov 29 07:25:36 crc kubenswrapper[4731]: I1129 07:25:36.241652 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3afcf821-ab23-4e13-96e7-2b178314bece-scripts\") pod \"horizon-5fcdbcfb48-gmbcm\" (UID: \"3afcf821-ab23-4e13-96e7-2b178314bece\") " pod="openstack/horizon-5fcdbcfb48-gmbcm" Nov 29 07:25:36 crc kubenswrapper[4731]: I1129 07:25:36.242413 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3afcf821-ab23-4e13-96e7-2b178314bece-scripts\") pod \"horizon-5fcdbcfb48-gmbcm\" (UID: \"3afcf821-ab23-4e13-96e7-2b178314bece\") " pod="openstack/horizon-5fcdbcfb48-gmbcm" Nov 29 07:25:36 crc kubenswrapper[4731]: I1129 07:25:36.243881 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3afcf821-ab23-4e13-96e7-2b178314bece-config-data\") pod \"horizon-5fcdbcfb48-gmbcm\" (UID: \"3afcf821-ab23-4e13-96e7-2b178314bece\") " pod="openstack/horizon-5fcdbcfb48-gmbcm" Nov 29 07:25:36 crc kubenswrapper[4731]: I1129 07:25:36.246374 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/3afcf821-ab23-4e13-96e7-2b178314bece-horizon-tls-certs\") pod \"horizon-5fcdbcfb48-gmbcm\" (UID: \"3afcf821-ab23-4e13-96e7-2b178314bece\") " pod="openstack/horizon-5fcdbcfb48-gmbcm" Nov 29 07:25:36 crc kubenswrapper[4731]: I1129 07:25:36.248835 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3afcf821-ab23-4e13-96e7-2b178314bece-logs\") pod \"horizon-5fcdbcfb48-gmbcm\" (UID: \"3afcf821-ab23-4e13-96e7-2b178314bece\") " pod="openstack/horizon-5fcdbcfb48-gmbcm" Nov 29 07:25:36 crc kubenswrapper[4731]: I1129 07:25:36.249975 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3afcf821-ab23-4e13-96e7-2b178314bece-combined-ca-bundle\") pod \"horizon-5fcdbcfb48-gmbcm\" (UID: \"3afcf821-ab23-4e13-96e7-2b178314bece\") " pod="openstack/horizon-5fcdbcfb48-gmbcm" Nov 29 07:25:36 crc kubenswrapper[4731]: I1129 07:25:36.250399 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-84cd78f644-7wncn" Nov 29 07:25:36 crc kubenswrapper[4731]: I1129 07:25:36.259909 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3afcf821-ab23-4e13-96e7-2b178314bece-horizon-secret-key\") pod \"horizon-5fcdbcfb48-gmbcm\" (UID: \"3afcf821-ab23-4e13-96e7-2b178314bece\") " pod="openstack/horizon-5fcdbcfb48-gmbcm" Nov 29 07:25:36 crc kubenswrapper[4731]: I1129 07:25:36.265257 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nc6l\" (UniqueName: \"kubernetes.io/projected/3afcf821-ab23-4e13-96e7-2b178314bece-kube-api-access-6nc6l\") pod \"horizon-5fcdbcfb48-gmbcm\" (UID: \"3afcf821-ab23-4e13-96e7-2b178314bece\") " pod="openstack/horizon-5fcdbcfb48-gmbcm" Nov 29 07:25:36 crc kubenswrapper[4731]: I1129 07:25:36.359433 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5fcdbcfb48-gmbcm" Nov 29 07:25:38 crc kubenswrapper[4731]: I1129 07:25:38.365229 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-56df8fb6b7-hvz7k" Nov 29 07:25:38 crc kubenswrapper[4731]: I1129 07:25:38.445977 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-r66tt"] Nov 29 07:25:38 crc kubenswrapper[4731]: I1129 07:25:38.446199 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b8fbc5445-r66tt" podUID="4716973e-a6ae-4baf-bb88-5436489c5451" containerName="dnsmasq-dns" containerID="cri-o://28c697f4aec0a9f9c8e62a1117fd7812fd31deb9dbfa342f47843f946a9be410" gracePeriod=10 Nov 29 07:25:39 crc kubenswrapper[4731]: I1129 07:25:39.795845 4731 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-r66tt" podUID="4716973e-a6ae-4baf-bb88-5436489c5451" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.112:5353: connect: connection refused" Nov 29 07:25:40 crc kubenswrapper[4731]: I1129 07:25:40.861437 4731 generic.go:334] "Generic (PLEG): container finished" podID="4716973e-a6ae-4baf-bb88-5436489c5451" containerID="28c697f4aec0a9f9c8e62a1117fd7812fd31deb9dbfa342f47843f946a9be410" exitCode=0 Nov 29 07:25:40 crc kubenswrapper[4731]: I1129 07:25:40.861529 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-r66tt" event={"ID":"4716973e-a6ae-4baf-bb88-5436489c5451","Type":"ContainerDied","Data":"28c697f4aec0a9f9c8e62a1117fd7812fd31deb9dbfa342f47843f946a9be410"} Nov 29 07:25:44 crc kubenswrapper[4731]: I1129 07:25:44.796364 4731 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-r66tt" podUID="4716973e-a6ae-4baf-bb88-5436489c5451" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.112:5353: connect: connection refused" Nov 29 
07:25:49 crc kubenswrapper[4731]: I1129 07:25:49.795702 4731 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-r66tt" podUID="4716973e-a6ae-4baf-bb88-5436489c5451" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.112:5353: connect: connection refused" Nov 29 07:25:49 crc kubenswrapper[4731]: I1129 07:25:49.796396 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-r66tt" Nov 29 07:25:51 crc kubenswrapper[4731]: E1129 07:25:51.043903 4731 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Nov 29 07:25:51 crc kubenswrapper[4731]: E1129 07:25:51.044626 4731 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n8bh684h5b9hd5h698h67bhb8h57dh5c4h89hf9h84h65chdbhb8h6fh76h68bh66h74h65fh5c9h554h8ch68ch5c6h55h98hbdh64h687hf6q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ntf4g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-676b8dc849-9dqb8_openstack(b7e2529e-00f9-4933-a8f3-4fdc9f8c498f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 07:25:51 crc kubenswrapper[4731]: E1129 07:25:51.047880 
4731 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-676b8dc849-9dqb8" podUID="b7e2529e-00f9-4933-a8f3-4fdc9f8c498f" Nov 29 07:25:52 crc kubenswrapper[4731]: E1129 07:25:52.885693 4731 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api:current-podified" Nov 29 07:25:52 crc kubenswrapper[4731]: E1129 07:25:52.887212 4731 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hd5r7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
placement-db-sync-x6bxr_openstack(13bcd648-c6e2-4b6e-a660-da2f47f09a06): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 07:25:52 crc kubenswrapper[4731]: E1129 07:25:52.888474 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-x6bxr" podUID="13bcd648-c6e2-4b6e-a660-da2f47f09a06" Nov 29 07:25:52 crc kubenswrapper[4731]: I1129 07:25:52.988077 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-676b8dc849-9dqb8" Nov 29 07:25:53 crc kubenswrapper[4731]: I1129 07:25:53.003831 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 29 07:25:53 crc kubenswrapper[4731]: I1129 07:25:53.005202 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-676b8dc849-9dqb8" event={"ID":"b7e2529e-00f9-4933-a8f3-4fdc9f8c498f","Type":"ContainerDied","Data":"c5d5f11cf7c7d40f65476ebc829b3617bd989fd19db46f5947a9aa057c807875"} Nov 29 07:25:53 crc kubenswrapper[4731]: I1129 07:25:53.005355 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-676b8dc849-9dqb8" Nov 29 07:25:53 crc kubenswrapper[4731]: I1129 07:25:53.016162 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 29 07:25:53 crc kubenswrapper[4731]: I1129 07:25:53.016689 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f","Type":"ContainerDied","Data":"d1edfc5259d3c6d11de248be0dca39c96070cfe36e6f0ea52db3b70c7ac76b6a"} Nov 29 07:25:53 crc kubenswrapper[4731]: I1129 07:25:53.016756 4731 scope.go:117] "RemoveContainer" containerID="cd3d1d4d3d977a785c235b96341e9713c77ff855fc1998a86804ef4ea93e3c2b" Nov 29 07:25:53 crc kubenswrapper[4731]: E1129 07:25:53.019614 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api:current-podified\\\"\"" pod="openstack/placement-db-sync-x6bxr" podUID="13bcd648-c6e2-4b6e-a660-da2f47f09a06" Nov 29 07:25:53 crc kubenswrapper[4731]: I1129 07:25:53.030785 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f-config-data\") pod \"f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f\" (UID: \"f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f\") " Nov 29 07:25:53 crc kubenswrapper[4731]: I1129 07:25:53.030855 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f-internal-tls-certs\") pod \"f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f\" (UID: \"f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f\") " Nov 29 07:25:53 crc kubenswrapper[4731]: I1129 07:25:53.030932 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f-combined-ca-bundle\") pod \"f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f\" (UID: 
\"f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f\") " Nov 29 07:25:53 crc kubenswrapper[4731]: I1129 07:25:53.031013 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f-httpd-run\") pod \"f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f\" (UID: \"f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f\") " Nov 29 07:25:53 crc kubenswrapper[4731]: I1129 07:25:53.031061 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b7e2529e-00f9-4933-a8f3-4fdc9f8c498f-config-data\") pod \"b7e2529e-00f9-4933-a8f3-4fdc9f8c498f\" (UID: \"b7e2529e-00f9-4933-a8f3-4fdc9f8c498f\") " Nov 29 07:25:53 crc kubenswrapper[4731]: I1129 07:25:53.031145 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b7e2529e-00f9-4933-a8f3-4fdc9f8c498f-horizon-secret-key\") pod \"b7e2529e-00f9-4933-a8f3-4fdc9f8c498f\" (UID: \"b7e2529e-00f9-4933-a8f3-4fdc9f8c498f\") " Nov 29 07:25:53 crc kubenswrapper[4731]: I1129 07:25:53.031197 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tcw6z\" (UniqueName: \"kubernetes.io/projected/f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f-kube-api-access-tcw6z\") pod \"f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f\" (UID: \"f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f\") " Nov 29 07:25:53 crc kubenswrapper[4731]: I1129 07:25:53.031236 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f-logs\") pod \"f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f\" (UID: \"f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f\") " Nov 29 07:25:53 crc kubenswrapper[4731]: I1129 07:25:53.031274 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage03-crc\") pod \"f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f\" (UID: \"f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f\") " Nov 29 07:25:53 crc kubenswrapper[4731]: I1129 07:25:53.031296 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ntf4g\" (UniqueName: \"kubernetes.io/projected/b7e2529e-00f9-4933-a8f3-4fdc9f8c498f-kube-api-access-ntf4g\") pod \"b7e2529e-00f9-4933-a8f3-4fdc9f8c498f\" (UID: \"b7e2529e-00f9-4933-a8f3-4fdc9f8c498f\") " Nov 29 07:25:53 crc kubenswrapper[4731]: I1129 07:25:53.031330 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b7e2529e-00f9-4933-a8f3-4fdc9f8c498f-logs\") pod \"b7e2529e-00f9-4933-a8f3-4fdc9f8c498f\" (UID: \"b7e2529e-00f9-4933-a8f3-4fdc9f8c498f\") " Nov 29 07:25:53 crc kubenswrapper[4731]: I1129 07:25:53.031363 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b7e2529e-00f9-4933-a8f3-4fdc9f8c498f-scripts\") pod \"b7e2529e-00f9-4933-a8f3-4fdc9f8c498f\" (UID: \"b7e2529e-00f9-4933-a8f3-4fdc9f8c498f\") " Nov 29 07:25:53 crc kubenswrapper[4731]: I1129 07:25:53.031418 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f-scripts\") pod \"f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f\" (UID: \"f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f\") " Nov 29 07:25:53 crc kubenswrapper[4731]: I1129 07:25:53.034044 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b7e2529e-00f9-4933-a8f3-4fdc9f8c498f-logs" (OuterVolumeSpecName: "logs") pod "b7e2529e-00f9-4933-a8f3-4fdc9f8c498f" (UID: "b7e2529e-00f9-4933-a8f3-4fdc9f8c498f"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:25:53 crc kubenswrapper[4731]: I1129 07:25:53.034872 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7e2529e-00f9-4933-a8f3-4fdc9f8c498f-config-data" (OuterVolumeSpecName: "config-data") pod "b7e2529e-00f9-4933-a8f3-4fdc9f8c498f" (UID: "b7e2529e-00f9-4933-a8f3-4fdc9f8c498f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:25:53 crc kubenswrapper[4731]: I1129 07:25:53.035120 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f" (UID: "f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:25:53 crc kubenswrapper[4731]: I1129 07:25:53.037100 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f-logs" (OuterVolumeSpecName: "logs") pod "f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f" (UID: "f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:25:53 crc kubenswrapper[4731]: I1129 07:25:53.037600 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7e2529e-00f9-4933-a8f3-4fdc9f8c498f-scripts" (OuterVolumeSpecName: "scripts") pod "b7e2529e-00f9-4933-a8f3-4fdc9f8c498f" (UID: "b7e2529e-00f9-4933-a8f3-4fdc9f8c498f"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:25:53 crc kubenswrapper[4731]: I1129 07:25:53.042317 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f" (UID: "f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 29 07:25:53 crc kubenswrapper[4731]: I1129 07:25:53.043922 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7e2529e-00f9-4933-a8f3-4fdc9f8c498f-kube-api-access-ntf4g" (OuterVolumeSpecName: "kube-api-access-ntf4g") pod "b7e2529e-00f9-4933-a8f3-4fdc9f8c498f" (UID: "b7e2529e-00f9-4933-a8f3-4fdc9f8c498f"). InnerVolumeSpecName "kube-api-access-ntf4g". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:25:53 crc kubenswrapper[4731]: I1129 07:25:53.044863 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f-kube-api-access-tcw6z" (OuterVolumeSpecName: "kube-api-access-tcw6z") pod "f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f" (UID: "f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f"). InnerVolumeSpecName "kube-api-access-tcw6z". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:25:53 crc kubenswrapper[4731]: I1129 07:25:53.046701 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7e2529e-00f9-4933-a8f3-4fdc9f8c498f-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "b7e2529e-00f9-4933-a8f3-4fdc9f8c498f" (UID: "b7e2529e-00f9-4933-a8f3-4fdc9f8c498f"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:25:53 crc kubenswrapper[4731]: I1129 07:25:53.077374 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f-scripts" (OuterVolumeSpecName: "scripts") pod "f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f" (UID: "f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:25:53 crc kubenswrapper[4731]: I1129 07:25:53.082367 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f" (UID: "f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:25:53 crc kubenswrapper[4731]: I1129 07:25:53.116223 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f-config-data" (OuterVolumeSpecName: "config-data") pod "f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f" (UID: "f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:25:53 crc kubenswrapper[4731]: I1129 07:25:53.134168 4731 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:53 crc kubenswrapper[4731]: I1129 07:25:53.134220 4731 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:53 crc kubenswrapper[4731]: I1129 07:25:53.134236 4731 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:53 crc kubenswrapper[4731]: I1129 07:25:53.134247 4731 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b7e2529e-00f9-4933-a8f3-4fdc9f8c498f-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:53 crc kubenswrapper[4731]: I1129 07:25:53.134256 4731 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b7e2529e-00f9-4933-a8f3-4fdc9f8c498f-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:53 crc kubenswrapper[4731]: I1129 07:25:53.134265 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tcw6z\" (UniqueName: \"kubernetes.io/projected/f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f-kube-api-access-tcw6z\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:53 crc kubenswrapper[4731]: I1129 07:25:53.134279 4731 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f-logs\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:53 crc kubenswrapper[4731]: I1129 07:25:53.134317 4731 
reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Nov 29 07:25:53 crc kubenswrapper[4731]: I1129 07:25:53.134327 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ntf4g\" (UniqueName: \"kubernetes.io/projected/b7e2529e-00f9-4933-a8f3-4fdc9f8c498f-kube-api-access-ntf4g\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:53 crc kubenswrapper[4731]: I1129 07:25:53.134337 4731 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b7e2529e-00f9-4933-a8f3-4fdc9f8c498f-logs\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:53 crc kubenswrapper[4731]: I1129 07:25:53.134349 4731 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b7e2529e-00f9-4933-a8f3-4fdc9f8c498f-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:53 crc kubenswrapper[4731]: I1129 07:25:53.134357 4731 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:53 crc kubenswrapper[4731]: I1129 07:25:53.153630 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f" (UID: "f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:25:53 crc kubenswrapper[4731]: I1129 07:25:53.159846 4731 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Nov 29 07:25:53 crc kubenswrapper[4731]: E1129 07:25:53.230093 4731 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Nov 29 07:25:53 crc kubenswrapper[4731]: E1129 07:25:53.230625 4731 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5b4h68dh696h5bbh5fh64dh66dh558h58dhfhddh4h567hffh85hfbh69h595h687hbfh65hd8h675h5fbh586h58fh654h58bh5d5h565h589h5d4q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rtrhq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,Termin
ationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-7c4c555987-wx7mp_openstack(1f57a857-cd62-458f-9a5f-1451ff9d5628): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 07:25:53 crc kubenswrapper[4731]: E1129 07:25:53.233290 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-7c4c555987-wx7mp" podUID="1f57a857-cd62-458f-9a5f-1451ff9d5628" Nov 29 07:25:53 crc kubenswrapper[4731]: I1129 07:25:53.236620 4731 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:53 crc kubenswrapper[4731]: I1129 07:25:53.236678 4731 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:53 crc kubenswrapper[4731]: I1129 07:25:53.368303 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 29 07:25:53 crc 
kubenswrapper[4731]: I1129 07:25:53.395518 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 29 07:25:53 crc kubenswrapper[4731]: I1129 07:25:53.425575 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 29 07:25:53 crc kubenswrapper[4731]: E1129 07:25:53.426062 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f" containerName="glance-httpd" Nov 29 07:25:53 crc kubenswrapper[4731]: I1129 07:25:53.426081 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f" containerName="glance-httpd" Nov 29 07:25:53 crc kubenswrapper[4731]: E1129 07:25:53.426099 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f" containerName="glance-log" Nov 29 07:25:53 crc kubenswrapper[4731]: I1129 07:25:53.426106 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f" containerName="glance-log" Nov 29 07:25:53 crc kubenswrapper[4731]: I1129 07:25:53.426299 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f" containerName="glance-log" Nov 29 07:25:53 crc kubenswrapper[4731]: I1129 07:25:53.426319 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f" containerName="glance-httpd" Nov 29 07:25:53 crc kubenswrapper[4731]: I1129 07:25:53.427366 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 29 07:25:53 crc kubenswrapper[4731]: I1129 07:25:53.436132 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 29 07:25:53 crc kubenswrapper[4731]: I1129 07:25:53.440108 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 29 07:25:53 crc kubenswrapper[4731]: I1129 07:25:53.443927 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8mhm\" (UniqueName: \"kubernetes.io/projected/d4a309b6-ff09-434a-8d65-9dd888a25dab-kube-api-access-v8mhm\") pod \"glance-default-internal-api-0\" (UID: \"d4a309b6-ff09-434a-8d65-9dd888a25dab\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:25:53 crc kubenswrapper[4731]: I1129 07:25:53.444006 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4a309b6-ff09-434a-8d65-9dd888a25dab-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d4a309b6-ff09-434a-8d65-9dd888a25dab\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:25:53 crc kubenswrapper[4731]: I1129 07:25:53.444054 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4a309b6-ff09-434a-8d65-9dd888a25dab-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d4a309b6-ff09-434a-8d65-9dd888a25dab\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:25:53 crc kubenswrapper[4731]: I1129 07:25:53.444157 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4a309b6-ff09-434a-8d65-9dd888a25dab-logs\") pod \"glance-default-internal-api-0\" (UID: 
\"d4a309b6-ff09-434a-8d65-9dd888a25dab\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:25:53 crc kubenswrapper[4731]: I1129 07:25:53.444205 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4a309b6-ff09-434a-8d65-9dd888a25dab-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d4a309b6-ff09-434a-8d65-9dd888a25dab\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:25:53 crc kubenswrapper[4731]: I1129 07:25:53.444241 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"d4a309b6-ff09-434a-8d65-9dd888a25dab\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:25:53 crc kubenswrapper[4731]: I1129 07:25:53.444306 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d4a309b6-ff09-434a-8d65-9dd888a25dab-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d4a309b6-ff09-434a-8d65-9dd888a25dab\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:25:53 crc kubenswrapper[4731]: I1129 07:25:53.444370 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4a309b6-ff09-434a-8d65-9dd888a25dab-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d4a309b6-ff09-434a-8d65-9dd888a25dab\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:25:53 crc kubenswrapper[4731]: I1129 07:25:53.463291 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-676b8dc849-9dqb8"] Nov 29 07:25:53 crc kubenswrapper[4731]: I1129 07:25:53.472852 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/horizon-676b8dc849-9dqb8"] Nov 29 07:25:53 crc kubenswrapper[4731]: I1129 07:25:53.486713 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 29 07:25:53 crc kubenswrapper[4731]: I1129 07:25:53.546172 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8mhm\" (UniqueName: \"kubernetes.io/projected/d4a309b6-ff09-434a-8d65-9dd888a25dab-kube-api-access-v8mhm\") pod \"glance-default-internal-api-0\" (UID: \"d4a309b6-ff09-434a-8d65-9dd888a25dab\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:25:53 crc kubenswrapper[4731]: I1129 07:25:53.546262 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4a309b6-ff09-434a-8d65-9dd888a25dab-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d4a309b6-ff09-434a-8d65-9dd888a25dab\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:25:53 crc kubenswrapper[4731]: I1129 07:25:53.546294 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4a309b6-ff09-434a-8d65-9dd888a25dab-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d4a309b6-ff09-434a-8d65-9dd888a25dab\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:25:53 crc kubenswrapper[4731]: I1129 07:25:53.546341 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4a309b6-ff09-434a-8d65-9dd888a25dab-logs\") pod \"glance-default-internal-api-0\" (UID: \"d4a309b6-ff09-434a-8d65-9dd888a25dab\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:25:53 crc kubenswrapper[4731]: I1129 07:25:53.546374 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/d4a309b6-ff09-434a-8d65-9dd888a25dab-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d4a309b6-ff09-434a-8d65-9dd888a25dab\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:25:53 crc kubenswrapper[4731]: I1129 07:25:53.546405 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"d4a309b6-ff09-434a-8d65-9dd888a25dab\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:25:53 crc kubenswrapper[4731]: I1129 07:25:53.546459 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d4a309b6-ff09-434a-8d65-9dd888a25dab-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d4a309b6-ff09-434a-8d65-9dd888a25dab\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:25:53 crc kubenswrapper[4731]: I1129 07:25:53.546508 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4a309b6-ff09-434a-8d65-9dd888a25dab-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d4a309b6-ff09-434a-8d65-9dd888a25dab\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:25:53 crc kubenswrapper[4731]: I1129 07:25:53.547692 4731 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"d4a309b6-ff09-434a-8d65-9dd888a25dab\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-internal-api-0" Nov 29 07:25:53 crc kubenswrapper[4731]: I1129 07:25:53.548052 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d4a309b6-ff09-434a-8d65-9dd888a25dab-httpd-run\") 
pod \"glance-default-internal-api-0\" (UID: \"d4a309b6-ff09-434a-8d65-9dd888a25dab\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:25:53 crc kubenswrapper[4731]: I1129 07:25:53.548039 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4a309b6-ff09-434a-8d65-9dd888a25dab-logs\") pod \"glance-default-internal-api-0\" (UID: \"d4a309b6-ff09-434a-8d65-9dd888a25dab\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:25:53 crc kubenswrapper[4731]: I1129 07:25:53.552692 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4a309b6-ff09-434a-8d65-9dd888a25dab-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d4a309b6-ff09-434a-8d65-9dd888a25dab\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:25:53 crc kubenswrapper[4731]: I1129 07:25:53.553940 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4a309b6-ff09-434a-8d65-9dd888a25dab-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d4a309b6-ff09-434a-8d65-9dd888a25dab\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:25:53 crc kubenswrapper[4731]: I1129 07:25:53.554516 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4a309b6-ff09-434a-8d65-9dd888a25dab-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d4a309b6-ff09-434a-8d65-9dd888a25dab\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:25:53 crc kubenswrapper[4731]: I1129 07:25:53.557436 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4a309b6-ff09-434a-8d65-9dd888a25dab-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d4a309b6-ff09-434a-8d65-9dd888a25dab\") " 
pod="openstack/glance-default-internal-api-0" Nov 29 07:25:53 crc kubenswrapper[4731]: I1129 07:25:53.578682 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8mhm\" (UniqueName: \"kubernetes.io/projected/d4a309b6-ff09-434a-8d65-9dd888a25dab-kube-api-access-v8mhm\") pod \"glance-default-internal-api-0\" (UID: \"d4a309b6-ff09-434a-8d65-9dd888a25dab\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:25:53 crc kubenswrapper[4731]: I1129 07:25:53.601173 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"d4a309b6-ff09-434a-8d65-9dd888a25dab\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:25:53 crc kubenswrapper[4731]: I1129 07:25:53.770911 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 29 07:25:53 crc kubenswrapper[4731]: I1129 07:25:53.818284 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7e2529e-00f9-4933-a8f3-4fdc9f8c498f" path="/var/lib/kubelet/pods/b7e2529e-00f9-4933-a8f3-4fdc9f8c498f/volumes" Nov 29 07:25:53 crc kubenswrapper[4731]: I1129 07:25:53.818861 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f" path="/var/lib/kubelet/pods/f33d2f2a-69bd-48c4-8d3b-b2f1e0b9d87f/volumes" Nov 29 07:25:53 crc kubenswrapper[4731]: E1129 07:25:53.912081 4731 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Nov 29 07:25:53 crc kubenswrapper[4731]: E1129 07:25:53.912388 4731 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dk5q5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-fbk9s_openstack(a4589d89-a761-4510-bd4c-55a6a3e620c4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 07:25:53 crc kubenswrapper[4731]: E1129 07:25:53.914026 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-fbk9s" podUID="a4589d89-a761-4510-bd4c-55a6a3e620c4" Nov 29 07:25:53 crc kubenswrapper[4731]: I1129 07:25:53.977410 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-kb6sz" Nov 29 07:25:55 crc kubenswrapper[4731]: I1129 07:25:54.042526 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-kb6sz" event={"ID":"38f8e9cf-be31-447d-9e2f-0efad4bc3703","Type":"ContainerDied","Data":"259082b715a29211f8f9408a3e5123b67c7c6fdc87db56f7cf151e2caf20ff02"} Nov 29 07:25:55 crc kubenswrapper[4731]: I1129 07:25:54.043289 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="259082b715a29211f8f9408a3e5123b67c7c6fdc87db56f7cf151e2caf20ff02" Nov 29 07:25:55 crc kubenswrapper[4731]: I1129 07:25:54.043248 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-kb6sz" Nov 29 07:25:55 crc kubenswrapper[4731]: E1129 07:25:54.044653 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-fbk9s" podUID="a4589d89-a761-4510-bd4c-55a6a3e620c4" Nov 29 07:25:55 crc kubenswrapper[4731]: I1129 07:25:54.072455 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38f8e9cf-be31-447d-9e2f-0efad4bc3703-config-data\") pod \"38f8e9cf-be31-447d-9e2f-0efad4bc3703\" (UID: \"38f8e9cf-be31-447d-9e2f-0efad4bc3703\") " Nov 29 07:25:55 crc kubenswrapper[4731]: I1129 07:25:54.072637 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/38f8e9cf-be31-447d-9e2f-0efad4bc3703-fernet-keys\") pod \"38f8e9cf-be31-447d-9e2f-0efad4bc3703\" (UID: \"38f8e9cf-be31-447d-9e2f-0efad4bc3703\") " Nov 29 07:25:55 crc kubenswrapper[4731]: I1129 07:25:54.072821 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/38f8e9cf-be31-447d-9e2f-0efad4bc3703-credential-keys\") pod \"38f8e9cf-be31-447d-9e2f-0efad4bc3703\" (UID: \"38f8e9cf-be31-447d-9e2f-0efad4bc3703\") " Nov 29 07:25:55 crc kubenswrapper[4731]: I1129 07:25:54.074291 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38f8e9cf-be31-447d-9e2f-0efad4bc3703-scripts\") pod \"38f8e9cf-be31-447d-9e2f-0efad4bc3703\" (UID: \"38f8e9cf-be31-447d-9e2f-0efad4bc3703\") " Nov 29 07:25:55 crc kubenswrapper[4731]: I1129 07:25:54.074317 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38f8e9cf-be31-447d-9e2f-0efad4bc3703-combined-ca-bundle\") pod \"38f8e9cf-be31-447d-9e2f-0efad4bc3703\" (UID: \"38f8e9cf-be31-447d-9e2f-0efad4bc3703\") " Nov 29 07:25:55 crc kubenswrapper[4731]: I1129 07:25:54.074552 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8j9fz\" (UniqueName: \"kubernetes.io/projected/38f8e9cf-be31-447d-9e2f-0efad4bc3703-kube-api-access-8j9fz\") pod \"38f8e9cf-be31-447d-9e2f-0efad4bc3703\" (UID: \"38f8e9cf-be31-447d-9e2f-0efad4bc3703\") " Nov 29 07:25:55 crc kubenswrapper[4731]: I1129 07:25:54.080751 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38f8e9cf-be31-447d-9e2f-0efad4bc3703-scripts" (OuterVolumeSpecName: "scripts") pod "38f8e9cf-be31-447d-9e2f-0efad4bc3703" (UID: "38f8e9cf-be31-447d-9e2f-0efad4bc3703"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:25:55 crc kubenswrapper[4731]: I1129 07:25:54.090124 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38f8e9cf-be31-447d-9e2f-0efad4bc3703-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "38f8e9cf-be31-447d-9e2f-0efad4bc3703" (UID: "38f8e9cf-be31-447d-9e2f-0efad4bc3703"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:25:55 crc kubenswrapper[4731]: I1129 07:25:54.102258 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38f8e9cf-be31-447d-9e2f-0efad4bc3703-kube-api-access-8j9fz" (OuterVolumeSpecName: "kube-api-access-8j9fz") pod "38f8e9cf-be31-447d-9e2f-0efad4bc3703" (UID: "38f8e9cf-be31-447d-9e2f-0efad4bc3703"). InnerVolumeSpecName "kube-api-access-8j9fz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:25:55 crc kubenswrapper[4731]: I1129 07:25:54.107344 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38f8e9cf-be31-447d-9e2f-0efad4bc3703-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "38f8e9cf-be31-447d-9e2f-0efad4bc3703" (UID: "38f8e9cf-be31-447d-9e2f-0efad4bc3703"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:25:55 crc kubenswrapper[4731]: I1129 07:25:54.138328 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38f8e9cf-be31-447d-9e2f-0efad4bc3703-config-data" (OuterVolumeSpecName: "config-data") pod "38f8e9cf-be31-447d-9e2f-0efad4bc3703" (UID: "38f8e9cf-be31-447d-9e2f-0efad4bc3703"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:25:55 crc kubenswrapper[4731]: I1129 07:25:54.142021 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38f8e9cf-be31-447d-9e2f-0efad4bc3703-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "38f8e9cf-be31-447d-9e2f-0efad4bc3703" (UID: "38f8e9cf-be31-447d-9e2f-0efad4bc3703"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:25:55 crc kubenswrapper[4731]: I1129 07:25:54.180511 4731 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/38f8e9cf-be31-447d-9e2f-0efad4bc3703-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:55 crc kubenswrapper[4731]: I1129 07:25:54.180553 4731 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38f8e9cf-be31-447d-9e2f-0efad4bc3703-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:55 crc kubenswrapper[4731]: I1129 07:25:54.180663 4731 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38f8e9cf-be31-447d-9e2f-0efad4bc3703-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:55 crc kubenswrapper[4731]: I1129 07:25:54.180681 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8j9fz\" (UniqueName: \"kubernetes.io/projected/38f8e9cf-be31-447d-9e2f-0efad4bc3703-kube-api-access-8j9fz\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:55 crc kubenswrapper[4731]: I1129 07:25:54.180695 4731 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38f8e9cf-be31-447d-9e2f-0efad4bc3703-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:55 crc kubenswrapper[4731]: I1129 07:25:54.180722 4731 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/38f8e9cf-be31-447d-9e2f-0efad4bc3703-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:55 crc kubenswrapper[4731]: I1129 07:25:55.071441 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-kb6sz"] Nov 29 07:25:55 crc kubenswrapper[4731]: I1129 07:25:55.081545 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-kb6sz"] Nov 29 07:25:55 crc 
kubenswrapper[4731]: I1129 07:25:55.172929 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-j6pdq"] Nov 29 07:25:55 crc kubenswrapper[4731]: E1129 07:25:55.173364 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38f8e9cf-be31-447d-9e2f-0efad4bc3703" containerName="keystone-bootstrap" Nov 29 07:25:55 crc kubenswrapper[4731]: I1129 07:25:55.173380 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="38f8e9cf-be31-447d-9e2f-0efad4bc3703" containerName="keystone-bootstrap" Nov 29 07:25:55 crc kubenswrapper[4731]: I1129 07:25:55.173558 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="38f8e9cf-be31-447d-9e2f-0efad4bc3703" containerName="keystone-bootstrap" Nov 29 07:25:55 crc kubenswrapper[4731]: I1129 07:25:55.174182 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-j6pdq" Nov 29 07:25:55 crc kubenswrapper[4731]: I1129 07:25:55.176832 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 29 07:25:55 crc kubenswrapper[4731]: I1129 07:25:55.176979 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 29 07:25:55 crc kubenswrapper[4731]: I1129 07:25:55.177127 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-4wnsc" Nov 29 07:25:55 crc kubenswrapper[4731]: I1129 07:25:55.180065 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 29 07:25:55 crc kubenswrapper[4731]: I1129 07:25:55.181598 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 29 07:25:55 crc kubenswrapper[4731]: I1129 07:25:55.210075 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/dbe696e8-b9af-4710-a81f-4fb69481cf3b-config-data\") pod \"keystone-bootstrap-j6pdq\" (UID: \"dbe696e8-b9af-4710-a81f-4fb69481cf3b\") " pod="openstack/keystone-bootstrap-j6pdq" Nov 29 07:25:55 crc kubenswrapper[4731]: I1129 07:25:55.210134 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/dbe696e8-b9af-4710-a81f-4fb69481cf3b-fernet-keys\") pod \"keystone-bootstrap-j6pdq\" (UID: \"dbe696e8-b9af-4710-a81f-4fb69481cf3b\") " pod="openstack/keystone-bootstrap-j6pdq" Nov 29 07:25:55 crc kubenswrapper[4731]: I1129 07:25:55.210288 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dbe696e8-b9af-4710-a81f-4fb69481cf3b-scripts\") pod \"keystone-bootstrap-j6pdq\" (UID: \"dbe696e8-b9af-4710-a81f-4fb69481cf3b\") " pod="openstack/keystone-bootstrap-j6pdq" Nov 29 07:25:55 crc kubenswrapper[4731]: I1129 07:25:55.210316 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbe696e8-b9af-4710-a81f-4fb69481cf3b-combined-ca-bundle\") pod \"keystone-bootstrap-j6pdq\" (UID: \"dbe696e8-b9af-4710-a81f-4fb69481cf3b\") " pod="openstack/keystone-bootstrap-j6pdq" Nov 29 07:25:55 crc kubenswrapper[4731]: I1129 07:25:55.210362 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9sb2\" (UniqueName: \"kubernetes.io/projected/dbe696e8-b9af-4710-a81f-4fb69481cf3b-kube-api-access-f9sb2\") pod \"keystone-bootstrap-j6pdq\" (UID: \"dbe696e8-b9af-4710-a81f-4fb69481cf3b\") " pod="openstack/keystone-bootstrap-j6pdq" Nov 29 07:25:55 crc kubenswrapper[4731]: I1129 07:25:55.210396 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/dbe696e8-b9af-4710-a81f-4fb69481cf3b-credential-keys\") pod \"keystone-bootstrap-j6pdq\" (UID: \"dbe696e8-b9af-4710-a81f-4fb69481cf3b\") " pod="openstack/keystone-bootstrap-j6pdq" Nov 29 07:25:55 crc kubenswrapper[4731]: I1129 07:25:55.245985 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-j6pdq"] Nov 29 07:25:55 crc kubenswrapper[4731]: I1129 07:25:55.313536 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dbe696e8-b9af-4710-a81f-4fb69481cf3b-scripts\") pod \"keystone-bootstrap-j6pdq\" (UID: \"dbe696e8-b9af-4710-a81f-4fb69481cf3b\") " pod="openstack/keystone-bootstrap-j6pdq" Nov 29 07:25:55 crc kubenswrapper[4731]: I1129 07:25:55.314620 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbe696e8-b9af-4710-a81f-4fb69481cf3b-combined-ca-bundle\") pod \"keystone-bootstrap-j6pdq\" (UID: \"dbe696e8-b9af-4710-a81f-4fb69481cf3b\") " pod="openstack/keystone-bootstrap-j6pdq" Nov 29 07:25:55 crc kubenswrapper[4731]: I1129 07:25:55.314732 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9sb2\" (UniqueName: \"kubernetes.io/projected/dbe696e8-b9af-4710-a81f-4fb69481cf3b-kube-api-access-f9sb2\") pod \"keystone-bootstrap-j6pdq\" (UID: \"dbe696e8-b9af-4710-a81f-4fb69481cf3b\") " pod="openstack/keystone-bootstrap-j6pdq" Nov 29 07:25:55 crc kubenswrapper[4731]: I1129 07:25:55.314798 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/dbe696e8-b9af-4710-a81f-4fb69481cf3b-credential-keys\") pod \"keystone-bootstrap-j6pdq\" (UID: \"dbe696e8-b9af-4710-a81f-4fb69481cf3b\") " pod="openstack/keystone-bootstrap-j6pdq" Nov 29 07:25:55 crc kubenswrapper[4731]: I1129 07:25:55.314910 4731 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbe696e8-b9af-4710-a81f-4fb69481cf3b-config-data\") pod \"keystone-bootstrap-j6pdq\" (UID: \"dbe696e8-b9af-4710-a81f-4fb69481cf3b\") " pod="openstack/keystone-bootstrap-j6pdq" Nov 29 07:25:55 crc kubenswrapper[4731]: I1129 07:25:55.314955 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/dbe696e8-b9af-4710-a81f-4fb69481cf3b-fernet-keys\") pod \"keystone-bootstrap-j6pdq\" (UID: \"dbe696e8-b9af-4710-a81f-4fb69481cf3b\") " pod="openstack/keystone-bootstrap-j6pdq" Nov 29 07:25:55 crc kubenswrapper[4731]: I1129 07:25:55.325709 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/dbe696e8-b9af-4710-a81f-4fb69481cf3b-credential-keys\") pod \"keystone-bootstrap-j6pdq\" (UID: \"dbe696e8-b9af-4710-a81f-4fb69481cf3b\") " pod="openstack/keystone-bootstrap-j6pdq" Nov 29 07:25:55 crc kubenswrapper[4731]: I1129 07:25:55.325944 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/dbe696e8-b9af-4710-a81f-4fb69481cf3b-fernet-keys\") pod \"keystone-bootstrap-j6pdq\" (UID: \"dbe696e8-b9af-4710-a81f-4fb69481cf3b\") " pod="openstack/keystone-bootstrap-j6pdq" Nov 29 07:25:55 crc kubenswrapper[4731]: I1129 07:25:55.326014 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbe696e8-b9af-4710-a81f-4fb69481cf3b-combined-ca-bundle\") pod \"keystone-bootstrap-j6pdq\" (UID: \"dbe696e8-b9af-4710-a81f-4fb69481cf3b\") " pod="openstack/keystone-bootstrap-j6pdq" Nov 29 07:25:55 crc kubenswrapper[4731]: I1129 07:25:55.326710 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/dbe696e8-b9af-4710-a81f-4fb69481cf3b-config-data\") pod \"keystone-bootstrap-j6pdq\" (UID: \"dbe696e8-b9af-4710-a81f-4fb69481cf3b\") " pod="openstack/keystone-bootstrap-j6pdq" Nov 29 07:25:55 crc kubenswrapper[4731]: I1129 07:25:55.330260 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dbe696e8-b9af-4710-a81f-4fb69481cf3b-scripts\") pod \"keystone-bootstrap-j6pdq\" (UID: \"dbe696e8-b9af-4710-a81f-4fb69481cf3b\") " pod="openstack/keystone-bootstrap-j6pdq" Nov 29 07:25:55 crc kubenswrapper[4731]: I1129 07:25:55.331143 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9sb2\" (UniqueName: \"kubernetes.io/projected/dbe696e8-b9af-4710-a81f-4fb69481cf3b-kube-api-access-f9sb2\") pod \"keystone-bootstrap-j6pdq\" (UID: \"dbe696e8-b9af-4710-a81f-4fb69481cf3b\") " pod="openstack/keystone-bootstrap-j6pdq" Nov 29 07:25:55 crc kubenswrapper[4731]: I1129 07:25:55.492970 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-j6pdq" Nov 29 07:25:55 crc kubenswrapper[4731]: I1129 07:25:55.822049 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38f8e9cf-be31-447d-9e2f-0efad4bc3703" path="/var/lib/kubelet/pods/38f8e9cf-be31-447d-9e2f-0efad4bc3703/volumes" Nov 29 07:25:58 crc kubenswrapper[4731]: I1129 07:25:58.467048 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 29 07:25:58 crc kubenswrapper[4731]: I1129 07:25:58.467856 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 29 07:25:59 crc kubenswrapper[4731]: I1129 07:25:59.795619 4731 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-r66tt" podUID="4716973e-a6ae-4baf-bb88-5436489c5451" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.112:5353: i/o timeout" Nov 29 07:26:03 crc kubenswrapper[4731]: I1129 07:26:03.698297 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 29 07:26:03 crc kubenswrapper[4731]: I1129 07:26:03.710202 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7c4c555987-wx7mp" Nov 29 07:26:03 crc kubenswrapper[4731]: I1129 07:26:03.725862 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-r66tt" Nov 29 07:26:03 crc kubenswrapper[4731]: I1129 07:26:03.730072 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1f57a857-cd62-458f-9a5f-1451ff9d5628-config-data\") pod \"1f57a857-cd62-458f-9a5f-1451ff9d5628\" (UID: \"1f57a857-cd62-458f-9a5f-1451ff9d5628\") " Nov 29 07:26:03 crc kubenswrapper[4731]: I1129 07:26:03.730262 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dabc1a57-987c-452a-bb15-b26368e6cab2-scripts\") pod \"dabc1a57-987c-452a-bb15-b26368e6cab2\" (UID: \"dabc1a57-987c-452a-bb15-b26368e6cab2\") " Nov 29 07:26:03 crc kubenswrapper[4731]: I1129 07:26:03.730346 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rtrhq\" (UniqueName: \"kubernetes.io/projected/1f57a857-cd62-458f-9a5f-1451ff9d5628-kube-api-access-rtrhq\") pod \"1f57a857-cd62-458f-9a5f-1451ff9d5628\" (UID: \"1f57a857-cd62-458f-9a5f-1451ff9d5628\") " Nov 29 07:26:03 crc kubenswrapper[4731]: I1129 07:26:03.730413 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dabc1a57-987c-452a-bb15-b26368e6cab2-public-tls-certs\") pod \"dabc1a57-987c-452a-bb15-b26368e6cab2\" (UID: \"dabc1a57-987c-452a-bb15-b26368e6cab2\") " Nov 29 07:26:03 crc kubenswrapper[4731]: I1129 07:26:03.730479 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dabc1a57-987c-452a-bb15-b26368e6cab2-config-data\") pod \"dabc1a57-987c-452a-bb15-b26368e6cab2\" (UID: \"dabc1a57-987c-452a-bb15-b26368e6cab2\") " Nov 29 07:26:03 crc kubenswrapper[4731]: I1129 07:26:03.730544 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/dabc1a57-987c-452a-bb15-b26368e6cab2-httpd-run\") pod \"dabc1a57-987c-452a-bb15-b26368e6cab2\" (UID: \"dabc1a57-987c-452a-bb15-b26368e6cab2\") " Nov 29 07:26:03 crc kubenswrapper[4731]: I1129 07:26:03.730593 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f57a857-cd62-458f-9a5f-1451ff9d5628-logs\") pod \"1f57a857-cd62-458f-9a5f-1451ff9d5628\" (UID: \"1f57a857-cd62-458f-9a5f-1451ff9d5628\") " Nov 29 07:26:03 crc kubenswrapper[4731]: I1129 07:26:03.730623 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1f57a857-cd62-458f-9a5f-1451ff9d5628-horizon-secret-key\") pod \"1f57a857-cd62-458f-9a5f-1451ff9d5628\" (UID: \"1f57a857-cd62-458f-9a5f-1451ff9d5628\") " Nov 29 07:26:03 crc kubenswrapper[4731]: I1129 07:26:03.730692 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dabc1a57-987c-452a-bb15-b26368e6cab2-combined-ca-bundle\") pod \"dabc1a57-987c-452a-bb15-b26368e6cab2\" (UID: \"dabc1a57-987c-452a-bb15-b26368e6cab2\") " Nov 29 07:26:03 crc kubenswrapper[4731]: I1129 07:26:03.730759 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dabc1a57-987c-452a-bb15-b26368e6cab2-logs\") pod \"dabc1a57-987c-452a-bb15-b26368e6cab2\" (UID: \"dabc1a57-987c-452a-bb15-b26368e6cab2\") " Nov 29 07:26:03 crc kubenswrapper[4731]: I1129 07:26:03.730868 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1f57a857-cd62-458f-9a5f-1451ff9d5628-scripts\") pod \"1f57a857-cd62-458f-9a5f-1451ff9d5628\" (UID: \"1f57a857-cd62-458f-9a5f-1451ff9d5628\") " Nov 29 07:26:03 crc kubenswrapper[4731]: I1129 07:26:03.730914 4731 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-n27fm\" (UniqueName: \"kubernetes.io/projected/dabc1a57-987c-452a-bb15-b26368e6cab2-kube-api-access-n27fm\") pod \"dabc1a57-987c-452a-bb15-b26368e6cab2\" (UID: \"dabc1a57-987c-452a-bb15-b26368e6cab2\") " Nov 29 07:26:03 crc kubenswrapper[4731]: I1129 07:26:03.730909 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f57a857-cd62-458f-9a5f-1451ff9d5628-config-data" (OuterVolumeSpecName: "config-data") pod "1f57a857-cd62-458f-9a5f-1451ff9d5628" (UID: "1f57a857-cd62-458f-9a5f-1451ff9d5628"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:26:03 crc kubenswrapper[4731]: I1129 07:26:03.731088 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"dabc1a57-987c-452a-bb15-b26368e6cab2\" (UID: \"dabc1a57-987c-452a-bb15-b26368e6cab2\") " Nov 29 07:26:03 crc kubenswrapper[4731]: I1129 07:26:03.732253 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dabc1a57-987c-452a-bb15-b26368e6cab2-logs" (OuterVolumeSpecName: "logs") pod "dabc1a57-987c-452a-bb15-b26368e6cab2" (UID: "dabc1a57-987c-452a-bb15-b26368e6cab2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:26:03 crc kubenswrapper[4731]: I1129 07:26:03.732652 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f57a857-cd62-458f-9a5f-1451ff9d5628-scripts" (OuterVolumeSpecName: "scripts") pod "1f57a857-cd62-458f-9a5f-1451ff9d5628" (UID: "1f57a857-cd62-458f-9a5f-1451ff9d5628"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:26:03 crc kubenswrapper[4731]: I1129 07:26:03.733321 4731 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1f57a857-cd62-458f-9a5f-1451ff9d5628-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:03 crc kubenswrapper[4731]: I1129 07:26:03.733340 4731 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dabc1a57-987c-452a-bb15-b26368e6cab2-logs\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:03 crc kubenswrapper[4731]: I1129 07:26:03.733369 4731 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1f57a857-cd62-458f-9a5f-1451ff9d5628-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:03 crc kubenswrapper[4731]: I1129 07:26:03.734932 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f57a857-cd62-458f-9a5f-1451ff9d5628-logs" (OuterVolumeSpecName: "logs") pod "1f57a857-cd62-458f-9a5f-1451ff9d5628" (UID: "1f57a857-cd62-458f-9a5f-1451ff9d5628"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:26:03 crc kubenswrapper[4731]: I1129 07:26:03.734953 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dabc1a57-987c-452a-bb15-b26368e6cab2-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "dabc1a57-987c-452a-bb15-b26368e6cab2" (UID: "dabc1a57-987c-452a-bb15-b26368e6cab2"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:26:03 crc kubenswrapper[4731]: I1129 07:26:03.757046 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "glance") pod "dabc1a57-987c-452a-bb15-b26368e6cab2" (UID: "dabc1a57-987c-452a-bb15-b26368e6cab2"). 
InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 29 07:26:03 crc kubenswrapper[4731]: I1129 07:26:03.757134 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dabc1a57-987c-452a-bb15-b26368e6cab2-scripts" (OuterVolumeSpecName: "scripts") pod "dabc1a57-987c-452a-bb15-b26368e6cab2" (UID: "dabc1a57-987c-452a-bb15-b26368e6cab2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:26:03 crc kubenswrapper[4731]: I1129 07:26:03.757269 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f57a857-cd62-458f-9a5f-1451ff9d5628-kube-api-access-rtrhq" (OuterVolumeSpecName: "kube-api-access-rtrhq") pod "1f57a857-cd62-458f-9a5f-1451ff9d5628" (UID: "1f57a857-cd62-458f-9a5f-1451ff9d5628"). InnerVolumeSpecName "kube-api-access-rtrhq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:26:03 crc kubenswrapper[4731]: I1129 07:26:03.763957 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f57a857-cd62-458f-9a5f-1451ff9d5628-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "1f57a857-cd62-458f-9a5f-1451ff9d5628" (UID: "1f57a857-cd62-458f-9a5f-1451ff9d5628"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:26:03 crc kubenswrapper[4731]: I1129 07:26:03.767887 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dabc1a57-987c-452a-bb15-b26368e6cab2-kube-api-access-n27fm" (OuterVolumeSpecName: "kube-api-access-n27fm") pod "dabc1a57-987c-452a-bb15-b26368e6cab2" (UID: "dabc1a57-987c-452a-bb15-b26368e6cab2"). InnerVolumeSpecName "kube-api-access-n27fm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:26:03 crc kubenswrapper[4731]: I1129 07:26:03.790738 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dabc1a57-987c-452a-bb15-b26368e6cab2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dabc1a57-987c-452a-bb15-b26368e6cab2" (UID: "dabc1a57-987c-452a-bb15-b26368e6cab2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:26:03 crc kubenswrapper[4731]: I1129 07:26:03.803073 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dabc1a57-987c-452a-bb15-b26368e6cab2-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "dabc1a57-987c-452a-bb15-b26368e6cab2" (UID: "dabc1a57-987c-452a-bb15-b26368e6cab2"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:26:03 crc kubenswrapper[4731]: I1129 07:26:03.837443 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dabc1a57-987c-452a-bb15-b26368e6cab2-config-data" (OuterVolumeSpecName: "config-data") pod "dabc1a57-987c-452a-bb15-b26368e6cab2" (UID: "dabc1a57-987c-452a-bb15-b26368e6cab2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:26:03 crc kubenswrapper[4731]: I1129 07:26:03.837710 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4716973e-a6ae-4baf-bb88-5436489c5451-ovsdbserver-nb\") pod \"4716973e-a6ae-4baf-bb88-5436489c5451\" (UID: \"4716973e-a6ae-4baf-bb88-5436489c5451\") " Nov 29 07:26:03 crc kubenswrapper[4731]: I1129 07:26:03.837773 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dabc1a57-987c-452a-bb15-b26368e6cab2-config-data\") pod \"dabc1a57-987c-452a-bb15-b26368e6cab2\" (UID: \"dabc1a57-987c-452a-bb15-b26368e6cab2\") " Nov 29 07:26:03 crc kubenswrapper[4731]: I1129 07:26:03.837855 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4716973e-a6ae-4baf-bb88-5436489c5451-config\") pod \"4716973e-a6ae-4baf-bb88-5436489c5451\" (UID: \"4716973e-a6ae-4baf-bb88-5436489c5451\") " Nov 29 07:26:03 crc kubenswrapper[4731]: W1129 07:26:03.837935 4731 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/dabc1a57-987c-452a-bb15-b26368e6cab2/volumes/kubernetes.io~secret/config-data Nov 29 07:26:03 crc kubenswrapper[4731]: I1129 07:26:03.837952 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dabc1a57-987c-452a-bb15-b26368e6cab2-config-data" (OuterVolumeSpecName: "config-data") pod "dabc1a57-987c-452a-bb15-b26368e6cab2" (UID: "dabc1a57-987c-452a-bb15-b26368e6cab2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:26:03 crc kubenswrapper[4731]: I1129 07:26:03.837942 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4716973e-a6ae-4baf-bb88-5436489c5451-ovsdbserver-sb\") pod \"4716973e-a6ae-4baf-bb88-5436489c5451\" (UID: \"4716973e-a6ae-4baf-bb88-5436489c5451\") " Nov 29 07:26:03 crc kubenswrapper[4731]: I1129 07:26:03.838023 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d9d7n\" (UniqueName: \"kubernetes.io/projected/4716973e-a6ae-4baf-bb88-5436489c5451-kube-api-access-d9d7n\") pod \"4716973e-a6ae-4baf-bb88-5436489c5451\" (UID: \"4716973e-a6ae-4baf-bb88-5436489c5451\") " Nov 29 07:26:03 crc kubenswrapper[4731]: I1129 07:26:03.838153 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4716973e-a6ae-4baf-bb88-5436489c5451-dns-svc\") pod \"4716973e-a6ae-4baf-bb88-5436489c5451\" (UID: \"4716973e-a6ae-4baf-bb88-5436489c5451\") " Nov 29 07:26:03 crc kubenswrapper[4731]: I1129 07:26:03.839073 4731 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dabc1a57-987c-452a-bb15-b26368e6cab2-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:03 crc kubenswrapper[4731]: I1129 07:26:03.839102 4731 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f57a857-cd62-458f-9a5f-1451ff9d5628-logs\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:03 crc kubenswrapper[4731]: I1129 07:26:03.839116 4731 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1f57a857-cd62-458f-9a5f-1451ff9d5628-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:03 crc kubenswrapper[4731]: I1129 07:26:03.839130 4731 reconciler_common.go:293] "Volume 
detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dabc1a57-987c-452a-bb15-b26368e6cab2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:03 crc kubenswrapper[4731]: I1129 07:26:03.839144 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n27fm\" (UniqueName: \"kubernetes.io/projected/dabc1a57-987c-452a-bb15-b26368e6cab2-kube-api-access-n27fm\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:03 crc kubenswrapper[4731]: I1129 07:26:03.839171 4731 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Nov 29 07:26:03 crc kubenswrapper[4731]: I1129 07:26:03.839183 4731 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dabc1a57-987c-452a-bb15-b26368e6cab2-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:03 crc kubenswrapper[4731]: I1129 07:26:03.839196 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rtrhq\" (UniqueName: \"kubernetes.io/projected/1f57a857-cd62-458f-9a5f-1451ff9d5628-kube-api-access-rtrhq\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:03 crc kubenswrapper[4731]: I1129 07:26:03.839208 4731 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dabc1a57-987c-452a-bb15-b26368e6cab2-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:03 crc kubenswrapper[4731]: I1129 07:26:03.839222 4731 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dabc1a57-987c-452a-bb15-b26368e6cab2-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:03 crc kubenswrapper[4731]: I1129 07:26:03.847165 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4716973e-a6ae-4baf-bb88-5436489c5451-kube-api-access-d9d7n" 
(OuterVolumeSpecName: "kube-api-access-d9d7n") pod "4716973e-a6ae-4baf-bb88-5436489c5451" (UID: "4716973e-a6ae-4baf-bb88-5436489c5451"). InnerVolumeSpecName "kube-api-access-d9d7n". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:26:03 crc kubenswrapper[4731]: I1129 07:26:03.865327 4731 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Nov 29 07:26:03 crc kubenswrapper[4731]: I1129 07:26:03.888531 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4716973e-a6ae-4baf-bb88-5436489c5451-config" (OuterVolumeSpecName: "config") pod "4716973e-a6ae-4baf-bb88-5436489c5451" (UID: "4716973e-a6ae-4baf-bb88-5436489c5451"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:26:03 crc kubenswrapper[4731]: I1129 07:26:03.891612 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4716973e-a6ae-4baf-bb88-5436489c5451-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4716973e-a6ae-4baf-bb88-5436489c5451" (UID: "4716973e-a6ae-4baf-bb88-5436489c5451"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:26:03 crc kubenswrapper[4731]: I1129 07:26:03.892864 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4716973e-a6ae-4baf-bb88-5436489c5451-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4716973e-a6ae-4baf-bb88-5436489c5451" (UID: "4716973e-a6ae-4baf-bb88-5436489c5451"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:26:03 crc kubenswrapper[4731]: I1129 07:26:03.896224 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4716973e-a6ae-4baf-bb88-5436489c5451-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4716973e-a6ae-4baf-bb88-5436489c5451" (UID: "4716973e-a6ae-4baf-bb88-5436489c5451"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:26:03 crc kubenswrapper[4731]: I1129 07:26:03.940926 4731 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4716973e-a6ae-4baf-bb88-5436489c5451-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:03 crc kubenswrapper[4731]: I1129 07:26:03.941243 4731 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4716973e-a6ae-4baf-bb88-5436489c5451-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:03 crc kubenswrapper[4731]: I1129 07:26:03.941313 4731 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4716973e-a6ae-4baf-bb88-5436489c5451-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:03 crc kubenswrapper[4731]: I1129 07:26:03.941369 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d9d7n\" (UniqueName: \"kubernetes.io/projected/4716973e-a6ae-4baf-bb88-5436489c5451-kube-api-access-d9d7n\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:03 crc kubenswrapper[4731]: I1129 07:26:03.941531 4731 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:03 crc kubenswrapper[4731]: I1129 07:26:03.941616 4731 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/4716973e-a6ae-4baf-bb88-5436489c5451-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:04 crc kubenswrapper[4731]: I1129 07:26:04.145141 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-r66tt" event={"ID":"4716973e-a6ae-4baf-bb88-5436489c5451","Type":"ContainerDied","Data":"6a150fe7a3025d826430f403b714dd71c126ea975556bc6fbc4a56fe1a902aef"} Nov 29 07:26:04 crc kubenswrapper[4731]: I1129 07:26:04.145233 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-r66tt" Nov 29 07:26:04 crc kubenswrapper[4731]: I1129 07:26:04.148552 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7c4c555987-wx7mp" Nov 29 07:26:04 crc kubenswrapper[4731]: I1129 07:26:04.149298 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7c4c555987-wx7mp" event={"ID":"1f57a857-cd62-458f-9a5f-1451ff9d5628","Type":"ContainerDied","Data":"beb72452e189a402f4e8f6a636a19eab75bf10eebfb0bd0a0e16dfa809640071"} Nov 29 07:26:04 crc kubenswrapper[4731]: I1129 07:26:04.168903 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"dabc1a57-987c-452a-bb15-b26368e6cab2","Type":"ContainerDied","Data":"abcacabf27ed11d77ef7b6c7517bd2b3c96c6b73191775b25aea804ebf93fbb9"} Nov 29 07:26:04 crc kubenswrapper[4731]: I1129 07:26:04.169030 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 29 07:26:04 crc kubenswrapper[4731]: I1129 07:26:04.232864 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7c4c555987-wx7mp"] Nov 29 07:26:04 crc kubenswrapper[4731]: I1129 07:26:04.253710 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7c4c555987-wx7mp"] Nov 29 07:26:04 crc kubenswrapper[4731]: I1129 07:26:04.268826 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 29 07:26:04 crc kubenswrapper[4731]: I1129 07:26:04.278983 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 29 07:26:04 crc kubenswrapper[4731]: I1129 07:26:04.289551 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-r66tt"] Nov 29 07:26:04 crc kubenswrapper[4731]: I1129 07:26:04.298033 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-r66tt"] Nov 29 07:26:04 crc kubenswrapper[4731]: I1129 07:26:04.305332 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 29 07:26:04 crc kubenswrapper[4731]: E1129 07:26:04.305833 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dabc1a57-987c-452a-bb15-b26368e6cab2" containerName="glance-httpd" Nov 29 07:26:04 crc kubenswrapper[4731]: I1129 07:26:04.305855 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="dabc1a57-987c-452a-bb15-b26368e6cab2" containerName="glance-httpd" Nov 29 07:26:04 crc kubenswrapper[4731]: E1129 07:26:04.305884 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4716973e-a6ae-4baf-bb88-5436489c5451" containerName="dnsmasq-dns" Nov 29 07:26:04 crc kubenswrapper[4731]: I1129 07:26:04.305892 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="4716973e-a6ae-4baf-bb88-5436489c5451" containerName="dnsmasq-dns" Nov 29 
07:26:04 crc kubenswrapper[4731]: E1129 07:26:04.305903 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4716973e-a6ae-4baf-bb88-5436489c5451" containerName="init" Nov 29 07:26:04 crc kubenswrapper[4731]: I1129 07:26:04.305911 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="4716973e-a6ae-4baf-bb88-5436489c5451" containerName="init" Nov 29 07:26:04 crc kubenswrapper[4731]: E1129 07:26:04.305944 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dabc1a57-987c-452a-bb15-b26368e6cab2" containerName="glance-log" Nov 29 07:26:04 crc kubenswrapper[4731]: I1129 07:26:04.305951 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="dabc1a57-987c-452a-bb15-b26368e6cab2" containerName="glance-log" Nov 29 07:26:04 crc kubenswrapper[4731]: I1129 07:26:04.306855 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="dabc1a57-987c-452a-bb15-b26368e6cab2" containerName="glance-httpd" Nov 29 07:26:04 crc kubenswrapper[4731]: I1129 07:26:04.306880 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="dabc1a57-987c-452a-bb15-b26368e6cab2" containerName="glance-log" Nov 29 07:26:04 crc kubenswrapper[4731]: I1129 07:26:04.306891 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="4716973e-a6ae-4baf-bb88-5436489c5451" containerName="dnsmasq-dns" Nov 29 07:26:04 crc kubenswrapper[4731]: I1129 07:26:04.307953 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 29 07:26:04 crc kubenswrapper[4731]: I1129 07:26:04.312243 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 29 07:26:04 crc kubenswrapper[4731]: I1129 07:26:04.312393 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 29 07:26:04 crc kubenswrapper[4731]: I1129 07:26:04.313803 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 29 07:26:04 crc kubenswrapper[4731]: I1129 07:26:04.348405 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f6b502e2-80f2-44f7-9665-3666c7a7c56b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f6b502e2-80f2-44f7-9665-3666c7a7c56b\") " pod="openstack/glance-default-external-api-0" Nov 29 07:26:04 crc kubenswrapper[4731]: I1129 07:26:04.348487 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6b502e2-80f2-44f7-9665-3666c7a7c56b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f6b502e2-80f2-44f7-9665-3666c7a7c56b\") " pod="openstack/glance-default-external-api-0" Nov 29 07:26:04 crc kubenswrapper[4731]: I1129 07:26:04.348642 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f6b502e2-80f2-44f7-9665-3666c7a7c56b-scripts\") pod \"glance-default-external-api-0\" (UID: \"f6b502e2-80f2-44f7-9665-3666c7a7c56b\") " pod="openstack/glance-default-external-api-0" Nov 29 07:26:04 crc kubenswrapper[4731]: I1129 07:26:04.348708 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/f6b502e2-80f2-44f7-9665-3666c7a7c56b-logs\") pod \"glance-default-external-api-0\" (UID: \"f6b502e2-80f2-44f7-9665-3666c7a7c56b\") " pod="openstack/glance-default-external-api-0" Nov 29 07:26:04 crc kubenswrapper[4731]: I1129 07:26:04.348732 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"f6b502e2-80f2-44f7-9665-3666c7a7c56b\") " pod="openstack/glance-default-external-api-0" Nov 29 07:26:04 crc kubenswrapper[4731]: I1129 07:26:04.348774 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6b502e2-80f2-44f7-9665-3666c7a7c56b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f6b502e2-80f2-44f7-9665-3666c7a7c56b\") " pod="openstack/glance-default-external-api-0" Nov 29 07:26:04 crc kubenswrapper[4731]: I1129 07:26:04.348794 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6b502e2-80f2-44f7-9665-3666c7a7c56b-config-data\") pod \"glance-default-external-api-0\" (UID: \"f6b502e2-80f2-44f7-9665-3666c7a7c56b\") " pod="openstack/glance-default-external-api-0" Nov 29 07:26:04 crc kubenswrapper[4731]: I1129 07:26:04.348815 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vp82v\" (UniqueName: \"kubernetes.io/projected/f6b502e2-80f2-44f7-9665-3666c7a7c56b-kube-api-access-vp82v\") pod \"glance-default-external-api-0\" (UID: \"f6b502e2-80f2-44f7-9665-3666c7a7c56b\") " pod="openstack/glance-default-external-api-0" Nov 29 07:26:04 crc kubenswrapper[4731]: I1129 07:26:04.451114 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/f6b502e2-80f2-44f7-9665-3666c7a7c56b-logs\") pod \"glance-default-external-api-0\" (UID: \"f6b502e2-80f2-44f7-9665-3666c7a7c56b\") " pod="openstack/glance-default-external-api-0" Nov 29 07:26:04 crc kubenswrapper[4731]: I1129 07:26:04.451185 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"f6b502e2-80f2-44f7-9665-3666c7a7c56b\") " pod="openstack/glance-default-external-api-0" Nov 29 07:26:04 crc kubenswrapper[4731]: I1129 07:26:04.451232 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6b502e2-80f2-44f7-9665-3666c7a7c56b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f6b502e2-80f2-44f7-9665-3666c7a7c56b\") " pod="openstack/glance-default-external-api-0" Nov 29 07:26:04 crc kubenswrapper[4731]: I1129 07:26:04.451262 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6b502e2-80f2-44f7-9665-3666c7a7c56b-config-data\") pod \"glance-default-external-api-0\" (UID: \"f6b502e2-80f2-44f7-9665-3666c7a7c56b\") " pod="openstack/glance-default-external-api-0" Nov 29 07:26:04 crc kubenswrapper[4731]: I1129 07:26:04.451291 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vp82v\" (UniqueName: \"kubernetes.io/projected/f6b502e2-80f2-44f7-9665-3666c7a7c56b-kube-api-access-vp82v\") pod \"glance-default-external-api-0\" (UID: \"f6b502e2-80f2-44f7-9665-3666c7a7c56b\") " pod="openstack/glance-default-external-api-0" Nov 29 07:26:04 crc kubenswrapper[4731]: I1129 07:26:04.451352 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/f6b502e2-80f2-44f7-9665-3666c7a7c56b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f6b502e2-80f2-44f7-9665-3666c7a7c56b\") " pod="openstack/glance-default-external-api-0" Nov 29 07:26:04 crc kubenswrapper[4731]: I1129 07:26:04.451417 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6b502e2-80f2-44f7-9665-3666c7a7c56b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f6b502e2-80f2-44f7-9665-3666c7a7c56b\") " pod="openstack/glance-default-external-api-0" Nov 29 07:26:04 crc kubenswrapper[4731]: I1129 07:26:04.451481 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f6b502e2-80f2-44f7-9665-3666c7a7c56b-scripts\") pod \"glance-default-external-api-0\" (UID: \"f6b502e2-80f2-44f7-9665-3666c7a7c56b\") " pod="openstack/glance-default-external-api-0" Nov 29 07:26:04 crc kubenswrapper[4731]: I1129 07:26:04.452355 4731 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"f6b502e2-80f2-44f7-9665-3666c7a7c56b\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-external-api-0" Nov 29 07:26:04 crc kubenswrapper[4731]: I1129 07:26:04.453397 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f6b502e2-80f2-44f7-9665-3666c7a7c56b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f6b502e2-80f2-44f7-9665-3666c7a7c56b\") " pod="openstack/glance-default-external-api-0" Nov 29 07:26:04 crc kubenswrapper[4731]: I1129 07:26:04.453544 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f6b502e2-80f2-44f7-9665-3666c7a7c56b-logs\") pod 
\"glance-default-external-api-0\" (UID: \"f6b502e2-80f2-44f7-9665-3666c7a7c56b\") " pod="openstack/glance-default-external-api-0" Nov 29 07:26:04 crc kubenswrapper[4731]: I1129 07:26:04.457630 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6b502e2-80f2-44f7-9665-3666c7a7c56b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f6b502e2-80f2-44f7-9665-3666c7a7c56b\") " pod="openstack/glance-default-external-api-0" Nov 29 07:26:04 crc kubenswrapper[4731]: I1129 07:26:04.459414 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6b502e2-80f2-44f7-9665-3666c7a7c56b-config-data\") pod \"glance-default-external-api-0\" (UID: \"f6b502e2-80f2-44f7-9665-3666c7a7c56b\") " pod="openstack/glance-default-external-api-0" Nov 29 07:26:04 crc kubenswrapper[4731]: I1129 07:26:04.459851 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f6b502e2-80f2-44f7-9665-3666c7a7c56b-scripts\") pod \"glance-default-external-api-0\" (UID: \"f6b502e2-80f2-44f7-9665-3666c7a7c56b\") " pod="openstack/glance-default-external-api-0" Nov 29 07:26:04 crc kubenswrapper[4731]: I1129 07:26:04.461839 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6b502e2-80f2-44f7-9665-3666c7a7c56b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f6b502e2-80f2-44f7-9665-3666c7a7c56b\") " pod="openstack/glance-default-external-api-0" Nov 29 07:26:04 crc kubenswrapper[4731]: I1129 07:26:04.481078 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vp82v\" (UniqueName: \"kubernetes.io/projected/f6b502e2-80f2-44f7-9665-3666c7a7c56b-kube-api-access-vp82v\") pod \"glance-default-external-api-0\" (UID: \"f6b502e2-80f2-44f7-9665-3666c7a7c56b\") " 
pod="openstack/glance-default-external-api-0" Nov 29 07:26:04 crc kubenswrapper[4731]: I1129 07:26:04.487991 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"f6b502e2-80f2-44f7-9665-3666c7a7c56b\") " pod="openstack/glance-default-external-api-0" Nov 29 07:26:04 crc kubenswrapper[4731]: I1129 07:26:04.635741 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 29 07:26:04 crc kubenswrapper[4731]: I1129 07:26:04.796231 4731 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-r66tt" podUID="4716973e-a6ae-4baf-bb88-5436489c5451" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.112:5353: i/o timeout" Nov 29 07:26:04 crc kubenswrapper[4731]: E1129 07:26:04.939185 4731 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Nov 29 07:26:04 crc kubenswrapper[4731]: E1129 07:26:04.939357 4731 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n647h5dbhd9h656hdbh57bh7dh87h59ch68dh5bbh9fhb7hfbh647h6dh554h5c8hfbh65bhd7hc9h655h695h58fh55bh5b6hc4hbdh7dhc4h56q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qslbx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-8b8b69b5-p8jtq_openstack(4169afaa-8657-4e8c-bac2-fd640f9ed116): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 07:26:04 crc kubenswrapper[4731]: E1129 07:26:04.941471 
4731 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-8b8b69b5-p8jtq" podUID="4169afaa-8657-4e8c-bac2-fd640f9ed116" Nov 29 07:26:05 crc kubenswrapper[4731]: I1129 07:26:05.450319 4731 scope.go:117] "RemoveContainer" containerID="fe85d4e9a4ac80e776b6b3382275735ff1a8e2a5ffe928e4e6d968dc93108793" Nov 29 07:26:05 crc kubenswrapper[4731]: E1129 07:26:05.454633 4731 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Nov 29 07:26:05 crc kubenswrapper[4731]: E1129 07:26:05.454771 4731 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qh9fs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin
:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-zcx9z_openstack(9af027cc-cbd4-4f3a-ad25-2ef5b126d590): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 07:26:05 crc kubenswrapper[4731]: E1129 07:26:05.456156 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-zcx9z" podUID="9af027cc-cbd4-4f3a-ad25-2ef5b126d590" Nov 29 07:26:05 crc kubenswrapper[4731]: I1129 07:26:05.648831 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-8b8b69b5-p8jtq" Nov 29 07:26:05 crc kubenswrapper[4731]: I1129 07:26:05.668633 4731 scope.go:117] "RemoveContainer" containerID="28c697f4aec0a9f9c8e62a1117fd7812fd31deb9dbfa342f47843f946a9be410" Nov 29 07:26:05 crc kubenswrapper[4731]: I1129 07:26:05.679704 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4169afaa-8657-4e8c-bac2-fd640f9ed116-scripts\") pod \"4169afaa-8657-4e8c-bac2-fd640f9ed116\" (UID: \"4169afaa-8657-4e8c-bac2-fd640f9ed116\") " Nov 29 07:26:05 crc kubenswrapper[4731]: I1129 07:26:05.679827 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4169afaa-8657-4e8c-bac2-fd640f9ed116-config-data\") pod \"4169afaa-8657-4e8c-bac2-fd640f9ed116\" (UID: \"4169afaa-8657-4e8c-bac2-fd640f9ed116\") " Nov 29 07:26:05 crc kubenswrapper[4731]: I1129 07:26:05.679870 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/4169afaa-8657-4e8c-bac2-fd640f9ed116-logs\") pod \"4169afaa-8657-4e8c-bac2-fd640f9ed116\" (UID: \"4169afaa-8657-4e8c-bac2-fd640f9ed116\") " Nov 29 07:26:05 crc kubenswrapper[4731]: I1129 07:26:05.679954 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qslbx\" (UniqueName: \"kubernetes.io/projected/4169afaa-8657-4e8c-bac2-fd640f9ed116-kube-api-access-qslbx\") pod \"4169afaa-8657-4e8c-bac2-fd640f9ed116\" (UID: \"4169afaa-8657-4e8c-bac2-fd640f9ed116\") " Nov 29 07:26:05 crc kubenswrapper[4731]: I1129 07:26:05.680170 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4169afaa-8657-4e8c-bac2-fd640f9ed116-horizon-secret-key\") pod \"4169afaa-8657-4e8c-bac2-fd640f9ed116\" (UID: \"4169afaa-8657-4e8c-bac2-fd640f9ed116\") " Nov 29 07:26:05 crc kubenswrapper[4731]: I1129 07:26:05.680981 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4169afaa-8657-4e8c-bac2-fd640f9ed116-logs" (OuterVolumeSpecName: "logs") pod "4169afaa-8657-4e8c-bac2-fd640f9ed116" (UID: "4169afaa-8657-4e8c-bac2-fd640f9ed116"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:26:05 crc kubenswrapper[4731]: I1129 07:26:05.681588 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4169afaa-8657-4e8c-bac2-fd640f9ed116-scripts" (OuterVolumeSpecName: "scripts") pod "4169afaa-8657-4e8c-bac2-fd640f9ed116" (UID: "4169afaa-8657-4e8c-bac2-fd640f9ed116"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:26:05 crc kubenswrapper[4731]: I1129 07:26:05.681864 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4169afaa-8657-4e8c-bac2-fd640f9ed116-config-data" (OuterVolumeSpecName: "config-data") pod "4169afaa-8657-4e8c-bac2-fd640f9ed116" (UID: "4169afaa-8657-4e8c-bac2-fd640f9ed116"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:26:05 crc kubenswrapper[4731]: I1129 07:26:05.682502 4731 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4169afaa-8657-4e8c-bac2-fd640f9ed116-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:05 crc kubenswrapper[4731]: I1129 07:26:05.682534 4731 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4169afaa-8657-4e8c-bac2-fd640f9ed116-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:05 crc kubenswrapper[4731]: I1129 07:26:05.682546 4731 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4169afaa-8657-4e8c-bac2-fd640f9ed116-logs\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:05 crc kubenswrapper[4731]: I1129 07:26:05.691499 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4169afaa-8657-4e8c-bac2-fd640f9ed116-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "4169afaa-8657-4e8c-bac2-fd640f9ed116" (UID: "4169afaa-8657-4e8c-bac2-fd640f9ed116"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:26:05 crc kubenswrapper[4731]: I1129 07:26:05.698997 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4169afaa-8657-4e8c-bac2-fd640f9ed116-kube-api-access-qslbx" (OuterVolumeSpecName: "kube-api-access-qslbx") pod "4169afaa-8657-4e8c-bac2-fd640f9ed116" (UID: "4169afaa-8657-4e8c-bac2-fd640f9ed116"). InnerVolumeSpecName "kube-api-access-qslbx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:26:05 crc kubenswrapper[4731]: I1129 07:26:05.783165 4731 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4169afaa-8657-4e8c-bac2-fd640f9ed116-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:05 crc kubenswrapper[4731]: I1129 07:26:05.783207 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qslbx\" (UniqueName: \"kubernetes.io/projected/4169afaa-8657-4e8c-bac2-fd640f9ed116-kube-api-access-qslbx\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:05 crc kubenswrapper[4731]: I1129 07:26:05.785151 4731 scope.go:117] "RemoveContainer" containerID="56c28385753d1299ecf570bbcc74b81d67f913e69533a9aa253cf45f3aed2895" Nov 29 07:26:05 crc kubenswrapper[4731]: I1129 07:26:05.818260 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f57a857-cd62-458f-9a5f-1451ff9d5628" path="/var/lib/kubelet/pods/1f57a857-cd62-458f-9a5f-1451ff9d5628/volumes" Nov 29 07:26:05 crc kubenswrapper[4731]: I1129 07:26:05.818672 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4716973e-a6ae-4baf-bb88-5436489c5451" path="/var/lib/kubelet/pods/4716973e-a6ae-4baf-bb88-5436489c5451/volumes" Nov 29 07:26:05 crc kubenswrapper[4731]: I1129 07:26:05.819297 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dabc1a57-987c-452a-bb15-b26368e6cab2" path="/var/lib/kubelet/pods/dabc1a57-987c-452a-bb15-b26368e6cab2/volumes" 
Nov 29 07:26:05 crc kubenswrapper[4731]: I1129 07:26:05.823148 4731 scope.go:117] "RemoveContainer" containerID="c1b26afdb352e68a7b6dc66491d18c720326fa8eddfd67339f39e26398994e79" Nov 29 07:26:05 crc kubenswrapper[4731]: I1129 07:26:05.897786 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-84cd78f644-7wncn"] Nov 29 07:26:05 crc kubenswrapper[4731]: I1129 07:26:05.901982 4731 scope.go:117] "RemoveContainer" containerID="e9e2f9fd3156819c49c80a4f68ae47e4624468f3d69dd7c55e38812ad1cdb4a6" Nov 29 07:26:05 crc kubenswrapper[4731]: W1129 07:26:05.910979 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbf7f6cfb_9c72_4be9_9177_cd14712e1c1e.slice/crio-6d0154573933657cb4ec63ae6bf40de2bdbbf019f897dba5c1fbf5b23f956123 WatchSource:0}: Error finding container 6d0154573933657cb4ec63ae6bf40de2bdbbf019f897dba5c1fbf5b23f956123: Status 404 returned error can't find the container with id 6d0154573933657cb4ec63ae6bf40de2bdbbf019f897dba5c1fbf5b23f956123 Nov 29 07:26:06 crc kubenswrapper[4731]: I1129 07:26:06.000199 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5fcdbcfb48-gmbcm"] Nov 29 07:26:06 crc kubenswrapper[4731]: W1129 07:26:06.005747 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3afcf821_ab23_4e13_96e7_2b178314bece.slice/crio-7556de3485ae2e0eab94ddac8bc3156bcbdc504cb0c3e755ca539d9562e6877d WatchSource:0}: Error finding container 7556de3485ae2e0eab94ddac8bc3156bcbdc504cb0c3e755ca539d9562e6877d: Status 404 returned error can't find the container with id 7556de3485ae2e0eab94ddac8bc3156bcbdc504cb0c3e755ca539d9562e6877d Nov 29 07:26:06 crc kubenswrapper[4731]: I1129 07:26:06.142210 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-j6pdq"] Nov 29 07:26:06 crc kubenswrapper[4731]: W1129 07:26:06.151905 4731 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddbe696e8_b9af_4710_a81f_4fb69481cf3b.slice/crio-55757c4c87a849fd659eb41f0ee02ac269c7b3699a9e59efff883f645caa87d8 WatchSource:0}: Error finding container 55757c4c87a849fd659eb41f0ee02ac269c7b3699a9e59efff883f645caa87d8: Status 404 returned error can't find the container with id 55757c4c87a849fd659eb41f0ee02ac269c7b3699a9e59efff883f645caa87d8 Nov 29 07:26:06 crc kubenswrapper[4731]: I1129 07:26:06.161982 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 29 07:26:06 crc kubenswrapper[4731]: I1129 07:26:06.190225 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-j6pdq" event={"ID":"dbe696e8-b9af-4710-a81f-4fb69481cf3b","Type":"ContainerStarted","Data":"55757c4c87a849fd659eb41f0ee02ac269c7b3699a9e59efff883f645caa87d8"} Nov 29 07:26:06 crc kubenswrapper[4731]: I1129 07:26:06.192776 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5fcdbcfb48-gmbcm" event={"ID":"3afcf821-ab23-4e13-96e7-2b178314bece","Type":"ContainerStarted","Data":"7556de3485ae2e0eab94ddac8bc3156bcbdc504cb0c3e755ca539d9562e6877d"} Nov 29 07:26:06 crc kubenswrapper[4731]: I1129 07:26:06.197896 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"93f84d51-daf8-4c30-ba2c-e5d8aff3432c","Type":"ContainerStarted","Data":"80484dabef96a2b4305c712d0f5c21f7e5f78598851160f53d0ecdd920e12b6c"} Nov 29 07:26:06 crc kubenswrapper[4731]: I1129 07:26:06.199908 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-8b8b69b5-p8jtq" event={"ID":"4169afaa-8657-4e8c-bac2-fd640f9ed116","Type":"ContainerDied","Data":"426694ab19ceecd7dec38bef157963596ad56076fb7a3d53b274070cbb4b1e97"} Nov 29 07:26:06 crc kubenswrapper[4731]: I1129 07:26:06.199977 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-8b8b69b5-p8jtq" Nov 29 07:26:06 crc kubenswrapper[4731]: I1129 07:26:06.207764 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-84cd78f644-7wncn" event={"ID":"bf7f6cfb-9c72-4be9-9177-cd14712e1c1e","Type":"ContainerStarted","Data":"6d0154573933657cb4ec63ae6bf40de2bdbbf019f897dba5c1fbf5b23f956123"} Nov 29 07:26:06 crc kubenswrapper[4731]: E1129 07:26:06.211154 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-zcx9z" podUID="9af027cc-cbd4-4f3a-ad25-2ef5b126d590" Nov 29 07:26:06 crc kubenswrapper[4731]: I1129 07:26:06.288429 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-8b8b69b5-p8jtq"] Nov 29 07:26:06 crc kubenswrapper[4731]: I1129 07:26:06.299113 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-8b8b69b5-p8jtq"] Nov 29 07:26:06 crc kubenswrapper[4731]: I1129 07:26:06.348630 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 29 07:26:06 crc kubenswrapper[4731]: W1129 07:26:06.353762 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf6b502e2_80f2_44f7_9665_3666c7a7c56b.slice/crio-55deb3523b881efaed4f6f540ac00ab8d143caeec9bc2f252d0e9cfc668a0781 WatchSource:0}: Error finding container 55deb3523b881efaed4f6f540ac00ab8d143caeec9bc2f252d0e9cfc668a0781: Status 404 returned error can't find the container with id 55deb3523b881efaed4f6f540ac00ab8d143caeec9bc2f252d0e9cfc668a0781 Nov 29 07:26:07 crc kubenswrapper[4731]: I1129 07:26:07.125799 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 29 07:26:07 crc kubenswrapper[4731]: W1129 
07:26:07.130430 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd4a309b6_ff09_434a_8d65_9dd888a25dab.slice/crio-d4ccf3ba6d17400c32f7b35274f26432667afe2f072c8dfedaf97cae696a249d WatchSource:0}: Error finding container d4ccf3ba6d17400c32f7b35274f26432667afe2f072c8dfedaf97cae696a249d: Status 404 returned error can't find the container with id d4ccf3ba6d17400c32f7b35274f26432667afe2f072c8dfedaf97cae696a249d Nov 29 07:26:07 crc kubenswrapper[4731]: I1129 07:26:07.240455 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f6b502e2-80f2-44f7-9665-3666c7a7c56b","Type":"ContainerStarted","Data":"0cd29542eb0cbf38d9f18ea343561f30a931247f90fafa7d3f804d5b6a348413"} Nov 29 07:26:07 crc kubenswrapper[4731]: I1129 07:26:07.240545 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f6b502e2-80f2-44f7-9665-3666c7a7c56b","Type":"ContainerStarted","Data":"55deb3523b881efaed4f6f540ac00ab8d143caeec9bc2f252d0e9cfc668a0781"} Nov 29 07:26:07 crc kubenswrapper[4731]: I1129 07:26:07.248380 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-84cd78f644-7wncn" event={"ID":"bf7f6cfb-9c72-4be9-9177-cd14712e1c1e","Type":"ContainerStarted","Data":"233af86133f61c225ab9848a8308c125fc186329b7b7974a653e06432e81629a"} Nov 29 07:26:07 crc kubenswrapper[4731]: I1129 07:26:07.248969 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-84cd78f644-7wncn" event={"ID":"bf7f6cfb-9c72-4be9-9177-cd14712e1c1e","Type":"ContainerStarted","Data":"27fa026eb4be33e0970601908f2bd67b51eec9bb4bd79b5ad9e662b251422727"} Nov 29 07:26:07 crc kubenswrapper[4731]: I1129 07:26:07.266303 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-j6pdq" 
event={"ID":"dbe696e8-b9af-4710-a81f-4fb69481cf3b","Type":"ContainerStarted","Data":"012d7698d223529fd48395017381af41ba5acdf3fc9fe83a15e328727eaaafff"} Nov 29 07:26:07 crc kubenswrapper[4731]: I1129 07:26:07.272551 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d4a309b6-ff09-434a-8d65-9dd888a25dab","Type":"ContainerStarted","Data":"d4ccf3ba6d17400c32f7b35274f26432667afe2f072c8dfedaf97cae696a249d"} Nov 29 07:26:07 crc kubenswrapper[4731]: I1129 07:26:07.278141 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-84cd78f644-7wncn" podStartSLOduration=31.772586473 podStartE2EDuration="32.278104927s" podCreationTimestamp="2025-11-29 07:25:35 +0000 UTC" firstStartedPulling="2025-11-29 07:26:05.920657291 +0000 UTC m=+1204.811018394" lastFinishedPulling="2025-11-29 07:26:06.426175745 +0000 UTC m=+1205.316536848" observedRunningTime="2025-11-29 07:26:07.272968887 +0000 UTC m=+1206.163329990" watchObservedRunningTime="2025-11-29 07:26:07.278104927 +0000 UTC m=+1206.168466030" Nov 29 07:26:07 crc kubenswrapper[4731]: I1129 07:26:07.278390 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5fcdbcfb48-gmbcm" event={"ID":"3afcf821-ab23-4e13-96e7-2b178314bece","Type":"ContainerStarted","Data":"327da4228bd5b14cb14856a543b48c97df82c4e777058551b988c9c3ec7315dc"} Nov 29 07:26:07 crc kubenswrapper[4731]: I1129 07:26:07.329897 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-j6pdq" podStartSLOduration=12.329870799 podStartE2EDuration="12.329870799s" podCreationTimestamp="2025-11-29 07:25:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:26:07.307749763 +0000 UTC m=+1206.198110866" watchObservedRunningTime="2025-11-29 07:26:07.329870799 +0000 UTC m=+1206.220231902" Nov 29 07:26:07 crc 
kubenswrapper[4731]: I1129 07:26:07.352925 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5fcdbcfb48-gmbcm" podStartSLOduration=31.82144778 podStartE2EDuration="32.352899912s" podCreationTimestamp="2025-11-29 07:25:35 +0000 UTC" firstStartedPulling="2025-11-29 07:26:06.007349193 +0000 UTC m=+1204.897710296" lastFinishedPulling="2025-11-29 07:26:06.538801305 +0000 UTC m=+1205.429162428" observedRunningTime="2025-11-29 07:26:07.330356553 +0000 UTC m=+1206.220717656" watchObservedRunningTime="2025-11-29 07:26:07.352899912 +0000 UTC m=+1206.243261015" Nov 29 07:26:07 crc kubenswrapper[4731]: I1129 07:26:07.832495 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4169afaa-8657-4e8c-bac2-fd640f9ed116" path="/var/lib/kubelet/pods/4169afaa-8657-4e8c-bac2-fd640f9ed116/volumes" Nov 29 07:26:08 crc kubenswrapper[4731]: I1129 07:26:08.290471 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d4a309b6-ff09-434a-8d65-9dd888a25dab","Type":"ContainerStarted","Data":"e18b392c7a703626b4da6b904a8ebfbf34639349dcf46fbb7f5eed20217b8fc0"} Nov 29 07:26:08 crc kubenswrapper[4731]: I1129 07:26:08.293356 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5fcdbcfb48-gmbcm" event={"ID":"3afcf821-ab23-4e13-96e7-2b178314bece","Type":"ContainerStarted","Data":"19a198cc865eaaed82595d7e971bedf9064858d78b0fb328dbeb921d0defff04"} Nov 29 07:26:08 crc kubenswrapper[4731]: I1129 07:26:08.297673 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f6b502e2-80f2-44f7-9665-3666c7a7c56b","Type":"ContainerStarted","Data":"00540843b2a77632ad69629391425b932bbb976084e2c1e17bffe6067d5fff6b"} Nov 29 07:26:08 crc kubenswrapper[4731]: I1129 07:26:08.331785 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" 
podStartSLOduration=4.33176687 podStartE2EDuration="4.33176687s" podCreationTimestamp="2025-11-29 07:26:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:26:08.323946182 +0000 UTC m=+1207.214307285" watchObservedRunningTime="2025-11-29 07:26:08.33176687 +0000 UTC m=+1207.222127973" Nov 29 07:26:10 crc kubenswrapper[4731]: I1129 07:26:10.316790 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"93f84d51-daf8-4c30-ba2c-e5d8aff3432c","Type":"ContainerStarted","Data":"1eeb175d06560dac595e603e5a440b5bc074e50a82fcd13e045b0876b3180be8"} Nov 29 07:26:10 crc kubenswrapper[4731]: I1129 07:26:10.318433 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-fbk9s" event={"ID":"a4589d89-a761-4510-bd4c-55a6a3e620c4","Type":"ContainerStarted","Data":"ba4cba12c8c3bee5b3db297483a31412626bf72e41d5950966b7bcad6321e931"} Nov 29 07:26:10 crc kubenswrapper[4731]: I1129 07:26:10.320641 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d4a309b6-ff09-434a-8d65-9dd888a25dab","Type":"ContainerStarted","Data":"f28c3b1ecdc62f22eb2d42b0bcea85656e456557d607ef2880b499bae1d325ee"} Nov 29 07:26:11 crc kubenswrapper[4731]: I1129 07:26:11.354438 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-fbk9s" podStartSLOduration=5.614345241 podStartE2EDuration="44.354414572s" podCreationTimestamp="2025-11-29 07:25:27 +0000 UTC" firstStartedPulling="2025-11-29 07:25:29.216197423 +0000 UTC m=+1168.106558516" lastFinishedPulling="2025-11-29 07:26:07.956266744 +0000 UTC m=+1206.846627847" observedRunningTime="2025-11-29 07:26:11.352094364 +0000 UTC m=+1210.242455517" watchObservedRunningTime="2025-11-29 07:26:11.354414572 +0000 UTC m=+1210.244775695" Nov 29 07:26:12 crc kubenswrapper[4731]: I1129 07:26:12.384023 4731 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=19.384002862 podStartE2EDuration="19.384002862s" podCreationTimestamp="2025-11-29 07:25:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:26:12.379453539 +0000 UTC m=+1211.269814642" watchObservedRunningTime="2025-11-29 07:26:12.384002862 +0000 UTC m=+1211.274363965" Nov 29 07:26:13 crc kubenswrapper[4731]: I1129 07:26:13.356284 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-x6bxr" event={"ID":"13bcd648-c6e2-4b6e-a660-da2f47f09a06","Type":"ContainerStarted","Data":"e1fd7555949e5a475b2a562e40c9e94428dd257ddf922b4101954f25369688f3"} Nov 29 07:26:13 crc kubenswrapper[4731]: I1129 07:26:13.385386 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-x6bxr" podStartSLOduration=3.7161654029999998 podStartE2EDuration="46.385366009s" podCreationTimestamp="2025-11-29 07:25:27 +0000 UTC" firstStartedPulling="2025-11-29 07:25:29.904047123 +0000 UTC m=+1168.794408226" lastFinishedPulling="2025-11-29 07:26:12.573247739 +0000 UTC m=+1211.463608832" observedRunningTime="2025-11-29 07:26:13.379095045 +0000 UTC m=+1212.269456148" watchObservedRunningTime="2025-11-29 07:26:13.385366009 +0000 UTC m=+1212.275727112" Nov 29 07:26:13 crc kubenswrapper[4731]: I1129 07:26:13.770235 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 29 07:26:13 crc kubenswrapper[4731]: I1129 07:26:13.770309 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 29 07:26:13 crc kubenswrapper[4731]: I1129 07:26:13.822638 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" 
Nov 29 07:26:13 crc kubenswrapper[4731]: I1129 07:26:13.826429 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 29 07:26:14 crc kubenswrapper[4731]: I1129 07:26:14.373207 4731 generic.go:334] "Generic (PLEG): container finished" podID="dbe696e8-b9af-4710-a81f-4fb69481cf3b" containerID="012d7698d223529fd48395017381af41ba5acdf3fc9fe83a15e328727eaaafff" exitCode=0 Nov 29 07:26:14 crc kubenswrapper[4731]: I1129 07:26:14.373676 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-j6pdq" event={"ID":"dbe696e8-b9af-4710-a81f-4fb69481cf3b","Type":"ContainerDied","Data":"012d7698d223529fd48395017381af41ba5acdf3fc9fe83a15e328727eaaafff"} Nov 29 07:26:14 crc kubenswrapper[4731]: I1129 07:26:14.373754 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 29 07:26:14 crc kubenswrapper[4731]: I1129 07:26:14.374041 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 29 07:26:14 crc kubenswrapper[4731]: I1129 07:26:14.637686 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 29 07:26:14 crc kubenswrapper[4731]: I1129 07:26:14.638105 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 29 07:26:14 crc kubenswrapper[4731]: I1129 07:26:14.715702 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 29 07:26:14 crc kubenswrapper[4731]: I1129 07:26:14.753983 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 29 07:26:15 crc kubenswrapper[4731]: I1129 07:26:15.384539 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/glance-default-external-api-0" Nov 29 07:26:15 crc kubenswrapper[4731]: I1129 07:26:15.384615 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 29 07:26:16 crc kubenswrapper[4731]: I1129 07:26:16.251586 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-84cd78f644-7wncn" Nov 29 07:26:16 crc kubenswrapper[4731]: I1129 07:26:16.252077 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-84cd78f644-7wncn" Nov 29 07:26:16 crc kubenswrapper[4731]: I1129 07:26:16.253881 4731 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-84cd78f644-7wncn" podUID="bf7f6cfb-9c72-4be9-9177-cd14712e1c1e" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused" Nov 29 07:26:16 crc kubenswrapper[4731]: I1129 07:26:16.360302 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5fcdbcfb48-gmbcm" Nov 29 07:26:16 crc kubenswrapper[4731]: I1129 07:26:16.360381 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5fcdbcfb48-gmbcm" Nov 29 07:26:17 crc kubenswrapper[4731]: I1129 07:26:17.132063 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 29 07:26:17 crc kubenswrapper[4731]: I1129 07:26:17.132280 4731 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 29 07:26:17 crc kubenswrapper[4731]: I1129 07:26:17.133853 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 29 07:26:17 crc kubenswrapper[4731]: I1129 07:26:17.650618 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-j6pdq" Nov 29 07:26:17 crc kubenswrapper[4731]: I1129 07:26:17.707397 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9sb2\" (UniqueName: \"kubernetes.io/projected/dbe696e8-b9af-4710-a81f-4fb69481cf3b-kube-api-access-f9sb2\") pod \"dbe696e8-b9af-4710-a81f-4fb69481cf3b\" (UID: \"dbe696e8-b9af-4710-a81f-4fb69481cf3b\") " Nov 29 07:26:17 crc kubenswrapper[4731]: I1129 07:26:17.707465 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbe696e8-b9af-4710-a81f-4fb69481cf3b-config-data\") pod \"dbe696e8-b9af-4710-a81f-4fb69481cf3b\" (UID: \"dbe696e8-b9af-4710-a81f-4fb69481cf3b\") " Nov 29 07:26:17 crc kubenswrapper[4731]: I1129 07:26:17.707526 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbe696e8-b9af-4710-a81f-4fb69481cf3b-combined-ca-bundle\") pod \"dbe696e8-b9af-4710-a81f-4fb69481cf3b\" (UID: \"dbe696e8-b9af-4710-a81f-4fb69481cf3b\") " Nov 29 07:26:17 crc kubenswrapper[4731]: I1129 07:26:17.707599 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/dbe696e8-b9af-4710-a81f-4fb69481cf3b-credential-keys\") pod \"dbe696e8-b9af-4710-a81f-4fb69481cf3b\" (UID: \"dbe696e8-b9af-4710-a81f-4fb69481cf3b\") " Nov 29 07:26:17 crc kubenswrapper[4731]: I1129 07:26:17.707693 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/dbe696e8-b9af-4710-a81f-4fb69481cf3b-fernet-keys\") pod \"dbe696e8-b9af-4710-a81f-4fb69481cf3b\" (UID: \"dbe696e8-b9af-4710-a81f-4fb69481cf3b\") " Nov 29 07:26:17 crc kubenswrapper[4731]: I1129 07:26:17.707775 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/dbe696e8-b9af-4710-a81f-4fb69481cf3b-scripts\") pod \"dbe696e8-b9af-4710-a81f-4fb69481cf3b\" (UID: \"dbe696e8-b9af-4710-a81f-4fb69481cf3b\") " Nov 29 07:26:17 crc kubenswrapper[4731]: I1129 07:26:17.716902 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbe696e8-b9af-4710-a81f-4fb69481cf3b-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "dbe696e8-b9af-4710-a81f-4fb69481cf3b" (UID: "dbe696e8-b9af-4710-a81f-4fb69481cf3b"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:26:17 crc kubenswrapper[4731]: I1129 07:26:17.719807 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbe696e8-b9af-4710-a81f-4fb69481cf3b-kube-api-access-f9sb2" (OuterVolumeSpecName: "kube-api-access-f9sb2") pod "dbe696e8-b9af-4710-a81f-4fb69481cf3b" (UID: "dbe696e8-b9af-4710-a81f-4fb69481cf3b"). InnerVolumeSpecName "kube-api-access-f9sb2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:26:17 crc kubenswrapper[4731]: I1129 07:26:17.720528 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbe696e8-b9af-4710-a81f-4fb69481cf3b-scripts" (OuterVolumeSpecName: "scripts") pod "dbe696e8-b9af-4710-a81f-4fb69481cf3b" (UID: "dbe696e8-b9af-4710-a81f-4fb69481cf3b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:26:17 crc kubenswrapper[4731]: I1129 07:26:17.734928 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbe696e8-b9af-4710-a81f-4fb69481cf3b-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "dbe696e8-b9af-4710-a81f-4fb69481cf3b" (UID: "dbe696e8-b9af-4710-a81f-4fb69481cf3b"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:26:17 crc kubenswrapper[4731]: I1129 07:26:17.754992 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbe696e8-b9af-4710-a81f-4fb69481cf3b-config-data" (OuterVolumeSpecName: "config-data") pod "dbe696e8-b9af-4710-a81f-4fb69481cf3b" (UID: "dbe696e8-b9af-4710-a81f-4fb69481cf3b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:26:17 crc kubenswrapper[4731]: I1129 07:26:17.758907 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbe696e8-b9af-4710-a81f-4fb69481cf3b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dbe696e8-b9af-4710-a81f-4fb69481cf3b" (UID: "dbe696e8-b9af-4710-a81f-4fb69481cf3b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:26:17 crc kubenswrapper[4731]: I1129 07:26:17.809383 4731 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/dbe696e8-b9af-4710-a81f-4fb69481cf3b-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:17 crc kubenswrapper[4731]: I1129 07:26:17.809422 4731 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dbe696e8-b9af-4710-a81f-4fb69481cf3b-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:17 crc kubenswrapper[4731]: I1129 07:26:17.809433 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f9sb2\" (UniqueName: \"kubernetes.io/projected/dbe696e8-b9af-4710-a81f-4fb69481cf3b-kube-api-access-f9sb2\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:17 crc kubenswrapper[4731]: I1129 07:26:17.809445 4731 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbe696e8-b9af-4710-a81f-4fb69481cf3b-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:17 crc 
kubenswrapper[4731]: I1129 07:26:17.809454 4731 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbe696e8-b9af-4710-a81f-4fb69481cf3b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:17 crc kubenswrapper[4731]: I1129 07:26:17.809464 4731 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/dbe696e8-b9af-4710-a81f-4fb69481cf3b-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:18 crc kubenswrapper[4731]: I1129 07:26:18.245920 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 29 07:26:18 crc kubenswrapper[4731]: I1129 07:26:18.246647 4731 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 29 07:26:18 crc kubenswrapper[4731]: I1129 07:26:18.253088 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 29 07:26:18 crc kubenswrapper[4731]: I1129 07:26:18.460050 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-j6pdq" Nov 29 07:26:18 crc kubenswrapper[4731]: I1129 07:26:18.460069 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-j6pdq" event={"ID":"dbe696e8-b9af-4710-a81f-4fb69481cf3b","Type":"ContainerDied","Data":"55757c4c87a849fd659eb41f0ee02ac269c7b3699a9e59efff883f645caa87d8"} Nov 29 07:26:18 crc kubenswrapper[4731]: I1129 07:26:18.460146 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55757c4c87a849fd659eb41f0ee02ac269c7b3699a9e59efff883f645caa87d8" Nov 29 07:26:18 crc kubenswrapper[4731]: I1129 07:26:18.465286 4731 generic.go:334] "Generic (PLEG): container finished" podID="a4589d89-a761-4510-bd4c-55a6a3e620c4" containerID="ba4cba12c8c3bee5b3db297483a31412626bf72e41d5950966b7bcad6321e931" exitCode=0 Nov 29 07:26:18 crc kubenswrapper[4731]: I1129 07:26:18.466145 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-fbk9s" event={"ID":"a4589d89-a761-4510-bd4c-55a6a3e620c4","Type":"ContainerDied","Data":"ba4cba12c8c3bee5b3db297483a31412626bf72e41d5950966b7bcad6321e931"} Nov 29 07:26:18 crc kubenswrapper[4731]: I1129 07:26:18.817282 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-74694f6999-x4dvv"] Nov 29 07:26:18 crc kubenswrapper[4731]: E1129 07:26:18.822031 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbe696e8-b9af-4710-a81f-4fb69481cf3b" containerName="keystone-bootstrap" Nov 29 07:26:18 crc kubenswrapper[4731]: I1129 07:26:18.822050 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbe696e8-b9af-4710-a81f-4fb69481cf3b" containerName="keystone-bootstrap" Nov 29 07:26:18 crc kubenswrapper[4731]: I1129 07:26:18.822237 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbe696e8-b9af-4710-a81f-4fb69481cf3b" containerName="keystone-bootstrap" Nov 29 07:26:18 crc kubenswrapper[4731]: I1129 07:26:18.822889 4731 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-74694f6999-x4dvv" Nov 29 07:26:18 crc kubenswrapper[4731]: I1129 07:26:18.828458 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-4wnsc" Nov 29 07:26:18 crc kubenswrapper[4731]: I1129 07:26:18.828810 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 29 07:26:18 crc kubenswrapper[4731]: I1129 07:26:18.829020 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 29 07:26:18 crc kubenswrapper[4731]: I1129 07:26:18.829127 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 29 07:26:18 crc kubenswrapper[4731]: I1129 07:26:18.829281 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Nov 29 07:26:18 crc kubenswrapper[4731]: I1129 07:26:18.855639 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Nov 29 07:26:18 crc kubenswrapper[4731]: I1129 07:26:18.928885 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-74694f6999-x4dvv"] Nov 29 07:26:18 crc kubenswrapper[4731]: I1129 07:26:18.934702 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2b8bf35f-55bb-445b-b99f-5a418577d482-fernet-keys\") pod \"keystone-74694f6999-x4dvv\" (UID: \"2b8bf35f-55bb-445b-b99f-5a418577d482\") " pod="openstack/keystone-74694f6999-x4dvv" Nov 29 07:26:18 crc kubenswrapper[4731]: I1129 07:26:18.934918 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b8bf35f-55bb-445b-b99f-5a418577d482-scripts\") pod \"keystone-74694f6999-x4dvv\" (UID: 
\"2b8bf35f-55bb-445b-b99f-5a418577d482\") " pod="openstack/keystone-74694f6999-x4dvv" Nov 29 07:26:18 crc kubenswrapper[4731]: I1129 07:26:18.934962 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8j45\" (UniqueName: \"kubernetes.io/projected/2b8bf35f-55bb-445b-b99f-5a418577d482-kube-api-access-g8j45\") pod \"keystone-74694f6999-x4dvv\" (UID: \"2b8bf35f-55bb-445b-b99f-5a418577d482\") " pod="openstack/keystone-74694f6999-x4dvv" Nov 29 07:26:18 crc kubenswrapper[4731]: I1129 07:26:18.935050 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b8bf35f-55bb-445b-b99f-5a418577d482-public-tls-certs\") pod \"keystone-74694f6999-x4dvv\" (UID: \"2b8bf35f-55bb-445b-b99f-5a418577d482\") " pod="openstack/keystone-74694f6999-x4dvv" Nov 29 07:26:18 crc kubenswrapper[4731]: I1129 07:26:18.935098 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b8bf35f-55bb-445b-b99f-5a418577d482-combined-ca-bundle\") pod \"keystone-74694f6999-x4dvv\" (UID: \"2b8bf35f-55bb-445b-b99f-5a418577d482\") " pod="openstack/keystone-74694f6999-x4dvv" Nov 29 07:26:18 crc kubenswrapper[4731]: I1129 07:26:18.935204 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2b8bf35f-55bb-445b-b99f-5a418577d482-credential-keys\") pod \"keystone-74694f6999-x4dvv\" (UID: \"2b8bf35f-55bb-445b-b99f-5a418577d482\") " pod="openstack/keystone-74694f6999-x4dvv" Nov 29 07:26:18 crc kubenswrapper[4731]: I1129 07:26:18.935369 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b8bf35f-55bb-445b-b99f-5a418577d482-config-data\") pod 
\"keystone-74694f6999-x4dvv\" (UID: \"2b8bf35f-55bb-445b-b99f-5a418577d482\") " pod="openstack/keystone-74694f6999-x4dvv" Nov 29 07:26:18 crc kubenswrapper[4731]: I1129 07:26:18.935438 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b8bf35f-55bb-445b-b99f-5a418577d482-internal-tls-certs\") pod \"keystone-74694f6999-x4dvv\" (UID: \"2b8bf35f-55bb-445b-b99f-5a418577d482\") " pod="openstack/keystone-74694f6999-x4dvv" Nov 29 07:26:19 crc kubenswrapper[4731]: I1129 07:26:19.037847 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b8bf35f-55bb-445b-b99f-5a418577d482-public-tls-certs\") pod \"keystone-74694f6999-x4dvv\" (UID: \"2b8bf35f-55bb-445b-b99f-5a418577d482\") " pod="openstack/keystone-74694f6999-x4dvv" Nov 29 07:26:19 crc kubenswrapper[4731]: I1129 07:26:19.037936 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b8bf35f-55bb-445b-b99f-5a418577d482-combined-ca-bundle\") pod \"keystone-74694f6999-x4dvv\" (UID: \"2b8bf35f-55bb-445b-b99f-5a418577d482\") " pod="openstack/keystone-74694f6999-x4dvv" Nov 29 07:26:19 crc kubenswrapper[4731]: I1129 07:26:19.037991 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2b8bf35f-55bb-445b-b99f-5a418577d482-credential-keys\") pod \"keystone-74694f6999-x4dvv\" (UID: \"2b8bf35f-55bb-445b-b99f-5a418577d482\") " pod="openstack/keystone-74694f6999-x4dvv" Nov 29 07:26:19 crc kubenswrapper[4731]: I1129 07:26:19.038068 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b8bf35f-55bb-445b-b99f-5a418577d482-config-data\") pod \"keystone-74694f6999-x4dvv\" (UID: 
\"2b8bf35f-55bb-445b-b99f-5a418577d482\") " pod="openstack/keystone-74694f6999-x4dvv" Nov 29 07:26:19 crc kubenswrapper[4731]: I1129 07:26:19.038106 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b8bf35f-55bb-445b-b99f-5a418577d482-internal-tls-certs\") pod \"keystone-74694f6999-x4dvv\" (UID: \"2b8bf35f-55bb-445b-b99f-5a418577d482\") " pod="openstack/keystone-74694f6999-x4dvv" Nov 29 07:26:19 crc kubenswrapper[4731]: I1129 07:26:19.038137 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2b8bf35f-55bb-445b-b99f-5a418577d482-fernet-keys\") pod \"keystone-74694f6999-x4dvv\" (UID: \"2b8bf35f-55bb-445b-b99f-5a418577d482\") " pod="openstack/keystone-74694f6999-x4dvv" Nov 29 07:26:19 crc kubenswrapper[4731]: I1129 07:26:19.038196 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b8bf35f-55bb-445b-b99f-5a418577d482-scripts\") pod \"keystone-74694f6999-x4dvv\" (UID: \"2b8bf35f-55bb-445b-b99f-5a418577d482\") " pod="openstack/keystone-74694f6999-x4dvv" Nov 29 07:26:19 crc kubenswrapper[4731]: I1129 07:26:19.038215 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8j45\" (UniqueName: \"kubernetes.io/projected/2b8bf35f-55bb-445b-b99f-5a418577d482-kube-api-access-g8j45\") pod \"keystone-74694f6999-x4dvv\" (UID: \"2b8bf35f-55bb-445b-b99f-5a418577d482\") " pod="openstack/keystone-74694f6999-x4dvv" Nov 29 07:26:19 crc kubenswrapper[4731]: I1129 07:26:19.048965 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b8bf35f-55bb-445b-b99f-5a418577d482-config-data\") pod \"keystone-74694f6999-x4dvv\" (UID: \"2b8bf35f-55bb-445b-b99f-5a418577d482\") " pod="openstack/keystone-74694f6999-x4dvv" Nov 29 07:26:19 crc 
kubenswrapper[4731]: I1129 07:26:19.051818 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b8bf35f-55bb-445b-b99f-5a418577d482-public-tls-certs\") pod \"keystone-74694f6999-x4dvv\" (UID: \"2b8bf35f-55bb-445b-b99f-5a418577d482\") " pod="openstack/keystone-74694f6999-x4dvv" Nov 29 07:26:19 crc kubenswrapper[4731]: I1129 07:26:19.052486 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2b8bf35f-55bb-445b-b99f-5a418577d482-credential-keys\") pod \"keystone-74694f6999-x4dvv\" (UID: \"2b8bf35f-55bb-445b-b99f-5a418577d482\") " pod="openstack/keystone-74694f6999-x4dvv" Nov 29 07:26:19 crc kubenswrapper[4731]: I1129 07:26:19.059479 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2b8bf35f-55bb-445b-b99f-5a418577d482-fernet-keys\") pod \"keystone-74694f6999-x4dvv\" (UID: \"2b8bf35f-55bb-445b-b99f-5a418577d482\") " pod="openstack/keystone-74694f6999-x4dvv" Nov 29 07:26:19 crc kubenswrapper[4731]: I1129 07:26:19.059595 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b8bf35f-55bb-445b-b99f-5a418577d482-internal-tls-certs\") pod \"keystone-74694f6999-x4dvv\" (UID: \"2b8bf35f-55bb-445b-b99f-5a418577d482\") " pod="openstack/keystone-74694f6999-x4dvv" Nov 29 07:26:19 crc kubenswrapper[4731]: I1129 07:26:19.080482 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b8bf35f-55bb-445b-b99f-5a418577d482-combined-ca-bundle\") pod \"keystone-74694f6999-x4dvv\" (UID: \"2b8bf35f-55bb-445b-b99f-5a418577d482\") " pod="openstack/keystone-74694f6999-x4dvv" Nov 29 07:26:19 crc kubenswrapper[4731]: I1129 07:26:19.083264 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-g8j45\" (UniqueName: \"kubernetes.io/projected/2b8bf35f-55bb-445b-b99f-5a418577d482-kube-api-access-g8j45\") pod \"keystone-74694f6999-x4dvv\" (UID: \"2b8bf35f-55bb-445b-b99f-5a418577d482\") " pod="openstack/keystone-74694f6999-x4dvv" Nov 29 07:26:19 crc kubenswrapper[4731]: I1129 07:26:19.091253 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b8bf35f-55bb-445b-b99f-5a418577d482-scripts\") pod \"keystone-74694f6999-x4dvv\" (UID: \"2b8bf35f-55bb-445b-b99f-5a418577d482\") " pod="openstack/keystone-74694f6999-x4dvv" Nov 29 07:26:19 crc kubenswrapper[4731]: I1129 07:26:19.154449 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-74694f6999-x4dvv" Nov 29 07:26:19 crc kubenswrapper[4731]: I1129 07:26:19.484394 4731 generic.go:334] "Generic (PLEG): container finished" podID="13bcd648-c6e2-4b6e-a660-da2f47f09a06" containerID="e1fd7555949e5a475b2a562e40c9e94428dd257ddf922b4101954f25369688f3" exitCode=0 Nov 29 07:26:19 crc kubenswrapper[4731]: I1129 07:26:19.484869 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-x6bxr" event={"ID":"13bcd648-c6e2-4b6e-a660-da2f47f09a06","Type":"ContainerDied","Data":"e1fd7555949e5a475b2a562e40c9e94428dd257ddf922b4101954f25369688f3"} Nov 29 07:26:20 crc kubenswrapper[4731]: I1129 07:26:20.554934 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-fbk9s" event={"ID":"a4589d89-a761-4510-bd4c-55a6a3e620c4","Type":"ContainerDied","Data":"61c4b4d82607b50c5ed4236f757d643e8a941cf75f02d4770f893cff31661f79"} Nov 29 07:26:20 crc kubenswrapper[4731]: I1129 07:26:20.555762 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61c4b4d82607b50c5ed4236f757d643e8a941cf75f02d4770f893cff31661f79" Nov 29 07:26:20 crc kubenswrapper[4731]: I1129 07:26:20.569319 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-fbk9s" Nov 29 07:26:20 crc kubenswrapper[4731]: I1129 07:26:20.680405 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dk5q5\" (UniqueName: \"kubernetes.io/projected/a4589d89-a761-4510-bd4c-55a6a3e620c4-kube-api-access-dk5q5\") pod \"a4589d89-a761-4510-bd4c-55a6a3e620c4\" (UID: \"a4589d89-a761-4510-bd4c-55a6a3e620c4\") " Nov 29 07:26:20 crc kubenswrapper[4731]: I1129 07:26:20.680535 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a4589d89-a761-4510-bd4c-55a6a3e620c4-db-sync-config-data\") pod \"a4589d89-a761-4510-bd4c-55a6a3e620c4\" (UID: \"a4589d89-a761-4510-bd4c-55a6a3e620c4\") " Nov 29 07:26:20 crc kubenswrapper[4731]: I1129 07:26:20.680589 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4589d89-a761-4510-bd4c-55a6a3e620c4-combined-ca-bundle\") pod \"a4589d89-a761-4510-bd4c-55a6a3e620c4\" (UID: \"a4589d89-a761-4510-bd4c-55a6a3e620c4\") " Nov 29 07:26:20 crc kubenswrapper[4731]: I1129 07:26:20.688510 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4589d89-a761-4510-bd4c-55a6a3e620c4-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "a4589d89-a761-4510-bd4c-55a6a3e620c4" (UID: "a4589d89-a761-4510-bd4c-55a6a3e620c4"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:26:20 crc kubenswrapper[4731]: I1129 07:26:20.691778 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4589d89-a761-4510-bd4c-55a6a3e620c4-kube-api-access-dk5q5" (OuterVolumeSpecName: "kube-api-access-dk5q5") pod "a4589d89-a761-4510-bd4c-55a6a3e620c4" (UID: "a4589d89-a761-4510-bd4c-55a6a3e620c4"). 
InnerVolumeSpecName "kube-api-access-dk5q5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:26:20 crc kubenswrapper[4731]: I1129 07:26:20.722859 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4589d89-a761-4510-bd4c-55a6a3e620c4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a4589d89-a761-4510-bd4c-55a6a3e620c4" (UID: "a4589d89-a761-4510-bd4c-55a6a3e620c4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:26:20 crc kubenswrapper[4731]: I1129 07:26:20.783620 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dk5q5\" (UniqueName: \"kubernetes.io/projected/a4589d89-a761-4510-bd4c-55a6a3e620c4-kube-api-access-dk5q5\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:20 crc kubenswrapper[4731]: I1129 07:26:20.783679 4731 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a4589d89-a761-4510-bd4c-55a6a3e620c4-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:20 crc kubenswrapper[4731]: I1129 07:26:20.783692 4731 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4589d89-a761-4510-bd4c-55a6a3e620c4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:20 crc kubenswrapper[4731]: I1129 07:26:20.845702 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-x6bxr" Nov 29 07:26:20 crc kubenswrapper[4731]: I1129 07:26:20.919598 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-74694f6999-x4dvv"] Nov 29 07:26:20 crc kubenswrapper[4731]: W1129 07:26:20.925917 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b8bf35f_55bb_445b_b99f_5a418577d482.slice/crio-076e5803b5a8134801aecf99aa04b196e7828f32373b38527beefd381c3713a5 WatchSource:0}: Error finding container 076e5803b5a8134801aecf99aa04b196e7828f32373b38527beefd381c3713a5: Status 404 returned error can't find the container with id 076e5803b5a8134801aecf99aa04b196e7828f32373b38527beefd381c3713a5 Nov 29 07:26:20 crc kubenswrapper[4731]: I1129 07:26:20.987748 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13bcd648-c6e2-4b6e-a660-da2f47f09a06-config-data\") pod \"13bcd648-c6e2-4b6e-a660-da2f47f09a06\" (UID: \"13bcd648-c6e2-4b6e-a660-da2f47f09a06\") " Nov 29 07:26:20 crc kubenswrapper[4731]: I1129 07:26:20.987856 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/13bcd648-c6e2-4b6e-a660-da2f47f09a06-scripts\") pod \"13bcd648-c6e2-4b6e-a660-da2f47f09a06\" (UID: \"13bcd648-c6e2-4b6e-a660-da2f47f09a06\") " Nov 29 07:26:20 crc kubenswrapper[4731]: I1129 07:26:20.987929 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hd5r7\" (UniqueName: \"kubernetes.io/projected/13bcd648-c6e2-4b6e-a660-da2f47f09a06-kube-api-access-hd5r7\") pod \"13bcd648-c6e2-4b6e-a660-da2f47f09a06\" (UID: \"13bcd648-c6e2-4b6e-a660-da2f47f09a06\") " Nov 29 07:26:20 crc kubenswrapper[4731]: I1129 07:26:20.988024 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/13bcd648-c6e2-4b6e-a660-da2f47f09a06-logs\") pod \"13bcd648-c6e2-4b6e-a660-da2f47f09a06\" (UID: \"13bcd648-c6e2-4b6e-a660-da2f47f09a06\") " Nov 29 07:26:20 crc kubenswrapper[4731]: I1129 07:26:20.988105 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13bcd648-c6e2-4b6e-a660-da2f47f09a06-combined-ca-bundle\") pod \"13bcd648-c6e2-4b6e-a660-da2f47f09a06\" (UID: \"13bcd648-c6e2-4b6e-a660-da2f47f09a06\") " Nov 29 07:26:20 crc kubenswrapper[4731]: I1129 07:26:20.993392 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13bcd648-c6e2-4b6e-a660-da2f47f09a06-scripts" (OuterVolumeSpecName: "scripts") pod "13bcd648-c6e2-4b6e-a660-da2f47f09a06" (UID: "13bcd648-c6e2-4b6e-a660-da2f47f09a06"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:26:20 crc kubenswrapper[4731]: I1129 07:26:20.998120 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/13bcd648-c6e2-4b6e-a660-da2f47f09a06-logs" (OuterVolumeSpecName: "logs") pod "13bcd648-c6e2-4b6e-a660-da2f47f09a06" (UID: "13bcd648-c6e2-4b6e-a660-da2f47f09a06"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:26:20 crc kubenswrapper[4731]: I1129 07:26:20.998121 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13bcd648-c6e2-4b6e-a660-da2f47f09a06-kube-api-access-hd5r7" (OuterVolumeSpecName: "kube-api-access-hd5r7") pod "13bcd648-c6e2-4b6e-a660-da2f47f09a06" (UID: "13bcd648-c6e2-4b6e-a660-da2f47f09a06"). InnerVolumeSpecName "kube-api-access-hd5r7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:26:21 crc kubenswrapper[4731]: I1129 07:26:21.021446 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13bcd648-c6e2-4b6e-a660-da2f47f09a06-config-data" (OuterVolumeSpecName: "config-data") pod "13bcd648-c6e2-4b6e-a660-da2f47f09a06" (UID: "13bcd648-c6e2-4b6e-a660-da2f47f09a06"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:26:21 crc kubenswrapper[4731]: I1129 07:26:21.029481 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13bcd648-c6e2-4b6e-a660-da2f47f09a06-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "13bcd648-c6e2-4b6e-a660-da2f47f09a06" (UID: "13bcd648-c6e2-4b6e-a660-da2f47f09a06"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:26:21 crc kubenswrapper[4731]: I1129 07:26:21.091641 4731 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/13bcd648-c6e2-4b6e-a660-da2f47f09a06-logs\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:21 crc kubenswrapper[4731]: I1129 07:26:21.091712 4731 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13bcd648-c6e2-4b6e-a660-da2f47f09a06-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:21 crc kubenswrapper[4731]: I1129 07:26:21.091737 4731 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13bcd648-c6e2-4b6e-a660-da2f47f09a06-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:21 crc kubenswrapper[4731]: I1129 07:26:21.091756 4731 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/13bcd648-c6e2-4b6e-a660-da2f47f09a06-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:21 crc 
kubenswrapper[4731]: I1129 07:26:21.091774 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hd5r7\" (UniqueName: \"kubernetes.io/projected/13bcd648-c6e2-4b6e-a660-da2f47f09a06-kube-api-access-hd5r7\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:21 crc kubenswrapper[4731]: I1129 07:26:21.569355 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-x6bxr" event={"ID":"13bcd648-c6e2-4b6e-a660-da2f47f09a06","Type":"ContainerDied","Data":"9f963e923128ed454e90c2ad79112d4e1cff356ffce67859ea06af0195952097"} Nov 29 07:26:21 crc kubenswrapper[4731]: I1129 07:26:21.569933 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f963e923128ed454e90c2ad79112d4e1cff356ffce67859ea06af0195952097" Nov 29 07:26:21 crc kubenswrapper[4731]: I1129 07:26:21.570086 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-x6bxr" Nov 29 07:26:21 crc kubenswrapper[4731]: I1129 07:26:21.576434 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"93f84d51-daf8-4c30-ba2c-e5d8aff3432c","Type":"ContainerStarted","Data":"efc3c73ee1eb2172d2e8d30dc832513f4552861add81034a4aa6c8e1afe42474"} Nov 29 07:26:21 crc kubenswrapper[4731]: I1129 07:26:21.578441 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-74694f6999-x4dvv" event={"ID":"2b8bf35f-55bb-445b-b99f-5a418577d482","Type":"ContainerStarted","Data":"060ae31f178354bc864e078b7a0235d2da912aa8b8e54deeab85c1c0af4c7222"} Nov 29 07:26:21 crc kubenswrapper[4731]: I1129 07:26:21.578472 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-74694f6999-x4dvv" event={"ID":"2b8bf35f-55bb-445b-b99f-5a418577d482","Type":"ContainerStarted","Data":"076e5803b5a8134801aecf99aa04b196e7828f32373b38527beefd381c3713a5"} Nov 29 07:26:21 crc kubenswrapper[4731]: I1129 07:26:21.578518 4731 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-fbk9s" Nov 29 07:26:21 crc kubenswrapper[4731]: I1129 07:26:21.624624 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-74694f6999-x4dvv" podStartSLOduration=3.624578897 podStartE2EDuration="3.624578897s" podCreationTimestamp="2025-11-29 07:26:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:26:21.607387645 +0000 UTC m=+1220.497748758" watchObservedRunningTime="2025-11-29 07:26:21.624578897 +0000 UTC m=+1220.514940000" Nov 29 07:26:21 crc kubenswrapper[4731]: I1129 07:26:21.645120 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-bdbcc6468-k4knd"] Nov 29 07:26:21 crc kubenswrapper[4731]: E1129 07:26:21.645848 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4589d89-a761-4510-bd4c-55a6a3e620c4" containerName="barbican-db-sync" Nov 29 07:26:21 crc kubenswrapper[4731]: I1129 07:26:21.645872 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4589d89-a761-4510-bd4c-55a6a3e620c4" containerName="barbican-db-sync" Nov 29 07:26:21 crc kubenswrapper[4731]: E1129 07:26:21.645895 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13bcd648-c6e2-4b6e-a660-da2f47f09a06" containerName="placement-db-sync" Nov 29 07:26:21 crc kubenswrapper[4731]: I1129 07:26:21.645905 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="13bcd648-c6e2-4b6e-a660-da2f47f09a06" containerName="placement-db-sync" Nov 29 07:26:21 crc kubenswrapper[4731]: I1129 07:26:21.646152 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="13bcd648-c6e2-4b6e-a660-da2f47f09a06" containerName="placement-db-sync" Nov 29 07:26:21 crc kubenswrapper[4731]: I1129 07:26:21.646179 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4589d89-a761-4510-bd4c-55a6a3e620c4" 
containerName="barbican-db-sync" Nov 29 07:26:21 crc kubenswrapper[4731]: I1129 07:26:21.647529 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-bdbcc6468-k4knd" Nov 29 07:26:21 crc kubenswrapper[4731]: I1129 07:26:21.650035 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Nov 29 07:26:21 crc kubenswrapper[4731]: I1129 07:26:21.653037 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-mpvkt" Nov 29 07:26:21 crc kubenswrapper[4731]: I1129 07:26:21.653468 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Nov 29 07:26:21 crc kubenswrapper[4731]: I1129 07:26:21.653941 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Nov 29 07:26:21 crc kubenswrapper[4731]: I1129 07:26:21.654260 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Nov 29 07:26:21 crc kubenswrapper[4731]: I1129 07:26:21.678833 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-bdbcc6468-k4knd"] Nov 29 07:26:21 crc kubenswrapper[4731]: I1129 07:26:21.813676 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db509226-a015-4c26-b8a8-80421cc7d661-combined-ca-bundle\") pod \"placement-bdbcc6468-k4knd\" (UID: \"db509226-a015-4c26-b8a8-80421cc7d661\") " pod="openstack/placement-bdbcc6468-k4knd" Nov 29 07:26:21 crc kubenswrapper[4731]: I1129 07:26:21.814042 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db509226-a015-4c26-b8a8-80421cc7d661-logs\") pod \"placement-bdbcc6468-k4knd\" (UID: \"db509226-a015-4c26-b8a8-80421cc7d661\") " 
pod="openstack/placement-bdbcc6468-k4knd" Nov 29 07:26:21 crc kubenswrapper[4731]: I1129 07:26:21.815406 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/db509226-a015-4c26-b8a8-80421cc7d661-public-tls-certs\") pod \"placement-bdbcc6468-k4knd\" (UID: \"db509226-a015-4c26-b8a8-80421cc7d661\") " pod="openstack/placement-bdbcc6468-k4knd" Nov 29 07:26:21 crc kubenswrapper[4731]: I1129 07:26:21.815467 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db509226-a015-4c26-b8a8-80421cc7d661-config-data\") pod \"placement-bdbcc6468-k4knd\" (UID: \"db509226-a015-4c26-b8a8-80421cc7d661\") " pod="openstack/placement-bdbcc6468-k4knd" Nov 29 07:26:21 crc kubenswrapper[4731]: I1129 07:26:21.815531 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db509226-a015-4c26-b8a8-80421cc7d661-scripts\") pod \"placement-bdbcc6468-k4knd\" (UID: \"db509226-a015-4c26-b8a8-80421cc7d661\") " pod="openstack/placement-bdbcc6468-k4knd" Nov 29 07:26:21 crc kubenswrapper[4731]: I1129 07:26:21.815602 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/db509226-a015-4c26-b8a8-80421cc7d661-internal-tls-certs\") pod \"placement-bdbcc6468-k4knd\" (UID: \"db509226-a015-4c26-b8a8-80421cc7d661\") " pod="openstack/placement-bdbcc6468-k4knd" Nov 29 07:26:21 crc kubenswrapper[4731]: I1129 07:26:21.815716 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tnrq\" (UniqueName: \"kubernetes.io/projected/db509226-a015-4c26-b8a8-80421cc7d661-kube-api-access-5tnrq\") pod \"placement-bdbcc6468-k4knd\" (UID: 
\"db509226-a015-4c26-b8a8-80421cc7d661\") " pod="openstack/placement-bdbcc6468-k4knd" Nov 29 07:26:21 crc kubenswrapper[4731]: I1129 07:26:21.852976 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-64c66558f5-qcqwg"] Nov 29 07:26:21 crc kubenswrapper[4731]: I1129 07:26:21.860471 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-64c66558f5-qcqwg" Nov 29 07:26:21 crc kubenswrapper[4731]: I1129 07:26:21.867153 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-txw5r" Nov 29 07:26:21 crc kubenswrapper[4731]: I1129 07:26:21.867381 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Nov 29 07:26:21 crc kubenswrapper[4731]: I1129 07:26:21.868090 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Nov 29 07:26:21 crc kubenswrapper[4731]: I1129 07:26:21.884290 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-c78b8bc9d-8prwv"] Nov 29 07:26:21 crc kubenswrapper[4731]: I1129 07:26:21.902803 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-c78b8bc9d-8prwv" Nov 29 07:26:21 crc kubenswrapper[4731]: I1129 07:26:21.920875 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Nov 29 07:26:21 crc kubenswrapper[4731]: I1129 07:26:21.924131 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db509226-a015-4c26-b8a8-80421cc7d661-logs\") pod \"placement-bdbcc6468-k4knd\" (UID: \"db509226-a015-4c26-b8a8-80421cc7d661\") " pod="openstack/placement-bdbcc6468-k4knd" Nov 29 07:26:21 crc kubenswrapper[4731]: I1129 07:26:21.924241 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/db509226-a015-4c26-b8a8-80421cc7d661-public-tls-certs\") pod \"placement-bdbcc6468-k4knd\" (UID: \"db509226-a015-4c26-b8a8-80421cc7d661\") " pod="openstack/placement-bdbcc6468-k4knd" Nov 29 07:26:21 crc kubenswrapper[4731]: I1129 07:26:21.924283 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db509226-a015-4c26-b8a8-80421cc7d661-config-data\") pod \"placement-bdbcc6468-k4knd\" (UID: \"db509226-a015-4c26-b8a8-80421cc7d661\") " pod="openstack/placement-bdbcc6468-k4knd" Nov 29 07:26:21 crc kubenswrapper[4731]: I1129 07:26:21.924334 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db509226-a015-4c26-b8a8-80421cc7d661-scripts\") pod \"placement-bdbcc6468-k4knd\" (UID: \"db509226-a015-4c26-b8a8-80421cc7d661\") " pod="openstack/placement-bdbcc6468-k4knd" Nov 29 07:26:21 crc kubenswrapper[4731]: I1129 07:26:21.924370 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/db509226-a015-4c26-b8a8-80421cc7d661-internal-tls-certs\") pod \"placement-bdbcc6468-k4knd\" (UID: \"db509226-a015-4c26-b8a8-80421cc7d661\") " pod="openstack/placement-bdbcc6468-k4knd" Nov 29 07:26:21 crc kubenswrapper[4731]: I1129 07:26:21.924471 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tnrq\" (UniqueName: \"kubernetes.io/projected/db509226-a015-4c26-b8a8-80421cc7d661-kube-api-access-5tnrq\") pod \"placement-bdbcc6468-k4knd\" (UID: \"db509226-a015-4c26-b8a8-80421cc7d661\") " pod="openstack/placement-bdbcc6468-k4knd" Nov 29 07:26:21 crc kubenswrapper[4731]: I1129 07:26:21.924551 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db509226-a015-4c26-b8a8-80421cc7d661-combined-ca-bundle\") pod \"placement-bdbcc6468-k4knd\" (UID: \"db509226-a015-4c26-b8a8-80421cc7d661\") " pod="openstack/placement-bdbcc6468-k4knd" Nov 29 07:26:21 crc kubenswrapper[4731]: I1129 07:26:21.932796 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db509226-a015-4c26-b8a8-80421cc7d661-logs\") pod \"placement-bdbcc6468-k4knd\" (UID: \"db509226-a015-4c26-b8a8-80421cc7d661\") " pod="openstack/placement-bdbcc6468-k4knd" Nov 29 07:26:21 crc kubenswrapper[4731]: I1129 07:26:21.938247 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/db509226-a015-4c26-b8a8-80421cc7d661-public-tls-certs\") pod \"placement-bdbcc6468-k4knd\" (UID: \"db509226-a015-4c26-b8a8-80421cc7d661\") " pod="openstack/placement-bdbcc6468-k4knd" Nov 29 07:26:21 crc kubenswrapper[4731]: I1129 07:26:21.976876 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db509226-a015-4c26-b8a8-80421cc7d661-combined-ca-bundle\") pod 
\"placement-bdbcc6468-k4knd\" (UID: \"db509226-a015-4c26-b8a8-80421cc7d661\") " pod="openstack/placement-bdbcc6468-k4knd" Nov 29 07:26:21 crc kubenswrapper[4731]: I1129 07:26:21.980614 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-64c66558f5-qcqwg"] Nov 29 07:26:21 crc kubenswrapper[4731]: I1129 07:26:21.983362 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db509226-a015-4c26-b8a8-80421cc7d661-scripts\") pod \"placement-bdbcc6468-k4knd\" (UID: \"db509226-a015-4c26-b8a8-80421cc7d661\") " pod="openstack/placement-bdbcc6468-k4knd" Nov 29 07:26:21 crc kubenswrapper[4731]: I1129 07:26:21.990034 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tnrq\" (UniqueName: \"kubernetes.io/projected/db509226-a015-4c26-b8a8-80421cc7d661-kube-api-access-5tnrq\") pod \"placement-bdbcc6468-k4knd\" (UID: \"db509226-a015-4c26-b8a8-80421cc7d661\") " pod="openstack/placement-bdbcc6468-k4knd" Nov 29 07:26:22 crc kubenswrapper[4731]: I1129 07:26:22.029462 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9dcf660-e92e-44b6-b940-97d0cccdc187-combined-ca-bundle\") pod \"barbican-keystone-listener-c78b8bc9d-8prwv\" (UID: \"f9dcf660-e92e-44b6-b940-97d0cccdc187\") " pod="openstack/barbican-keystone-listener-c78b8bc9d-8prwv" Nov 29 07:26:22 crc kubenswrapper[4731]: I1129 07:26:22.030018 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/46e3b820-e4ea-46a6-9a98-944bf7718c56-logs\") pod \"barbican-worker-64c66558f5-qcqwg\" (UID: \"46e3b820-e4ea-46a6-9a98-944bf7718c56\") " pod="openstack/barbican-worker-64c66558f5-qcqwg" Nov 29 07:26:22 crc kubenswrapper[4731]: I1129 07:26:22.030194 4731 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9dcf660-e92e-44b6-b940-97d0cccdc187-config-data\") pod \"barbican-keystone-listener-c78b8bc9d-8prwv\" (UID: \"f9dcf660-e92e-44b6-b940-97d0cccdc187\") " pod="openstack/barbican-keystone-listener-c78b8bc9d-8prwv" Nov 29 07:26:22 crc kubenswrapper[4731]: I1129 07:26:22.030369 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46e3b820-e4ea-46a6-9a98-944bf7718c56-config-data\") pod \"barbican-worker-64c66558f5-qcqwg\" (UID: \"46e3b820-e4ea-46a6-9a98-944bf7718c56\") " pod="openstack/barbican-worker-64c66558f5-qcqwg" Nov 29 07:26:22 crc kubenswrapper[4731]: I1129 07:26:22.030450 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/46e3b820-e4ea-46a6-9a98-944bf7718c56-config-data-custom\") pod \"barbican-worker-64c66558f5-qcqwg\" (UID: \"46e3b820-e4ea-46a6-9a98-944bf7718c56\") " pod="openstack/barbican-worker-64c66558f5-qcqwg" Nov 29 07:26:22 crc kubenswrapper[4731]: I1129 07:26:22.030529 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46e3b820-e4ea-46a6-9a98-944bf7718c56-combined-ca-bundle\") pod \"barbican-worker-64c66558f5-qcqwg\" (UID: \"46e3b820-e4ea-46a6-9a98-944bf7718c56\") " pod="openstack/barbican-worker-64c66558f5-qcqwg" Nov 29 07:26:22 crc kubenswrapper[4731]: I1129 07:26:22.030643 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f9dcf660-e92e-44b6-b940-97d0cccdc187-config-data-custom\") pod \"barbican-keystone-listener-c78b8bc9d-8prwv\" (UID: \"f9dcf660-e92e-44b6-b940-97d0cccdc187\") " 
pod="openstack/barbican-keystone-listener-c78b8bc9d-8prwv" Nov 29 07:26:22 crc kubenswrapper[4731]: I1129 07:26:22.030751 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkgh4\" (UniqueName: \"kubernetes.io/projected/46e3b820-e4ea-46a6-9a98-944bf7718c56-kube-api-access-pkgh4\") pod \"barbican-worker-64c66558f5-qcqwg\" (UID: \"46e3b820-e4ea-46a6-9a98-944bf7718c56\") " pod="openstack/barbican-worker-64c66558f5-qcqwg" Nov 29 07:26:22 crc kubenswrapper[4731]: I1129 07:26:22.030962 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shxfd\" (UniqueName: \"kubernetes.io/projected/f9dcf660-e92e-44b6-b940-97d0cccdc187-kube-api-access-shxfd\") pod \"barbican-keystone-listener-c78b8bc9d-8prwv\" (UID: \"f9dcf660-e92e-44b6-b940-97d0cccdc187\") " pod="openstack/barbican-keystone-listener-c78b8bc9d-8prwv" Nov 29 07:26:22 crc kubenswrapper[4731]: I1129 07:26:22.031038 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f9dcf660-e92e-44b6-b940-97d0cccdc187-logs\") pod \"barbican-keystone-listener-c78b8bc9d-8prwv\" (UID: \"f9dcf660-e92e-44b6-b940-97d0cccdc187\") " pod="openstack/barbican-keystone-listener-c78b8bc9d-8prwv" Nov 29 07:26:22 crc kubenswrapper[4731]: I1129 07:26:22.030436 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/db509226-a015-4c26-b8a8-80421cc7d661-internal-tls-certs\") pod \"placement-bdbcc6468-k4knd\" (UID: \"db509226-a015-4c26-b8a8-80421cc7d661\") " pod="openstack/placement-bdbcc6468-k4knd" Nov 29 07:26:22 crc kubenswrapper[4731]: I1129 07:26:22.040452 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db509226-a015-4c26-b8a8-80421cc7d661-config-data\") pod 
\"placement-bdbcc6468-k4knd\" (UID: \"db509226-a015-4c26-b8a8-80421cc7d661\") " pod="openstack/placement-bdbcc6468-k4knd" Nov 29 07:26:22 crc kubenswrapper[4731]: I1129 07:26:22.052146 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-c78b8bc9d-8prwv"] Nov 29 07:26:22 crc kubenswrapper[4731]: I1129 07:26:22.087276 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7c67bffd47-kztjp"] Nov 29 07:26:22 crc kubenswrapper[4731]: I1129 07:26:22.090883 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c67bffd47-kztjp" Nov 29 07:26:22 crc kubenswrapper[4731]: I1129 07:26:22.127069 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c67bffd47-kztjp"] Nov 29 07:26:22 crc kubenswrapper[4731]: I1129 07:26:22.133376 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9dcf660-e92e-44b6-b940-97d0cccdc187-config-data\") pod \"barbican-keystone-listener-c78b8bc9d-8prwv\" (UID: \"f9dcf660-e92e-44b6-b940-97d0cccdc187\") " pod="openstack/barbican-keystone-listener-c78b8bc9d-8prwv" Nov 29 07:26:22 crc kubenswrapper[4731]: I1129 07:26:22.133468 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b5738fe3-4560-49bc-b408-13d958fd04e2-dns-swift-storage-0\") pod \"dnsmasq-dns-7c67bffd47-kztjp\" (UID: \"b5738fe3-4560-49bc-b408-13d958fd04e2\") " pod="openstack/dnsmasq-dns-7c67bffd47-kztjp" Nov 29 07:26:22 crc kubenswrapper[4731]: I1129 07:26:22.133503 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b5738fe3-4560-49bc-b408-13d958fd04e2-ovsdbserver-sb\") pod \"dnsmasq-dns-7c67bffd47-kztjp\" (UID: \"b5738fe3-4560-49bc-b408-13d958fd04e2\") " 
pod="openstack/dnsmasq-dns-7c67bffd47-kztjp" Nov 29 07:26:22 crc kubenswrapper[4731]: I1129 07:26:22.133599 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46e3b820-e4ea-46a6-9a98-944bf7718c56-config-data\") pod \"barbican-worker-64c66558f5-qcqwg\" (UID: \"46e3b820-e4ea-46a6-9a98-944bf7718c56\") " pod="openstack/barbican-worker-64c66558f5-qcqwg" Nov 29 07:26:22 crc kubenswrapper[4731]: I1129 07:26:22.133642 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5738fe3-4560-49bc-b408-13d958fd04e2-config\") pod \"dnsmasq-dns-7c67bffd47-kztjp\" (UID: \"b5738fe3-4560-49bc-b408-13d958fd04e2\") " pod="openstack/dnsmasq-dns-7c67bffd47-kztjp" Nov 29 07:26:22 crc kubenswrapper[4731]: I1129 07:26:22.133666 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/46e3b820-e4ea-46a6-9a98-944bf7718c56-config-data-custom\") pod \"barbican-worker-64c66558f5-qcqwg\" (UID: \"46e3b820-e4ea-46a6-9a98-944bf7718c56\") " pod="openstack/barbican-worker-64c66558f5-qcqwg" Nov 29 07:26:22 crc kubenswrapper[4731]: I1129 07:26:22.133687 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46e3b820-e4ea-46a6-9a98-944bf7718c56-combined-ca-bundle\") pod \"barbican-worker-64c66558f5-qcqwg\" (UID: \"46e3b820-e4ea-46a6-9a98-944bf7718c56\") " pod="openstack/barbican-worker-64c66558f5-qcqwg" Nov 29 07:26:22 crc kubenswrapper[4731]: I1129 07:26:22.133743 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f9dcf660-e92e-44b6-b940-97d0cccdc187-config-data-custom\") pod \"barbican-keystone-listener-c78b8bc9d-8prwv\" (UID: \"f9dcf660-e92e-44b6-b940-97d0cccdc187\") " 
pod="openstack/barbican-keystone-listener-c78b8bc9d-8prwv" Nov 29 07:26:22 crc kubenswrapper[4731]: I1129 07:26:22.133767 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzqpn\" (UniqueName: \"kubernetes.io/projected/b5738fe3-4560-49bc-b408-13d958fd04e2-kube-api-access-dzqpn\") pod \"dnsmasq-dns-7c67bffd47-kztjp\" (UID: \"b5738fe3-4560-49bc-b408-13d958fd04e2\") " pod="openstack/dnsmasq-dns-7c67bffd47-kztjp" Nov 29 07:26:22 crc kubenswrapper[4731]: I1129 07:26:22.133821 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkgh4\" (UniqueName: \"kubernetes.io/projected/46e3b820-e4ea-46a6-9a98-944bf7718c56-kube-api-access-pkgh4\") pod \"barbican-worker-64c66558f5-qcqwg\" (UID: \"46e3b820-e4ea-46a6-9a98-944bf7718c56\") " pod="openstack/barbican-worker-64c66558f5-qcqwg" Nov 29 07:26:22 crc kubenswrapper[4731]: I1129 07:26:22.133849 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shxfd\" (UniqueName: \"kubernetes.io/projected/f9dcf660-e92e-44b6-b940-97d0cccdc187-kube-api-access-shxfd\") pod \"barbican-keystone-listener-c78b8bc9d-8prwv\" (UID: \"f9dcf660-e92e-44b6-b940-97d0cccdc187\") " pod="openstack/barbican-keystone-listener-c78b8bc9d-8prwv" Nov 29 07:26:22 crc kubenswrapper[4731]: I1129 07:26:22.133888 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f9dcf660-e92e-44b6-b940-97d0cccdc187-logs\") pod \"barbican-keystone-listener-c78b8bc9d-8prwv\" (UID: \"f9dcf660-e92e-44b6-b940-97d0cccdc187\") " pod="openstack/barbican-keystone-listener-c78b8bc9d-8prwv" Nov 29 07:26:22 crc kubenswrapper[4731]: I1129 07:26:22.133920 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b5738fe3-4560-49bc-b408-13d958fd04e2-dns-svc\") pod 
\"dnsmasq-dns-7c67bffd47-kztjp\" (UID: \"b5738fe3-4560-49bc-b408-13d958fd04e2\") " pod="openstack/dnsmasq-dns-7c67bffd47-kztjp" Nov 29 07:26:22 crc kubenswrapper[4731]: I1129 07:26:22.133970 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9dcf660-e92e-44b6-b940-97d0cccdc187-combined-ca-bundle\") pod \"barbican-keystone-listener-c78b8bc9d-8prwv\" (UID: \"f9dcf660-e92e-44b6-b940-97d0cccdc187\") " pod="openstack/barbican-keystone-listener-c78b8bc9d-8prwv" Nov 29 07:26:22 crc kubenswrapper[4731]: I1129 07:26:22.134004 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/46e3b820-e4ea-46a6-9a98-944bf7718c56-logs\") pod \"barbican-worker-64c66558f5-qcqwg\" (UID: \"46e3b820-e4ea-46a6-9a98-944bf7718c56\") " pod="openstack/barbican-worker-64c66558f5-qcqwg" Nov 29 07:26:22 crc kubenswrapper[4731]: I1129 07:26:22.134061 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b5738fe3-4560-49bc-b408-13d958fd04e2-ovsdbserver-nb\") pod \"dnsmasq-dns-7c67bffd47-kztjp\" (UID: \"b5738fe3-4560-49bc-b408-13d958fd04e2\") " pod="openstack/dnsmasq-dns-7c67bffd47-kztjp" Nov 29 07:26:22 crc kubenswrapper[4731]: I1129 07:26:22.138687 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f9dcf660-e92e-44b6-b940-97d0cccdc187-logs\") pod \"barbican-keystone-listener-c78b8bc9d-8prwv\" (UID: \"f9dcf660-e92e-44b6-b940-97d0cccdc187\") " pod="openstack/barbican-keystone-listener-c78b8bc9d-8prwv" Nov 29 07:26:22 crc kubenswrapper[4731]: I1129 07:26:22.139611 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/46e3b820-e4ea-46a6-9a98-944bf7718c56-logs\") pod \"barbican-worker-64c66558f5-qcqwg\" (UID: 
\"46e3b820-e4ea-46a6-9a98-944bf7718c56\") " pod="openstack/barbican-worker-64c66558f5-qcqwg" Nov 29 07:26:22 crc kubenswrapper[4731]: I1129 07:26:22.149647 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/46e3b820-e4ea-46a6-9a98-944bf7718c56-config-data-custom\") pod \"barbican-worker-64c66558f5-qcqwg\" (UID: \"46e3b820-e4ea-46a6-9a98-944bf7718c56\") " pod="openstack/barbican-worker-64c66558f5-qcqwg" Nov 29 07:26:22 crc kubenswrapper[4731]: I1129 07:26:22.150388 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f9dcf660-e92e-44b6-b940-97d0cccdc187-config-data-custom\") pod \"barbican-keystone-listener-c78b8bc9d-8prwv\" (UID: \"f9dcf660-e92e-44b6-b940-97d0cccdc187\") " pod="openstack/barbican-keystone-listener-c78b8bc9d-8prwv" Nov 29 07:26:22 crc kubenswrapper[4731]: I1129 07:26:22.152473 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9dcf660-e92e-44b6-b940-97d0cccdc187-config-data\") pod \"barbican-keystone-listener-c78b8bc9d-8prwv\" (UID: \"f9dcf660-e92e-44b6-b940-97d0cccdc187\") " pod="openstack/barbican-keystone-listener-c78b8bc9d-8prwv" Nov 29 07:26:22 crc kubenswrapper[4731]: I1129 07:26:22.157420 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46e3b820-e4ea-46a6-9a98-944bf7718c56-config-data\") pod \"barbican-worker-64c66558f5-qcqwg\" (UID: \"46e3b820-e4ea-46a6-9a98-944bf7718c56\") " pod="openstack/barbican-worker-64c66558f5-qcqwg" Nov 29 07:26:22 crc kubenswrapper[4731]: I1129 07:26:22.163173 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-78dd89995d-p2zx6"] Nov 29 07:26:22 crc kubenswrapper[4731]: I1129 07:26:22.165135 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9dcf660-e92e-44b6-b940-97d0cccdc187-combined-ca-bundle\") pod \"barbican-keystone-listener-c78b8bc9d-8prwv\" (UID: \"f9dcf660-e92e-44b6-b940-97d0cccdc187\") " pod="openstack/barbican-keystone-listener-c78b8bc9d-8prwv" Nov 29 07:26:22 crc kubenswrapper[4731]: I1129 07:26:22.168728 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shxfd\" (UniqueName: \"kubernetes.io/projected/f9dcf660-e92e-44b6-b940-97d0cccdc187-kube-api-access-shxfd\") pod \"barbican-keystone-listener-c78b8bc9d-8prwv\" (UID: \"f9dcf660-e92e-44b6-b940-97d0cccdc187\") " pod="openstack/barbican-keystone-listener-c78b8bc9d-8prwv" Nov 29 07:26:22 crc kubenswrapper[4731]: I1129 07:26:22.173054 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-78dd89995d-p2zx6" Nov 29 07:26:22 crc kubenswrapper[4731]: I1129 07:26:22.177042 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Nov 29 07:26:22 crc kubenswrapper[4731]: I1129 07:26:22.182063 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkgh4\" (UniqueName: \"kubernetes.io/projected/46e3b820-e4ea-46a6-9a98-944bf7718c56-kube-api-access-pkgh4\") pod \"barbican-worker-64c66558f5-qcqwg\" (UID: \"46e3b820-e4ea-46a6-9a98-944bf7718c56\") " pod="openstack/barbican-worker-64c66558f5-qcqwg" Nov 29 07:26:22 crc kubenswrapper[4731]: I1129 07:26:22.187339 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-78dd89995d-p2zx6"] Nov 29 07:26:22 crc kubenswrapper[4731]: I1129 07:26:22.190139 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46e3b820-e4ea-46a6-9a98-944bf7718c56-combined-ca-bundle\") pod \"barbican-worker-64c66558f5-qcqwg\" (UID: \"46e3b820-e4ea-46a6-9a98-944bf7718c56\") " 
pod="openstack/barbican-worker-64c66558f5-qcqwg" Nov 29 07:26:22 crc kubenswrapper[4731]: I1129 07:26:22.238034 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b5738fe3-4560-49bc-b408-13d958fd04e2-dns-svc\") pod \"dnsmasq-dns-7c67bffd47-kztjp\" (UID: \"b5738fe3-4560-49bc-b408-13d958fd04e2\") " pod="openstack/dnsmasq-dns-7c67bffd47-kztjp" Nov 29 07:26:22 crc kubenswrapper[4731]: I1129 07:26:22.238152 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/08021cda-119f-413c-86ef-ef64660e60bb-config-data-custom\") pod \"barbican-api-78dd89995d-p2zx6\" (UID: \"08021cda-119f-413c-86ef-ef64660e60bb\") " pod="openstack/barbican-api-78dd89995d-p2zx6" Nov 29 07:26:22 crc kubenswrapper[4731]: I1129 07:26:22.238202 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b5738fe3-4560-49bc-b408-13d958fd04e2-ovsdbserver-nb\") pod \"dnsmasq-dns-7c67bffd47-kztjp\" (UID: \"b5738fe3-4560-49bc-b408-13d958fd04e2\") " pod="openstack/dnsmasq-dns-7c67bffd47-kztjp" Nov 29 07:26:22 crc kubenswrapper[4731]: I1129 07:26:22.238276 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b5738fe3-4560-49bc-b408-13d958fd04e2-dns-swift-storage-0\") pod \"dnsmasq-dns-7c67bffd47-kztjp\" (UID: \"b5738fe3-4560-49bc-b408-13d958fd04e2\") " pod="openstack/dnsmasq-dns-7c67bffd47-kztjp" Nov 29 07:26:22 crc kubenswrapper[4731]: I1129 07:26:22.238321 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b5738fe3-4560-49bc-b408-13d958fd04e2-ovsdbserver-sb\") pod \"dnsmasq-dns-7c67bffd47-kztjp\" (UID: \"b5738fe3-4560-49bc-b408-13d958fd04e2\") " 
pod="openstack/dnsmasq-dns-7c67bffd47-kztjp" Nov 29 07:26:22 crc kubenswrapper[4731]: I1129 07:26:22.238372 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5738fe3-4560-49bc-b408-13d958fd04e2-config\") pod \"dnsmasq-dns-7c67bffd47-kztjp\" (UID: \"b5738fe3-4560-49bc-b408-13d958fd04e2\") " pod="openstack/dnsmasq-dns-7c67bffd47-kztjp" Nov 29 07:26:22 crc kubenswrapper[4731]: I1129 07:26:22.238417 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08021cda-119f-413c-86ef-ef64660e60bb-logs\") pod \"barbican-api-78dd89995d-p2zx6\" (UID: \"08021cda-119f-413c-86ef-ef64660e60bb\") " pod="openstack/barbican-api-78dd89995d-p2zx6" Nov 29 07:26:22 crc kubenswrapper[4731]: I1129 07:26:22.238458 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08021cda-119f-413c-86ef-ef64660e60bb-config-data\") pod \"barbican-api-78dd89995d-p2zx6\" (UID: \"08021cda-119f-413c-86ef-ef64660e60bb\") " pod="openstack/barbican-api-78dd89995d-p2zx6" Nov 29 07:26:22 crc kubenswrapper[4731]: I1129 07:26:22.238516 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5p2g\" (UniqueName: \"kubernetes.io/projected/08021cda-119f-413c-86ef-ef64660e60bb-kube-api-access-n5p2g\") pod \"barbican-api-78dd89995d-p2zx6\" (UID: \"08021cda-119f-413c-86ef-ef64660e60bb\") " pod="openstack/barbican-api-78dd89995d-p2zx6" Nov 29 07:26:22 crc kubenswrapper[4731]: I1129 07:26:22.238555 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzqpn\" (UniqueName: \"kubernetes.io/projected/b5738fe3-4560-49bc-b408-13d958fd04e2-kube-api-access-dzqpn\") pod \"dnsmasq-dns-7c67bffd47-kztjp\" (UID: \"b5738fe3-4560-49bc-b408-13d958fd04e2\") " 
pod="openstack/dnsmasq-dns-7c67bffd47-kztjp" Nov 29 07:26:22 crc kubenswrapper[4731]: I1129 07:26:22.238729 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08021cda-119f-413c-86ef-ef64660e60bb-combined-ca-bundle\") pod \"barbican-api-78dd89995d-p2zx6\" (UID: \"08021cda-119f-413c-86ef-ef64660e60bb\") " pod="openstack/barbican-api-78dd89995d-p2zx6" Nov 29 07:26:22 crc kubenswrapper[4731]: I1129 07:26:22.242067 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b5738fe3-4560-49bc-b408-13d958fd04e2-dns-svc\") pod \"dnsmasq-dns-7c67bffd47-kztjp\" (UID: \"b5738fe3-4560-49bc-b408-13d958fd04e2\") " pod="openstack/dnsmasq-dns-7c67bffd47-kztjp" Nov 29 07:26:22 crc kubenswrapper[4731]: I1129 07:26:22.242102 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b5738fe3-4560-49bc-b408-13d958fd04e2-ovsdbserver-nb\") pod \"dnsmasq-dns-7c67bffd47-kztjp\" (UID: \"b5738fe3-4560-49bc-b408-13d958fd04e2\") " pod="openstack/dnsmasq-dns-7c67bffd47-kztjp" Nov 29 07:26:22 crc kubenswrapper[4731]: I1129 07:26:22.243304 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5738fe3-4560-49bc-b408-13d958fd04e2-config\") pod \"dnsmasq-dns-7c67bffd47-kztjp\" (UID: \"b5738fe3-4560-49bc-b408-13d958fd04e2\") " pod="openstack/dnsmasq-dns-7c67bffd47-kztjp" Nov 29 07:26:22 crc kubenswrapper[4731]: I1129 07:26:22.243462 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b5738fe3-4560-49bc-b408-13d958fd04e2-ovsdbserver-sb\") pod \"dnsmasq-dns-7c67bffd47-kztjp\" (UID: \"b5738fe3-4560-49bc-b408-13d958fd04e2\") " pod="openstack/dnsmasq-dns-7c67bffd47-kztjp" Nov 29 07:26:22 crc kubenswrapper[4731]: 
I1129 07:26:22.243882 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b5738fe3-4560-49bc-b408-13d958fd04e2-dns-swift-storage-0\") pod \"dnsmasq-dns-7c67bffd47-kztjp\" (UID: \"b5738fe3-4560-49bc-b408-13d958fd04e2\") " pod="openstack/dnsmasq-dns-7c67bffd47-kztjp" Nov 29 07:26:22 crc kubenswrapper[4731]: I1129 07:26:22.258344 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzqpn\" (UniqueName: \"kubernetes.io/projected/b5738fe3-4560-49bc-b408-13d958fd04e2-kube-api-access-dzqpn\") pod \"dnsmasq-dns-7c67bffd47-kztjp\" (UID: \"b5738fe3-4560-49bc-b408-13d958fd04e2\") " pod="openstack/dnsmasq-dns-7c67bffd47-kztjp" Nov 29 07:26:22 crc kubenswrapper[4731]: I1129 07:26:22.268919 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-bdbcc6468-k4knd" Nov 29 07:26:22 crc kubenswrapper[4731]: I1129 07:26:22.340717 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/08021cda-119f-413c-86ef-ef64660e60bb-config-data-custom\") pod \"barbican-api-78dd89995d-p2zx6\" (UID: \"08021cda-119f-413c-86ef-ef64660e60bb\") " pod="openstack/barbican-api-78dd89995d-p2zx6" Nov 29 07:26:22 crc kubenswrapper[4731]: I1129 07:26:22.340828 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08021cda-119f-413c-86ef-ef64660e60bb-logs\") pod \"barbican-api-78dd89995d-p2zx6\" (UID: \"08021cda-119f-413c-86ef-ef64660e60bb\") " pod="openstack/barbican-api-78dd89995d-p2zx6" Nov 29 07:26:22 crc kubenswrapper[4731]: I1129 07:26:22.340857 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08021cda-119f-413c-86ef-ef64660e60bb-config-data\") pod \"barbican-api-78dd89995d-p2zx6\" (UID: 
\"08021cda-119f-413c-86ef-ef64660e60bb\") " pod="openstack/barbican-api-78dd89995d-p2zx6" Nov 29 07:26:22 crc kubenswrapper[4731]: I1129 07:26:22.340886 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5p2g\" (UniqueName: \"kubernetes.io/projected/08021cda-119f-413c-86ef-ef64660e60bb-kube-api-access-n5p2g\") pod \"barbican-api-78dd89995d-p2zx6\" (UID: \"08021cda-119f-413c-86ef-ef64660e60bb\") " pod="openstack/barbican-api-78dd89995d-p2zx6" Nov 29 07:26:22 crc kubenswrapper[4731]: I1129 07:26:22.340910 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08021cda-119f-413c-86ef-ef64660e60bb-combined-ca-bundle\") pod \"barbican-api-78dd89995d-p2zx6\" (UID: \"08021cda-119f-413c-86ef-ef64660e60bb\") " pod="openstack/barbican-api-78dd89995d-p2zx6" Nov 29 07:26:22 crc kubenswrapper[4731]: I1129 07:26:22.342873 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08021cda-119f-413c-86ef-ef64660e60bb-logs\") pod \"barbican-api-78dd89995d-p2zx6\" (UID: \"08021cda-119f-413c-86ef-ef64660e60bb\") " pod="openstack/barbican-api-78dd89995d-p2zx6" Nov 29 07:26:22 crc kubenswrapper[4731]: I1129 07:26:22.346768 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08021cda-119f-413c-86ef-ef64660e60bb-combined-ca-bundle\") pod \"barbican-api-78dd89995d-p2zx6\" (UID: \"08021cda-119f-413c-86ef-ef64660e60bb\") " pod="openstack/barbican-api-78dd89995d-p2zx6" Nov 29 07:26:22 crc kubenswrapper[4731]: I1129 07:26:22.351467 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/08021cda-119f-413c-86ef-ef64660e60bb-config-data-custom\") pod \"barbican-api-78dd89995d-p2zx6\" (UID: \"08021cda-119f-413c-86ef-ef64660e60bb\") " 
pod="openstack/barbican-api-78dd89995d-p2zx6" Nov 29 07:26:22 crc kubenswrapper[4731]: I1129 07:26:22.354113 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08021cda-119f-413c-86ef-ef64660e60bb-config-data\") pod \"barbican-api-78dd89995d-p2zx6\" (UID: \"08021cda-119f-413c-86ef-ef64660e60bb\") " pod="openstack/barbican-api-78dd89995d-p2zx6" Nov 29 07:26:22 crc kubenswrapper[4731]: I1129 07:26:22.368628 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5p2g\" (UniqueName: \"kubernetes.io/projected/08021cda-119f-413c-86ef-ef64660e60bb-kube-api-access-n5p2g\") pod \"barbican-api-78dd89995d-p2zx6\" (UID: \"08021cda-119f-413c-86ef-ef64660e60bb\") " pod="openstack/barbican-api-78dd89995d-p2zx6" Nov 29 07:26:22 crc kubenswrapper[4731]: I1129 07:26:22.429643 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-c78b8bc9d-8prwv" Nov 29 07:26:22 crc kubenswrapper[4731]: I1129 07:26:22.482236 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-64c66558f5-qcqwg" Nov 29 07:26:22 crc kubenswrapper[4731]: I1129 07:26:22.519270 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c67bffd47-kztjp" Nov 29 07:26:22 crc kubenswrapper[4731]: I1129 07:26:22.527220 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-78dd89995d-p2zx6" Nov 29 07:26:22 crc kubenswrapper[4731]: I1129 07:26:22.594255 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-74694f6999-x4dvv" Nov 29 07:26:22 crc kubenswrapper[4731]: I1129 07:26:22.966506 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-bdbcc6468-k4knd"] Nov 29 07:26:23 crc kubenswrapper[4731]: I1129 07:26:23.158821 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-c78b8bc9d-8prwv"] Nov 29 07:26:23 crc kubenswrapper[4731]: I1129 07:26:23.294552 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-64c66558f5-qcqwg"] Nov 29 07:26:23 crc kubenswrapper[4731]: W1129 07:26:23.315022 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod46e3b820_e4ea_46a6_9a98_944bf7718c56.slice/crio-7b6f41905ccfb9a080f89285ccafc13e9df1841c797e7106a44502c8485f046e WatchSource:0}: Error finding container 7b6f41905ccfb9a080f89285ccafc13e9df1841c797e7106a44502c8485f046e: Status 404 returned error can't find the container with id 7b6f41905ccfb9a080f89285ccafc13e9df1841c797e7106a44502c8485f046e Nov 29 07:26:23 crc kubenswrapper[4731]: I1129 07:26:23.321307 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-78dd89995d-p2zx6"] Nov 29 07:26:23 crc kubenswrapper[4731]: W1129 07:26:23.337681 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08021cda_119f_413c_86ef_ef64660e60bb.slice/crio-c51d16b7e45080c6977a5dc9227d7dc167b0ffa97e87ddc27b4689244d537071 WatchSource:0}: Error finding container c51d16b7e45080c6977a5dc9227d7dc167b0ffa97e87ddc27b4689244d537071: Status 404 returned error can't find the container with id c51d16b7e45080c6977a5dc9227d7dc167b0ffa97e87ddc27b4689244d537071 
Nov 29 07:26:23 crc kubenswrapper[4731]: I1129 07:26:23.401872 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c67bffd47-kztjp"] Nov 29 07:26:23 crc kubenswrapper[4731]: I1129 07:26:23.610780 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c67bffd47-kztjp" event={"ID":"b5738fe3-4560-49bc-b408-13d958fd04e2","Type":"ContainerStarted","Data":"b6a1d1a6a5d82f94cf88065b2d8226374742a0b3f3642c70beb204581893ab37"} Nov 29 07:26:23 crc kubenswrapper[4731]: I1129 07:26:23.616617 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-bdbcc6468-k4knd" event={"ID":"db509226-a015-4c26-b8a8-80421cc7d661","Type":"ContainerStarted","Data":"7f73ad9a2f6d1446ac3cfdbc8333d4f066466dc4d64140f9df8fcdbbf318fea7"} Nov 29 07:26:23 crc kubenswrapper[4731]: I1129 07:26:23.621434 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-c78b8bc9d-8prwv" event={"ID":"f9dcf660-e92e-44b6-b940-97d0cccdc187","Type":"ContainerStarted","Data":"73db170311f2738ffcd99c035104812e82c14817fb2ee22bb95602144a350fb1"} Nov 29 07:26:23 crc kubenswrapper[4731]: I1129 07:26:23.627779 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-78dd89995d-p2zx6" event={"ID":"08021cda-119f-413c-86ef-ef64660e60bb","Type":"ContainerStarted","Data":"c51d16b7e45080c6977a5dc9227d7dc167b0ffa97e87ddc27b4689244d537071"} Nov 29 07:26:23 crc kubenswrapper[4731]: I1129 07:26:23.631255 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-64c66558f5-qcqwg" event={"ID":"46e3b820-e4ea-46a6-9a98-944bf7718c56","Type":"ContainerStarted","Data":"7b6f41905ccfb9a080f89285ccafc13e9df1841c797e7106a44502c8485f046e"} Nov 29 07:26:24 crc kubenswrapper[4731]: I1129 07:26:24.665024 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-78dd89995d-p2zx6" 
event={"ID":"08021cda-119f-413c-86ef-ef64660e60bb","Type":"ContainerStarted","Data":"b27838b74420fa94af907398ba11f5743c53de288eefaa06cc51df051e6e8e36"} Nov 29 07:26:24 crc kubenswrapper[4731]: I1129 07:26:24.665737 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-78dd89995d-p2zx6" event={"ID":"08021cda-119f-413c-86ef-ef64660e60bb","Type":"ContainerStarted","Data":"95de69abac4569052b56b7c30306f1a8741f5d4d25d6219e6c9179f152d923dc"} Nov 29 07:26:24 crc kubenswrapper[4731]: I1129 07:26:24.667497 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-78dd89995d-p2zx6" Nov 29 07:26:24 crc kubenswrapper[4731]: I1129 07:26:24.667533 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-78dd89995d-p2zx6" Nov 29 07:26:24 crc kubenswrapper[4731]: I1129 07:26:24.682551 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-zcx9z" event={"ID":"9af027cc-cbd4-4f3a-ad25-2ef5b126d590","Type":"ContainerStarted","Data":"bb2475f193ed50de45ecd1da5f6d6fad85f593e9dd2586d0dec67678e4586bdc"} Nov 29 07:26:24 crc kubenswrapper[4731]: I1129 07:26:24.690237 4731 generic.go:334] "Generic (PLEG): container finished" podID="b5738fe3-4560-49bc-b408-13d958fd04e2" containerID="73f0002524686d5db3d7776b80f0f328c39e76d8eb8822d2d85a1474049da6e9" exitCode=0 Nov 29 07:26:24 crc kubenswrapper[4731]: I1129 07:26:24.690342 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c67bffd47-kztjp" event={"ID":"b5738fe3-4560-49bc-b408-13d958fd04e2","Type":"ContainerDied","Data":"73f0002524686d5db3d7776b80f0f328c39e76d8eb8822d2d85a1474049da6e9"} Nov 29 07:26:24 crc kubenswrapper[4731]: I1129 07:26:24.700993 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-bdbcc6468-k4knd" 
event={"ID":"db509226-a015-4c26-b8a8-80421cc7d661","Type":"ContainerStarted","Data":"324d080e5c1dd2512155547cc83522720f3054121b7fb9c80d79d959030ac8b9"} Nov 29 07:26:24 crc kubenswrapper[4731]: I1129 07:26:24.701058 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-bdbcc6468-k4knd" event={"ID":"db509226-a015-4c26-b8a8-80421cc7d661","Type":"ContainerStarted","Data":"097febc5847616e9db8ce8f57fac2677222a9a323ba761514b2874f576775bb9"} Nov 29 07:26:24 crc kubenswrapper[4731]: I1129 07:26:24.701288 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-bdbcc6468-k4knd" Nov 29 07:26:24 crc kubenswrapper[4731]: I1129 07:26:24.701348 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-bdbcc6468-k4knd" Nov 29 07:26:24 crc kubenswrapper[4731]: I1129 07:26:24.731482 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-78dd89995d-p2zx6" podStartSLOduration=2.731448888 podStartE2EDuration="2.731448888s" podCreationTimestamp="2025-11-29 07:26:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:26:24.713238666 +0000 UTC m=+1223.603599759" watchObservedRunningTime="2025-11-29 07:26:24.731448888 +0000 UTC m=+1223.621809991" Nov 29 07:26:24 crc kubenswrapper[4731]: I1129 07:26:24.782734 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-zcx9z" podStartSLOduration=5.105078081 podStartE2EDuration="58.782709435s" podCreationTimestamp="2025-11-29 07:25:26 +0000 UTC" firstStartedPulling="2025-11-29 07:25:28.258244385 +0000 UTC m=+1167.148605488" lastFinishedPulling="2025-11-29 07:26:21.935875739 +0000 UTC m=+1220.826236842" observedRunningTime="2025-11-29 07:26:24.745807717 +0000 UTC m=+1223.636168820" watchObservedRunningTime="2025-11-29 07:26:24.782709435 +0000 UTC 
m=+1223.673070538" Nov 29 07:26:24 crc kubenswrapper[4731]: I1129 07:26:24.927322 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-bdbcc6468-k4knd" podStartSLOduration=3.927291028 podStartE2EDuration="3.927291028s" podCreationTimestamp="2025-11-29 07:26:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:26:24.890850383 +0000 UTC m=+1223.781211486" watchObservedRunningTime="2025-11-29 07:26:24.927291028 +0000 UTC m=+1223.817652121" Nov 29 07:26:25 crc kubenswrapper[4731]: I1129 07:26:25.063147 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6c9b78b974-grr5d"] Nov 29 07:26:25 crc kubenswrapper[4731]: I1129 07:26:25.065280 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6c9b78b974-grr5d" Nov 29 07:26:25 crc kubenswrapper[4731]: I1129 07:26:25.078139 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Nov 29 07:26:25 crc kubenswrapper[4731]: I1129 07:26:25.078519 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f362df0-515f-4aa7-980b-8c418dadcc66-public-tls-certs\") pod \"barbican-api-6c9b78b974-grr5d\" (UID: \"7f362df0-515f-4aa7-980b-8c418dadcc66\") " pod="openstack/barbican-api-6c9b78b974-grr5d" Nov 29 07:26:25 crc kubenswrapper[4731]: I1129 07:26:25.078604 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f362df0-515f-4aa7-980b-8c418dadcc66-internal-tls-certs\") pod \"barbican-api-6c9b78b974-grr5d\" (UID: \"7f362df0-515f-4aa7-980b-8c418dadcc66\") " pod="openstack/barbican-api-6c9b78b974-grr5d" Nov 29 07:26:25 crc kubenswrapper[4731]: I1129 07:26:25.078710 4731 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f362df0-515f-4aa7-980b-8c418dadcc66-logs\") pod \"barbican-api-6c9b78b974-grr5d\" (UID: \"7f362df0-515f-4aa7-980b-8c418dadcc66\") " pod="openstack/barbican-api-6c9b78b974-grr5d" Nov 29 07:26:25 crc kubenswrapper[4731]: I1129 07:26:25.078741 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f362df0-515f-4aa7-980b-8c418dadcc66-combined-ca-bundle\") pod \"barbican-api-6c9b78b974-grr5d\" (UID: \"7f362df0-515f-4aa7-980b-8c418dadcc66\") " pod="openstack/barbican-api-6c9b78b974-grr5d" Nov 29 07:26:25 crc kubenswrapper[4731]: I1129 07:26:25.078760 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7f362df0-515f-4aa7-980b-8c418dadcc66-config-data-custom\") pod \"barbican-api-6c9b78b974-grr5d\" (UID: \"7f362df0-515f-4aa7-980b-8c418dadcc66\") " pod="openstack/barbican-api-6c9b78b974-grr5d" Nov 29 07:26:25 crc kubenswrapper[4731]: I1129 07:26:25.078795 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rn47r\" (UniqueName: \"kubernetes.io/projected/7f362df0-515f-4aa7-980b-8c418dadcc66-kube-api-access-rn47r\") pod \"barbican-api-6c9b78b974-grr5d\" (UID: \"7f362df0-515f-4aa7-980b-8c418dadcc66\") " pod="openstack/barbican-api-6c9b78b974-grr5d" Nov 29 07:26:25 crc kubenswrapper[4731]: I1129 07:26:25.078867 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f362df0-515f-4aa7-980b-8c418dadcc66-config-data\") pod \"barbican-api-6c9b78b974-grr5d\" (UID: \"7f362df0-515f-4aa7-980b-8c418dadcc66\") " pod="openstack/barbican-api-6c9b78b974-grr5d" Nov 29 07:26:25 crc 
kubenswrapper[4731]: I1129 07:26:25.084347 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Nov 29 07:26:25 crc kubenswrapper[4731]: I1129 07:26:25.095193 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6c9b78b974-grr5d"] Nov 29 07:26:25 crc kubenswrapper[4731]: I1129 07:26:25.182507 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f362df0-515f-4aa7-980b-8c418dadcc66-internal-tls-certs\") pod \"barbican-api-6c9b78b974-grr5d\" (UID: \"7f362df0-515f-4aa7-980b-8c418dadcc66\") " pod="openstack/barbican-api-6c9b78b974-grr5d" Nov 29 07:26:25 crc kubenswrapper[4731]: I1129 07:26:25.182710 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f362df0-515f-4aa7-980b-8c418dadcc66-logs\") pod \"barbican-api-6c9b78b974-grr5d\" (UID: \"7f362df0-515f-4aa7-980b-8c418dadcc66\") " pod="openstack/barbican-api-6c9b78b974-grr5d" Nov 29 07:26:25 crc kubenswrapper[4731]: I1129 07:26:25.182752 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f362df0-515f-4aa7-980b-8c418dadcc66-combined-ca-bundle\") pod \"barbican-api-6c9b78b974-grr5d\" (UID: \"7f362df0-515f-4aa7-980b-8c418dadcc66\") " pod="openstack/barbican-api-6c9b78b974-grr5d" Nov 29 07:26:25 crc kubenswrapper[4731]: I1129 07:26:25.182784 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7f362df0-515f-4aa7-980b-8c418dadcc66-config-data-custom\") pod \"barbican-api-6c9b78b974-grr5d\" (UID: \"7f362df0-515f-4aa7-980b-8c418dadcc66\") " pod="openstack/barbican-api-6c9b78b974-grr5d" Nov 29 07:26:25 crc kubenswrapper[4731]: I1129 07:26:25.182827 4731 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-rn47r\" (UniqueName: \"kubernetes.io/projected/7f362df0-515f-4aa7-980b-8c418dadcc66-kube-api-access-rn47r\") pod \"barbican-api-6c9b78b974-grr5d\" (UID: \"7f362df0-515f-4aa7-980b-8c418dadcc66\") " pod="openstack/barbican-api-6c9b78b974-grr5d" Nov 29 07:26:25 crc kubenswrapper[4731]: I1129 07:26:25.182941 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f362df0-515f-4aa7-980b-8c418dadcc66-config-data\") pod \"barbican-api-6c9b78b974-grr5d\" (UID: \"7f362df0-515f-4aa7-980b-8c418dadcc66\") " pod="openstack/barbican-api-6c9b78b974-grr5d" Nov 29 07:26:25 crc kubenswrapper[4731]: I1129 07:26:25.182998 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f362df0-515f-4aa7-980b-8c418dadcc66-public-tls-certs\") pod \"barbican-api-6c9b78b974-grr5d\" (UID: \"7f362df0-515f-4aa7-980b-8c418dadcc66\") " pod="openstack/barbican-api-6c9b78b974-grr5d" Nov 29 07:26:25 crc kubenswrapper[4731]: I1129 07:26:25.185919 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f362df0-515f-4aa7-980b-8c418dadcc66-logs\") pod \"barbican-api-6c9b78b974-grr5d\" (UID: \"7f362df0-515f-4aa7-980b-8c418dadcc66\") " pod="openstack/barbican-api-6c9b78b974-grr5d" Nov 29 07:26:25 crc kubenswrapper[4731]: I1129 07:26:25.191627 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f362df0-515f-4aa7-980b-8c418dadcc66-config-data\") pod \"barbican-api-6c9b78b974-grr5d\" (UID: \"7f362df0-515f-4aa7-980b-8c418dadcc66\") " pod="openstack/barbican-api-6c9b78b974-grr5d" Nov 29 07:26:25 crc kubenswrapper[4731]: I1129 07:26:25.192415 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/7f362df0-515f-4aa7-980b-8c418dadcc66-combined-ca-bundle\") pod \"barbican-api-6c9b78b974-grr5d\" (UID: \"7f362df0-515f-4aa7-980b-8c418dadcc66\") " pod="openstack/barbican-api-6c9b78b974-grr5d" Nov 29 07:26:25 crc kubenswrapper[4731]: I1129 07:26:25.198053 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f362df0-515f-4aa7-980b-8c418dadcc66-public-tls-certs\") pod \"barbican-api-6c9b78b974-grr5d\" (UID: \"7f362df0-515f-4aa7-980b-8c418dadcc66\") " pod="openstack/barbican-api-6c9b78b974-grr5d" Nov 29 07:26:25 crc kubenswrapper[4731]: I1129 07:26:25.198460 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f362df0-515f-4aa7-980b-8c418dadcc66-internal-tls-certs\") pod \"barbican-api-6c9b78b974-grr5d\" (UID: \"7f362df0-515f-4aa7-980b-8c418dadcc66\") " pod="openstack/barbican-api-6c9b78b974-grr5d" Nov 29 07:26:25 crc kubenswrapper[4731]: I1129 07:26:25.198970 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7f362df0-515f-4aa7-980b-8c418dadcc66-config-data-custom\") pod \"barbican-api-6c9b78b974-grr5d\" (UID: \"7f362df0-515f-4aa7-980b-8c418dadcc66\") " pod="openstack/barbican-api-6c9b78b974-grr5d" Nov 29 07:26:25 crc kubenswrapper[4731]: I1129 07:26:25.208041 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rn47r\" (UniqueName: \"kubernetes.io/projected/7f362df0-515f-4aa7-980b-8c418dadcc66-kube-api-access-rn47r\") pod \"barbican-api-6c9b78b974-grr5d\" (UID: \"7f362df0-515f-4aa7-980b-8c418dadcc66\") " pod="openstack/barbican-api-6c9b78b974-grr5d" Nov 29 07:26:25 crc kubenswrapper[4731]: I1129 07:26:25.216808 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6c9b78b974-grr5d" Nov 29 07:26:25 crc kubenswrapper[4731]: I1129 07:26:25.708617 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6c9b78b974-grr5d"] Nov 29 07:26:25 crc kubenswrapper[4731]: W1129 07:26:25.722787 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7f362df0_515f_4aa7_980b_8c418dadcc66.slice/crio-46801069b6e5007944e12e38201d2d513c9e014ab23a97a7074e98cd4a246dea WatchSource:0}: Error finding container 46801069b6e5007944e12e38201d2d513c9e014ab23a97a7074e98cd4a246dea: Status 404 returned error can't find the container with id 46801069b6e5007944e12e38201d2d513c9e014ab23a97a7074e98cd4a246dea Nov 29 07:26:25 crc kubenswrapper[4731]: I1129 07:26:25.744506 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c67bffd47-kztjp" event={"ID":"b5738fe3-4560-49bc-b408-13d958fd04e2","Type":"ContainerStarted","Data":"c407c47de10042424bbc1ec15c59e71d48d90bd8966c5797b5adeca92e58000f"} Nov 29 07:26:25 crc kubenswrapper[4731]: I1129 07:26:25.750654 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7c67bffd47-kztjp" Nov 29 07:26:25 crc kubenswrapper[4731]: I1129 07:26:25.788819 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7c67bffd47-kztjp" podStartSLOduration=4.788784859 podStartE2EDuration="4.788784859s" podCreationTimestamp="2025-11-29 07:26:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:26:25.776450559 +0000 UTC m=+1224.666811672" watchObservedRunningTime="2025-11-29 07:26:25.788784859 +0000 UTC m=+1224.679145962" Nov 29 07:26:26 crc kubenswrapper[4731]: I1129 07:26:26.252047 4731 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-84cd78f644-7wncn" 
podUID="bf7f6cfb-9c72-4be9-9177-cd14712e1c1e" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused" Nov 29 07:26:26 crc kubenswrapper[4731]: I1129 07:26:26.362851 4731 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5fcdbcfb48-gmbcm" podUID="3afcf821-ab23-4e13-96e7-2b178314bece" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.147:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.147:8443: connect: connection refused" Nov 29 07:26:26 crc kubenswrapper[4731]: I1129 07:26:26.754297 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6c9b78b974-grr5d" event={"ID":"7f362df0-515f-4aa7-980b-8c418dadcc66","Type":"ContainerStarted","Data":"46801069b6e5007944e12e38201d2d513c9e014ab23a97a7074e98cd4a246dea"} Nov 29 07:26:27 crc kubenswrapper[4731]: I1129 07:26:27.770558 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6c9b78b974-grr5d" event={"ID":"7f362df0-515f-4aa7-980b-8c418dadcc66","Type":"ContainerStarted","Data":"776c6eabe1f0f2d97b7e73d213dfbfc367f20c2918f76ffa5bd270879a91b80f"} Nov 29 07:26:29 crc kubenswrapper[4731]: I1129 07:26:29.798668 4731 generic.go:334] "Generic (PLEG): container finished" podID="2d843330-ffae-4bc9-a8b3-c2df891a1aae" containerID="35baaa7729762d17b4d7d6f2de4d3968e88ea07e8ec8701ab4a49abef88ae6f3" exitCode=0 Nov 29 07:26:29 crc kubenswrapper[4731]: I1129 07:26:29.798736 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-qjjnr" event={"ID":"2d843330-ffae-4bc9-a8b3-c2df891a1aae","Type":"ContainerDied","Data":"35baaa7729762d17b4d7d6f2de4d3968e88ea07e8ec8701ab4a49abef88ae6f3"} Nov 29 07:26:30 crc kubenswrapper[4731]: I1129 07:26:30.812788 4731 generic.go:334] "Generic (PLEG): container finished" podID="9af027cc-cbd4-4f3a-ad25-2ef5b126d590" 
containerID="bb2475f193ed50de45ecd1da5f6d6fad85f593e9dd2586d0dec67678e4586bdc" exitCode=0 Nov 29 07:26:30 crc kubenswrapper[4731]: I1129 07:26:30.812836 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-zcx9z" event={"ID":"9af027cc-cbd4-4f3a-ad25-2ef5b126d590","Type":"ContainerDied","Data":"bb2475f193ed50de45ecd1da5f6d6fad85f593e9dd2586d0dec67678e4586bdc"} Nov 29 07:26:31 crc kubenswrapper[4731]: I1129 07:26:31.546328 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-qjjnr" Nov 29 07:26:31 crc kubenswrapper[4731]: I1129 07:26:31.567394 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b8xbf\" (UniqueName: \"kubernetes.io/projected/2d843330-ffae-4bc9-a8b3-c2df891a1aae-kube-api-access-b8xbf\") pod \"2d843330-ffae-4bc9-a8b3-c2df891a1aae\" (UID: \"2d843330-ffae-4bc9-a8b3-c2df891a1aae\") " Nov 29 07:26:31 crc kubenswrapper[4731]: I1129 07:26:31.567558 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d843330-ffae-4bc9-a8b3-c2df891a1aae-combined-ca-bundle\") pod \"2d843330-ffae-4bc9-a8b3-c2df891a1aae\" (UID: \"2d843330-ffae-4bc9-a8b3-c2df891a1aae\") " Nov 29 07:26:31 crc kubenswrapper[4731]: I1129 07:26:31.567622 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2d843330-ffae-4bc9-a8b3-c2df891a1aae-config\") pod \"2d843330-ffae-4bc9-a8b3-c2df891a1aae\" (UID: \"2d843330-ffae-4bc9-a8b3-c2df891a1aae\") " Nov 29 07:26:31 crc kubenswrapper[4731]: I1129 07:26:31.605515 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d843330-ffae-4bc9-a8b3-c2df891a1aae-kube-api-access-b8xbf" (OuterVolumeSpecName: "kube-api-access-b8xbf") pod "2d843330-ffae-4bc9-a8b3-c2df891a1aae" (UID: 
"2d843330-ffae-4bc9-a8b3-c2df891a1aae"). InnerVolumeSpecName "kube-api-access-b8xbf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:26:31 crc kubenswrapper[4731]: I1129 07:26:31.610423 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d843330-ffae-4bc9-a8b3-c2df891a1aae-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2d843330-ffae-4bc9-a8b3-c2df891a1aae" (UID: "2d843330-ffae-4bc9-a8b3-c2df891a1aae"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:26:31 crc kubenswrapper[4731]: I1129 07:26:31.612827 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d843330-ffae-4bc9-a8b3-c2df891a1aae-config" (OuterVolumeSpecName: "config") pod "2d843330-ffae-4bc9-a8b3-c2df891a1aae" (UID: "2d843330-ffae-4bc9-a8b3-c2df891a1aae"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:26:31 crc kubenswrapper[4731]: I1129 07:26:31.669509 4731 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/2d843330-ffae-4bc9-a8b3-c2df891a1aae-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:31 crc kubenswrapper[4731]: I1129 07:26:31.669549 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b8xbf\" (UniqueName: \"kubernetes.io/projected/2d843330-ffae-4bc9-a8b3-c2df891a1aae-kube-api-access-b8xbf\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:31 crc kubenswrapper[4731]: I1129 07:26:31.669584 4731 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d843330-ffae-4bc9-a8b3-c2df891a1aae-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:31 crc kubenswrapper[4731]: I1129 07:26:31.863873 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-qjjnr" Nov 29 07:26:31 crc kubenswrapper[4731]: I1129 07:26:31.864913 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-qjjnr" event={"ID":"2d843330-ffae-4bc9-a8b3-c2df891a1aae","Type":"ContainerDied","Data":"5dd0997503a6a7b4b9561a35602e3d122669eb1e4c78578724bc4a3f8110fe5d"} Nov 29 07:26:31 crc kubenswrapper[4731]: I1129 07:26:31.864972 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5dd0997503a6a7b4b9561a35602e3d122669eb1e4c78578724bc4a3f8110fe5d" Nov 29 07:26:32 crc kubenswrapper[4731]: I1129 07:26:32.096653 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c67bffd47-kztjp"] Nov 29 07:26:32 crc kubenswrapper[4731]: I1129 07:26:32.097218 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7c67bffd47-kztjp" podUID="b5738fe3-4560-49bc-b408-13d958fd04e2" containerName="dnsmasq-dns" containerID="cri-o://c407c47de10042424bbc1ec15c59e71d48d90bd8966c5797b5adeca92e58000f" gracePeriod=10 Nov 29 07:26:32 crc kubenswrapper[4731]: I1129 07:26:32.100241 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7c67bffd47-kztjp" Nov 29 07:26:32 crc kubenswrapper[4731]: I1129 07:26:32.209496 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-jcxn9"] Nov 29 07:26:32 crc kubenswrapper[4731]: E1129 07:26:32.209984 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d843330-ffae-4bc9-a8b3-c2df891a1aae" containerName="neutron-db-sync" Nov 29 07:26:32 crc kubenswrapper[4731]: I1129 07:26:32.209996 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d843330-ffae-4bc9-a8b3-c2df891a1aae" containerName="neutron-db-sync" Nov 29 07:26:32 crc kubenswrapper[4731]: I1129 07:26:32.210192 4731 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="2d843330-ffae-4bc9-a8b3-c2df891a1aae" containerName="neutron-db-sync" Nov 29 07:26:32 crc kubenswrapper[4731]: I1129 07:26:32.211166 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-jcxn9" Nov 29 07:26:32 crc kubenswrapper[4731]: I1129 07:26:32.269646 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-jcxn9"] Nov 29 07:26:32 crc kubenswrapper[4731]: I1129 07:26:32.280447 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-66b9c88964-2rnsc"] Nov 29 07:26:32 crc kubenswrapper[4731]: I1129 07:26:32.290730 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-66b9c88964-2rnsc" Nov 29 07:26:32 crc kubenswrapper[4731]: I1129 07:26:32.292729 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7b4e2237-3b7e-437e-8029-7c7b8228d7cf-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-jcxn9\" (UID: \"7b4e2237-3b7e-437e-8029-7c7b8228d7cf\") " pod="openstack/dnsmasq-dns-848cf88cfc-jcxn9" Nov 29 07:26:32 crc kubenswrapper[4731]: I1129 07:26:32.292768 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b4e2237-3b7e-437e-8029-7c7b8228d7cf-config\") pod \"dnsmasq-dns-848cf88cfc-jcxn9\" (UID: \"7b4e2237-3b7e-437e-8029-7c7b8228d7cf\") " pod="openstack/dnsmasq-dns-848cf88cfc-jcxn9" Nov 29 07:26:32 crc kubenswrapper[4731]: I1129 07:26:32.292790 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7b4e2237-3b7e-437e-8029-7c7b8228d7cf-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-jcxn9\" (UID: \"7b4e2237-3b7e-437e-8029-7c7b8228d7cf\") " pod="openstack/dnsmasq-dns-848cf88cfc-jcxn9" Nov 29 07:26:32 crc 
kubenswrapper[4731]: I1129 07:26:32.292824 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7b4e2237-3b7e-437e-8029-7c7b8228d7cf-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-jcxn9\" (UID: \"7b4e2237-3b7e-437e-8029-7c7b8228d7cf\") " pod="openstack/dnsmasq-dns-848cf88cfc-jcxn9" Nov 29 07:26:32 crc kubenswrapper[4731]: I1129 07:26:32.292858 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nfh8\" (UniqueName: \"kubernetes.io/projected/7b4e2237-3b7e-437e-8029-7c7b8228d7cf-kube-api-access-9nfh8\") pod \"dnsmasq-dns-848cf88cfc-jcxn9\" (UID: \"7b4e2237-3b7e-437e-8029-7c7b8228d7cf\") " pod="openstack/dnsmasq-dns-848cf88cfc-jcxn9" Nov 29 07:26:32 crc kubenswrapper[4731]: I1129 07:26:32.292923 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7b4e2237-3b7e-437e-8029-7c7b8228d7cf-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-jcxn9\" (UID: \"7b4e2237-3b7e-437e-8029-7c7b8228d7cf\") " pod="openstack/dnsmasq-dns-848cf88cfc-jcxn9" Nov 29 07:26:32 crc kubenswrapper[4731]: I1129 07:26:32.294187 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Nov 29 07:26:32 crc kubenswrapper[4731]: I1129 07:26:32.294465 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Nov 29 07:26:32 crc kubenswrapper[4731]: I1129 07:26:32.296136 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Nov 29 07:26:32 crc kubenswrapper[4731]: I1129 07:26:32.296503 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-xmjfp" Nov 29 07:26:32 crc kubenswrapper[4731]: I1129 07:26:32.343640 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/neutron-66b9c88964-2rnsc"] Nov 29 07:26:32 crc kubenswrapper[4731]: I1129 07:26:32.396629 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7b4e2237-3b7e-437e-8029-7c7b8228d7cf-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-jcxn9\" (UID: \"7b4e2237-3b7e-437e-8029-7c7b8228d7cf\") " pod="openstack/dnsmasq-dns-848cf88cfc-jcxn9" Nov 29 07:26:32 crc kubenswrapper[4731]: I1129 07:26:32.396694 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56d6dd27-1657-4460-8dc9-cb18176d395a-combined-ca-bundle\") pod \"neutron-66b9c88964-2rnsc\" (UID: \"56d6dd27-1657-4460-8dc9-cb18176d395a\") " pod="openstack/neutron-66b9c88964-2rnsc" Nov 29 07:26:32 crc kubenswrapper[4731]: I1129 07:26:32.396733 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7b4e2237-3b7e-437e-8029-7c7b8228d7cf-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-jcxn9\" (UID: \"7b4e2237-3b7e-437e-8029-7c7b8228d7cf\") " pod="openstack/dnsmasq-dns-848cf88cfc-jcxn9" Nov 29 07:26:32 crc kubenswrapper[4731]: I1129 07:26:32.396795 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b4e2237-3b7e-437e-8029-7c7b8228d7cf-config\") pod \"dnsmasq-dns-848cf88cfc-jcxn9\" (UID: \"7b4e2237-3b7e-437e-8029-7c7b8228d7cf\") " pod="openstack/dnsmasq-dns-848cf88cfc-jcxn9" Nov 29 07:26:32 crc kubenswrapper[4731]: I1129 07:26:32.396820 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7b4e2237-3b7e-437e-8029-7c7b8228d7cf-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-jcxn9\" (UID: \"7b4e2237-3b7e-437e-8029-7c7b8228d7cf\") " pod="openstack/dnsmasq-dns-848cf88cfc-jcxn9" Nov 29 
07:26:32 crc kubenswrapper[4731]: I1129 07:26:32.396858 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7b4e2237-3b7e-437e-8029-7c7b8228d7cf-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-jcxn9\" (UID: \"7b4e2237-3b7e-437e-8029-7c7b8228d7cf\") " pod="openstack/dnsmasq-dns-848cf88cfc-jcxn9" Nov 29 07:26:32 crc kubenswrapper[4731]: I1129 07:26:32.396878 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swbs6\" (UniqueName: \"kubernetes.io/projected/56d6dd27-1657-4460-8dc9-cb18176d395a-kube-api-access-swbs6\") pod \"neutron-66b9c88964-2rnsc\" (UID: \"56d6dd27-1657-4460-8dc9-cb18176d395a\") " pod="openstack/neutron-66b9c88964-2rnsc" Nov 29 07:26:32 crc kubenswrapper[4731]: I1129 07:26:32.396905 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/56d6dd27-1657-4460-8dc9-cb18176d395a-httpd-config\") pod \"neutron-66b9c88964-2rnsc\" (UID: \"56d6dd27-1657-4460-8dc9-cb18176d395a\") " pod="openstack/neutron-66b9c88964-2rnsc" Nov 29 07:26:32 crc kubenswrapper[4731]: I1129 07:26:32.396937 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nfh8\" (UniqueName: \"kubernetes.io/projected/7b4e2237-3b7e-437e-8029-7c7b8228d7cf-kube-api-access-9nfh8\") pod \"dnsmasq-dns-848cf88cfc-jcxn9\" (UID: \"7b4e2237-3b7e-437e-8029-7c7b8228d7cf\") " pod="openstack/dnsmasq-dns-848cf88cfc-jcxn9" Nov 29 07:26:32 crc kubenswrapper[4731]: I1129 07:26:32.396956 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/56d6dd27-1657-4460-8dc9-cb18176d395a-config\") pod \"neutron-66b9c88964-2rnsc\" (UID: \"56d6dd27-1657-4460-8dc9-cb18176d395a\") " pod="openstack/neutron-66b9c88964-2rnsc" Nov 29 07:26:32 crc 
kubenswrapper[4731]: I1129 07:26:32.397016 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/56d6dd27-1657-4460-8dc9-cb18176d395a-ovndb-tls-certs\") pod \"neutron-66b9c88964-2rnsc\" (UID: \"56d6dd27-1657-4460-8dc9-cb18176d395a\") " pod="openstack/neutron-66b9c88964-2rnsc" Nov 29 07:26:32 crc kubenswrapper[4731]: I1129 07:26:32.397747 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7b4e2237-3b7e-437e-8029-7c7b8228d7cf-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-jcxn9\" (UID: \"7b4e2237-3b7e-437e-8029-7c7b8228d7cf\") " pod="openstack/dnsmasq-dns-848cf88cfc-jcxn9" Nov 29 07:26:32 crc kubenswrapper[4731]: I1129 07:26:32.397924 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7b4e2237-3b7e-437e-8029-7c7b8228d7cf-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-jcxn9\" (UID: \"7b4e2237-3b7e-437e-8029-7c7b8228d7cf\") " pod="openstack/dnsmasq-dns-848cf88cfc-jcxn9" Nov 29 07:26:32 crc kubenswrapper[4731]: I1129 07:26:32.398351 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b4e2237-3b7e-437e-8029-7c7b8228d7cf-config\") pod \"dnsmasq-dns-848cf88cfc-jcxn9\" (UID: \"7b4e2237-3b7e-437e-8029-7c7b8228d7cf\") " pod="openstack/dnsmasq-dns-848cf88cfc-jcxn9" Nov 29 07:26:32 crc kubenswrapper[4731]: I1129 07:26:32.398983 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7b4e2237-3b7e-437e-8029-7c7b8228d7cf-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-jcxn9\" (UID: \"7b4e2237-3b7e-437e-8029-7c7b8228d7cf\") " pod="openstack/dnsmasq-dns-848cf88cfc-jcxn9" Nov 29 07:26:32 crc kubenswrapper[4731]: I1129 07:26:32.403471 4731 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/cinder-db-sync-zcx9z" Nov 29 07:26:32 crc kubenswrapper[4731]: I1129 07:26:32.406997 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7b4e2237-3b7e-437e-8029-7c7b8228d7cf-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-jcxn9\" (UID: \"7b4e2237-3b7e-437e-8029-7c7b8228d7cf\") " pod="openstack/dnsmasq-dns-848cf88cfc-jcxn9" Nov 29 07:26:32 crc kubenswrapper[4731]: I1129 07:26:32.445623 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nfh8\" (UniqueName: \"kubernetes.io/projected/7b4e2237-3b7e-437e-8029-7c7b8228d7cf-kube-api-access-9nfh8\") pod \"dnsmasq-dns-848cf88cfc-jcxn9\" (UID: \"7b4e2237-3b7e-437e-8029-7c7b8228d7cf\") " pod="openstack/dnsmasq-dns-848cf88cfc-jcxn9" Nov 29 07:26:32 crc kubenswrapper[4731]: I1129 07:26:32.500272 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qh9fs\" (UniqueName: \"kubernetes.io/projected/9af027cc-cbd4-4f3a-ad25-2ef5b126d590-kube-api-access-qh9fs\") pod \"9af027cc-cbd4-4f3a-ad25-2ef5b126d590\" (UID: \"9af027cc-cbd4-4f3a-ad25-2ef5b126d590\") " Nov 29 07:26:32 crc kubenswrapper[4731]: I1129 07:26:32.500859 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9af027cc-cbd4-4f3a-ad25-2ef5b126d590-scripts\") pod \"9af027cc-cbd4-4f3a-ad25-2ef5b126d590\" (UID: \"9af027cc-cbd4-4f3a-ad25-2ef5b126d590\") " Nov 29 07:26:32 crc kubenswrapper[4731]: I1129 07:26:32.501025 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9af027cc-cbd4-4f3a-ad25-2ef5b126d590-combined-ca-bundle\") pod \"9af027cc-cbd4-4f3a-ad25-2ef5b126d590\" (UID: \"9af027cc-cbd4-4f3a-ad25-2ef5b126d590\") " Nov 29 07:26:32 crc kubenswrapper[4731]: I1129 07:26:32.501103 4731 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9af027cc-cbd4-4f3a-ad25-2ef5b126d590-etc-machine-id\") pod \"9af027cc-cbd4-4f3a-ad25-2ef5b126d590\" (UID: \"9af027cc-cbd4-4f3a-ad25-2ef5b126d590\") " Nov 29 07:26:32 crc kubenswrapper[4731]: I1129 07:26:32.501140 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9af027cc-cbd4-4f3a-ad25-2ef5b126d590-config-data\") pod \"9af027cc-cbd4-4f3a-ad25-2ef5b126d590\" (UID: \"9af027cc-cbd4-4f3a-ad25-2ef5b126d590\") " Nov 29 07:26:32 crc kubenswrapper[4731]: I1129 07:26:32.501198 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9af027cc-cbd4-4f3a-ad25-2ef5b126d590-db-sync-config-data\") pod \"9af027cc-cbd4-4f3a-ad25-2ef5b126d590\" (UID: \"9af027cc-cbd4-4f3a-ad25-2ef5b126d590\") " Nov 29 07:26:32 crc kubenswrapper[4731]: I1129 07:26:32.501202 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9af027cc-cbd4-4f3a-ad25-2ef5b126d590-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "9af027cc-cbd4-4f3a-ad25-2ef5b126d590" (UID: "9af027cc-cbd4-4f3a-ad25-2ef5b126d590"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:26:32 crc kubenswrapper[4731]: I1129 07:26:32.501743 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/56d6dd27-1657-4460-8dc9-cb18176d395a-ovndb-tls-certs\") pod \"neutron-66b9c88964-2rnsc\" (UID: \"56d6dd27-1657-4460-8dc9-cb18176d395a\") " pod="openstack/neutron-66b9c88964-2rnsc" Nov 29 07:26:32 crc kubenswrapper[4731]: I1129 07:26:32.501850 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56d6dd27-1657-4460-8dc9-cb18176d395a-combined-ca-bundle\") pod \"neutron-66b9c88964-2rnsc\" (UID: \"56d6dd27-1657-4460-8dc9-cb18176d395a\") " pod="openstack/neutron-66b9c88964-2rnsc" Nov 29 07:26:32 crc kubenswrapper[4731]: I1129 07:26:32.501970 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swbs6\" (UniqueName: \"kubernetes.io/projected/56d6dd27-1657-4460-8dc9-cb18176d395a-kube-api-access-swbs6\") pod \"neutron-66b9c88964-2rnsc\" (UID: \"56d6dd27-1657-4460-8dc9-cb18176d395a\") " pod="openstack/neutron-66b9c88964-2rnsc" Nov 29 07:26:32 crc kubenswrapper[4731]: I1129 07:26:32.502013 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/56d6dd27-1657-4460-8dc9-cb18176d395a-httpd-config\") pod \"neutron-66b9c88964-2rnsc\" (UID: \"56d6dd27-1657-4460-8dc9-cb18176d395a\") " pod="openstack/neutron-66b9c88964-2rnsc" Nov 29 07:26:32 crc kubenswrapper[4731]: I1129 07:26:32.502060 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/56d6dd27-1657-4460-8dc9-cb18176d395a-config\") pod \"neutron-66b9c88964-2rnsc\" (UID: \"56d6dd27-1657-4460-8dc9-cb18176d395a\") " pod="openstack/neutron-66b9c88964-2rnsc" Nov 29 07:26:32 crc kubenswrapper[4731]: I1129 
07:26:32.502195 4731 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9af027cc-cbd4-4f3a-ad25-2ef5b126d590-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:32 crc kubenswrapper[4731]: I1129 07:26:32.524868 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/56d6dd27-1657-4460-8dc9-cb18176d395a-ovndb-tls-certs\") pod \"neutron-66b9c88964-2rnsc\" (UID: \"56d6dd27-1657-4460-8dc9-cb18176d395a\") " pod="openstack/neutron-66b9c88964-2rnsc" Nov 29 07:26:32 crc kubenswrapper[4731]: I1129 07:26:32.525318 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/56d6dd27-1657-4460-8dc9-cb18176d395a-httpd-config\") pod \"neutron-66b9c88964-2rnsc\" (UID: \"56d6dd27-1657-4460-8dc9-cb18176d395a\") " pod="openstack/neutron-66b9c88964-2rnsc" Nov 29 07:26:32 crc kubenswrapper[4731]: I1129 07:26:32.525910 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swbs6\" (UniqueName: \"kubernetes.io/projected/56d6dd27-1657-4460-8dc9-cb18176d395a-kube-api-access-swbs6\") pod \"neutron-66b9c88964-2rnsc\" (UID: \"56d6dd27-1657-4460-8dc9-cb18176d395a\") " pod="openstack/neutron-66b9c88964-2rnsc" Nov 29 07:26:32 crc kubenswrapper[4731]: I1129 07:26:32.526713 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56d6dd27-1657-4460-8dc9-cb18176d395a-combined-ca-bundle\") pod \"neutron-66b9c88964-2rnsc\" (UID: \"56d6dd27-1657-4460-8dc9-cb18176d395a\") " pod="openstack/neutron-66b9c88964-2rnsc" Nov 29 07:26:32 crc kubenswrapper[4731]: I1129 07:26:32.562830 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/56d6dd27-1657-4460-8dc9-cb18176d395a-config\") pod \"neutron-66b9c88964-2rnsc\" (UID: 
\"56d6dd27-1657-4460-8dc9-cb18176d395a\") " pod="openstack/neutron-66b9c88964-2rnsc" Nov 29 07:26:32 crc kubenswrapper[4731]: I1129 07:26:32.572038 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9af027cc-cbd4-4f3a-ad25-2ef5b126d590-kube-api-access-qh9fs" (OuterVolumeSpecName: "kube-api-access-qh9fs") pod "9af027cc-cbd4-4f3a-ad25-2ef5b126d590" (UID: "9af027cc-cbd4-4f3a-ad25-2ef5b126d590"). InnerVolumeSpecName "kube-api-access-qh9fs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:26:32 crc kubenswrapper[4731]: I1129 07:26:32.598327 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9af027cc-cbd4-4f3a-ad25-2ef5b126d590-scripts" (OuterVolumeSpecName: "scripts") pod "9af027cc-cbd4-4f3a-ad25-2ef5b126d590" (UID: "9af027cc-cbd4-4f3a-ad25-2ef5b126d590"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:26:32 crc kubenswrapper[4731]: I1129 07:26:32.605033 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qh9fs\" (UniqueName: \"kubernetes.io/projected/9af027cc-cbd4-4f3a-ad25-2ef5b126d590-kube-api-access-qh9fs\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:32 crc kubenswrapper[4731]: I1129 07:26:32.605101 4731 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9af027cc-cbd4-4f3a-ad25-2ef5b126d590-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:32 crc kubenswrapper[4731]: I1129 07:26:32.638548 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9af027cc-cbd4-4f3a-ad25-2ef5b126d590-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "9af027cc-cbd4-4f3a-ad25-2ef5b126d590" (UID: "9af027cc-cbd4-4f3a-ad25-2ef5b126d590"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:26:32 crc kubenswrapper[4731]: I1129 07:26:32.648200 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-jcxn9" Nov 29 07:26:32 crc kubenswrapper[4731]: I1129 07:26:32.687011 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-66b9c88964-2rnsc" Nov 29 07:26:32 crc kubenswrapper[4731]: I1129 07:26:32.709382 4731 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9af027cc-cbd4-4f3a-ad25-2ef5b126d590-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:32 crc kubenswrapper[4731]: I1129 07:26:32.777864 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9af027cc-cbd4-4f3a-ad25-2ef5b126d590-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9af027cc-cbd4-4f3a-ad25-2ef5b126d590" (UID: "9af027cc-cbd4-4f3a-ad25-2ef5b126d590"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:26:32 crc kubenswrapper[4731]: I1129 07:26:32.811339 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9af027cc-cbd4-4f3a-ad25-2ef5b126d590-config-data" (OuterVolumeSpecName: "config-data") pod "9af027cc-cbd4-4f3a-ad25-2ef5b126d590" (UID: "9af027cc-cbd4-4f3a-ad25-2ef5b126d590"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:26:32 crc kubenswrapper[4731]: I1129 07:26:32.811706 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9af027cc-cbd4-4f3a-ad25-2ef5b126d590-config-data\") pod \"9af027cc-cbd4-4f3a-ad25-2ef5b126d590\" (UID: \"9af027cc-cbd4-4f3a-ad25-2ef5b126d590\") " Nov 29 07:26:32 crc kubenswrapper[4731]: W1129 07:26:32.811923 4731 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/9af027cc-cbd4-4f3a-ad25-2ef5b126d590/volumes/kubernetes.io~secret/config-data Nov 29 07:26:32 crc kubenswrapper[4731]: I1129 07:26:32.811942 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9af027cc-cbd4-4f3a-ad25-2ef5b126d590-config-data" (OuterVolumeSpecName: "config-data") pod "9af027cc-cbd4-4f3a-ad25-2ef5b126d590" (UID: "9af027cc-cbd4-4f3a-ad25-2ef5b126d590"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:26:32 crc kubenswrapper[4731]: I1129 07:26:32.812224 4731 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9af027cc-cbd4-4f3a-ad25-2ef5b126d590-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:32 crc kubenswrapper[4731]: I1129 07:26:32.812250 4731 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9af027cc-cbd4-4f3a-ad25-2ef5b126d590-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:32 crc kubenswrapper[4731]: I1129 07:26:32.909841 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-c78b8bc9d-8prwv" event={"ID":"f9dcf660-e92e-44b6-b940-97d0cccdc187","Type":"ContainerStarted","Data":"2cfc10c799b22c43d9db2dc4bf6e26bbaf56942873cea265cfec14720f8e06a2"} Nov 29 07:26:32 crc kubenswrapper[4731]: I1129 07:26:32.910321 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-c78b8bc9d-8prwv" event={"ID":"f9dcf660-e92e-44b6-b940-97d0cccdc187","Type":"ContainerStarted","Data":"2a88f0c6dfddb53ce5957a004e8d07c7e005a413ea2e5964dcb7ec3bd7b50fcb"} Nov 29 07:26:32 crc kubenswrapper[4731]: I1129 07:26:32.925022 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6c9b78b974-grr5d" event={"ID":"7f362df0-515f-4aa7-980b-8c418dadcc66","Type":"ContainerStarted","Data":"a891604ff0fcabda2426fa3f33b8df83de73ac9c6993c955ec0b269aa6c6b6b7"} Nov 29 07:26:32 crc kubenswrapper[4731]: I1129 07:26:32.926522 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6c9b78b974-grr5d" Nov 29 07:26:32 crc kubenswrapper[4731]: I1129 07:26:32.926556 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6c9b78b974-grr5d" Nov 29 07:26:32 crc kubenswrapper[4731]: I1129 07:26:32.928456 4731 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c67bffd47-kztjp" Nov 29 07:26:32 crc kubenswrapper[4731]: I1129 07:26:32.946250 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"93f84d51-daf8-4c30-ba2c-e5d8aff3432c","Type":"ContainerStarted","Data":"2be644685516713cfb001d79eb74922e37dcf97269991ac10a48b3c405eb1563"} Nov 29 07:26:32 crc kubenswrapper[4731]: I1129 07:26:32.946535 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="93f84d51-daf8-4c30-ba2c-e5d8aff3432c" containerName="ceilometer-central-agent" containerID="cri-o://80484dabef96a2b4305c712d0f5c21f7e5f78598851160f53d0ecdd920e12b6c" gracePeriod=30 Nov 29 07:26:32 crc kubenswrapper[4731]: I1129 07:26:32.948656 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 29 07:26:32 crc kubenswrapper[4731]: I1129 07:26:32.948760 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="93f84d51-daf8-4c30-ba2c-e5d8aff3432c" containerName="proxy-httpd" containerID="cri-o://2be644685516713cfb001d79eb74922e37dcf97269991ac10a48b3c405eb1563" gracePeriod=30 Nov 29 07:26:32 crc kubenswrapper[4731]: I1129 07:26:32.948843 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="93f84d51-daf8-4c30-ba2c-e5d8aff3432c" containerName="sg-core" containerID="cri-o://efc3c73ee1eb2172d2e8d30dc832513f4552861add81034a4aa6c8e1afe42474" gracePeriod=30 Nov 29 07:26:32 crc kubenswrapper[4731]: I1129 07:26:32.948913 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="93f84d51-daf8-4c30-ba2c-e5d8aff3432c" containerName="ceilometer-notification-agent" containerID="cri-o://1eeb175d06560dac595e603e5a440b5bc074e50a82fcd13e045b0876b3180be8" gracePeriod=30 Nov 29 07:26:32 crc 
kubenswrapper[4731]: I1129 07:26:32.955701 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-c78b8bc9d-8prwv" podStartSLOduration=3.7242358810000002 podStartE2EDuration="11.955673412s" podCreationTimestamp="2025-11-29 07:26:21 +0000 UTC" firstStartedPulling="2025-11-29 07:26:23.175412061 +0000 UTC m=+1222.065773164" lastFinishedPulling="2025-11-29 07:26:31.406849582 +0000 UTC m=+1230.297210695" observedRunningTime="2025-11-29 07:26:32.937294923 +0000 UTC m=+1231.827656026" watchObservedRunningTime="2025-11-29 07:26:32.955673412 +0000 UTC m=+1231.846034515" Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.020352 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dzqpn\" (UniqueName: \"kubernetes.io/projected/b5738fe3-4560-49bc-b408-13d958fd04e2-kube-api-access-dzqpn\") pod \"b5738fe3-4560-49bc-b408-13d958fd04e2\" (UID: \"b5738fe3-4560-49bc-b408-13d958fd04e2\") " Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.020459 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b5738fe3-4560-49bc-b408-13d958fd04e2-ovsdbserver-nb\") pod \"b5738fe3-4560-49bc-b408-13d958fd04e2\" (UID: \"b5738fe3-4560-49bc-b408-13d958fd04e2\") " Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.020513 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5738fe3-4560-49bc-b408-13d958fd04e2-config\") pod \"b5738fe3-4560-49bc-b408-13d958fd04e2\" (UID: \"b5738fe3-4560-49bc-b408-13d958fd04e2\") " Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.020544 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b5738fe3-4560-49bc-b408-13d958fd04e2-ovsdbserver-sb\") pod \"b5738fe3-4560-49bc-b408-13d958fd04e2\" (UID: 
\"b5738fe3-4560-49bc-b408-13d958fd04e2\") " Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.020626 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b5738fe3-4560-49bc-b408-13d958fd04e2-dns-swift-storage-0\") pod \"b5738fe3-4560-49bc-b408-13d958fd04e2\" (UID: \"b5738fe3-4560-49bc-b408-13d958fd04e2\") " Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.020790 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b5738fe3-4560-49bc-b408-13d958fd04e2-dns-svc\") pod \"b5738fe3-4560-49bc-b408-13d958fd04e2\" (UID: \"b5738fe3-4560-49bc-b408-13d958fd04e2\") " Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.028703 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-64c66558f5-qcqwg" event={"ID":"46e3b820-e4ea-46a6-9a98-944bf7718c56","Type":"ContainerStarted","Data":"5180256fec578cb009d8f916b0dad3af9f65d1cc7bae2535d2b4eb7c80ac4245"} Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.028773 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-64c66558f5-qcqwg" event={"ID":"46e3b820-e4ea-46a6-9a98-944bf7718c56","Type":"ContainerStarted","Data":"15f06c82060e79d864941b5387591d36ca86d70d83ab48a728a4059c8a61e2d7"} Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.067461 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-zcx9z" Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.067841 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-zcx9z" event={"ID":"9af027cc-cbd4-4f3a-ad25-2ef5b126d590","Type":"ContainerDied","Data":"3d88a9039cb5d56b1c701e6b7cef4e3898d10233c100b9cc94ca8f66f68cb19c"} Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.073984 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d88a9039cb5d56b1c701e6b7cef4e3898d10233c100b9cc94ca8f66f68cb19c" Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.075596 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6c9b78b974-grr5d" podStartSLOduration=8.07557733 podStartE2EDuration="8.07557733s" podCreationTimestamp="2025-11-29 07:26:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:26:33.047302477 +0000 UTC m=+1231.937663580" watchObservedRunningTime="2025-11-29 07:26:33.07557733 +0000 UTC m=+1231.965938433" Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.106259 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-64c66558f5-qcqwg" podStartSLOduration=4.018006124 podStartE2EDuration="12.106233442s" podCreationTimestamp="2025-11-29 07:26:21 +0000 UTC" firstStartedPulling="2025-11-29 07:26:23.323790625 +0000 UTC m=+1222.214151728" lastFinishedPulling="2025-11-29 07:26:31.412017943 +0000 UTC m=+1230.302379046" observedRunningTime="2025-11-29 07:26:33.10478157 +0000 UTC m=+1231.995142673" watchObservedRunningTime="2025-11-29 07:26:33.106233442 +0000 UTC m=+1231.996594545" Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.122775 4731 generic.go:334] "Generic (PLEG): container finished" podID="b5738fe3-4560-49bc-b408-13d958fd04e2" 
containerID="c407c47de10042424bbc1ec15c59e71d48d90bd8966c5797b5adeca92e58000f" exitCode=0 Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.122868 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c67bffd47-kztjp" event={"ID":"b5738fe3-4560-49bc-b408-13d958fd04e2","Type":"ContainerDied","Data":"c407c47de10042424bbc1ec15c59e71d48d90bd8966c5797b5adeca92e58000f"} Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.122906 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c67bffd47-kztjp" event={"ID":"b5738fe3-4560-49bc-b408-13d958fd04e2","Type":"ContainerDied","Data":"b6a1d1a6a5d82f94cf88065b2d8226374742a0b3f3642c70beb204581893ab37"} Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.122929 4731 scope.go:117] "RemoveContainer" containerID="c407c47de10042424bbc1ec15c59e71d48d90bd8966c5797b5adeca92e58000f" Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.123135 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c67bffd47-kztjp" Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.130147 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5738fe3-4560-49bc-b408-13d958fd04e2-kube-api-access-dzqpn" (OuterVolumeSpecName: "kube-api-access-dzqpn") pod "b5738fe3-4560-49bc-b408-13d958fd04e2" (UID: "b5738fe3-4560-49bc-b408-13d958fd04e2"). InnerVolumeSpecName "kube-api-access-dzqpn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.192944 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=4.91757204 podStartE2EDuration="1m7.192917484s" podCreationTimestamp="2025-11-29 07:25:26 +0000 UTC" firstStartedPulling="2025-11-29 07:25:29.276756611 +0000 UTC m=+1168.167117714" lastFinishedPulling="2025-11-29 07:26:31.552102055 +0000 UTC m=+1230.442463158" observedRunningTime="2025-11-29 07:26:33.167800571 +0000 UTC m=+1232.058161674" watchObservedRunningTime="2025-11-29 07:26:33.192917484 +0000 UTC m=+1232.083278587" Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.235169 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dzqpn\" (UniqueName: \"kubernetes.io/projected/b5738fe3-4560-49bc-b408-13d958fd04e2-kube-api-access-dzqpn\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.255624 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Nov 29 07:26:33 crc kubenswrapper[4731]: E1129 07:26:33.256168 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5738fe3-4560-49bc-b408-13d958fd04e2" containerName="init" Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.256186 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5738fe3-4560-49bc-b408-13d958fd04e2" containerName="init" Nov 29 07:26:33 crc kubenswrapper[4731]: E1129 07:26:33.256200 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9af027cc-cbd4-4f3a-ad25-2ef5b126d590" containerName="cinder-db-sync" Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.256207 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="9af027cc-cbd4-4f3a-ad25-2ef5b126d590" containerName="cinder-db-sync" Nov 29 07:26:33 crc kubenswrapper[4731]: E1129 07:26:33.256254 4731 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="b5738fe3-4560-49bc-b408-13d958fd04e2" containerName="dnsmasq-dns" Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.256261 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5738fe3-4560-49bc-b408-13d958fd04e2" containerName="dnsmasq-dns" Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.256519 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5738fe3-4560-49bc-b408-13d958fd04e2" containerName="dnsmasq-dns" Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.256540 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="9af027cc-cbd4-4f3a-ad25-2ef5b126d590" containerName="cinder-db-sync" Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.258413 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.272793 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-9dbfp" Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.273094 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.273256 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.273408 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.282296 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5738fe3-4560-49bc-b408-13d958fd04e2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b5738fe3-4560-49bc-b408-13d958fd04e2" (UID: "b5738fe3-4560-49bc-b408-13d958fd04e2"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.319702 4731 scope.go:117] "RemoveContainer" containerID="73f0002524686d5db3d7776b80f0f328c39e76d8eb8822d2d85a1474049da6e9" Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.337872 4731 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b5738fe3-4560-49bc-b408-13d958fd04e2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.371739 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.380836 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5738fe3-4560-49bc-b408-13d958fd04e2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b5738fe3-4560-49bc-b408-13d958fd04e2" (UID: "b5738fe3-4560-49bc-b408-13d958fd04e2"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.468005 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-jcxn9"] Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.470037 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e6d89c1-88bc-4ac6-815c-e06e157bc096-config-data\") pod \"cinder-scheduler-0\" (UID: \"2e6d89c1-88bc-4ac6-815c-e06e157bc096\") " pod="openstack/cinder-scheduler-0" Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.470106 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2e6d89c1-88bc-4ac6-815c-e06e157bc096-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"2e6d89c1-88bc-4ac6-815c-e06e157bc096\") " pod="openstack/cinder-scheduler-0" Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.470180 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dx869\" (UniqueName: \"kubernetes.io/projected/2e6d89c1-88bc-4ac6-815c-e06e157bc096-kube-api-access-dx869\") pod \"cinder-scheduler-0\" (UID: \"2e6d89c1-88bc-4ac6-815c-e06e157bc096\") " pod="openstack/cinder-scheduler-0" Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.470225 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2e6d89c1-88bc-4ac6-815c-e06e157bc096-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"2e6d89c1-88bc-4ac6-815c-e06e157bc096\") " pod="openstack/cinder-scheduler-0" Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.470268 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/2e6d89c1-88bc-4ac6-815c-e06e157bc096-scripts\") pod \"cinder-scheduler-0\" (UID: \"2e6d89c1-88bc-4ac6-815c-e06e157bc096\") " pod="openstack/cinder-scheduler-0" Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.470399 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e6d89c1-88bc-4ac6-815c-e06e157bc096-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"2e6d89c1-88bc-4ac6-815c-e06e157bc096\") " pod="openstack/cinder-scheduler-0" Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.470529 4731 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b5738fe3-4560-49bc-b408-13d958fd04e2-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.474242 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5738fe3-4560-49bc-b408-13d958fd04e2-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "b5738fe3-4560-49bc-b408-13d958fd04e2" (UID: "b5738fe3-4560-49bc-b408-13d958fd04e2"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.478290 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5738fe3-4560-49bc-b408-13d958fd04e2-config" (OuterVolumeSpecName: "config") pod "b5738fe3-4560-49bc-b408-13d958fd04e2" (UID: "b5738fe3-4560-49bc-b408-13d958fd04e2"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.478463 4731 scope.go:117] "RemoveContainer" containerID="c407c47de10042424bbc1ec15c59e71d48d90bd8966c5797b5adeca92e58000f" Nov 29 07:26:33 crc kubenswrapper[4731]: E1129 07:26:33.503521 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c407c47de10042424bbc1ec15c59e71d48d90bd8966c5797b5adeca92e58000f\": container with ID starting with c407c47de10042424bbc1ec15c59e71d48d90bd8966c5797b5adeca92e58000f not found: ID does not exist" containerID="c407c47de10042424bbc1ec15c59e71d48d90bd8966c5797b5adeca92e58000f" Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.504011 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c407c47de10042424bbc1ec15c59e71d48d90bd8966c5797b5adeca92e58000f"} err="failed to get container status \"c407c47de10042424bbc1ec15c59e71d48d90bd8966c5797b5adeca92e58000f\": rpc error: code = NotFound desc = could not find container \"c407c47de10042424bbc1ec15c59e71d48d90bd8966c5797b5adeca92e58000f\": container with ID starting with c407c47de10042424bbc1ec15c59e71d48d90bd8966c5797b5adeca92e58000f not found: ID does not exist" Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.504136 4731 scope.go:117] "RemoveContainer" containerID="73f0002524686d5db3d7776b80f0f328c39e76d8eb8822d2d85a1474049da6e9" Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.507381 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5738fe3-4560-49bc-b408-13d958fd04e2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b5738fe3-4560-49bc-b408-13d958fd04e2" (UID: "b5738fe3-4560-49bc-b408-13d958fd04e2"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:26:33 crc kubenswrapper[4731]: E1129 07:26:33.519892 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73f0002524686d5db3d7776b80f0f328c39e76d8eb8822d2d85a1474049da6e9\": container with ID starting with 73f0002524686d5db3d7776b80f0f328c39e76d8eb8822d2d85a1474049da6e9 not found: ID does not exist" containerID="73f0002524686d5db3d7776b80f0f328c39e76d8eb8822d2d85a1474049da6e9" Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.519966 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73f0002524686d5db3d7776b80f0f328c39e76d8eb8822d2d85a1474049da6e9"} err="failed to get container status \"73f0002524686d5db3d7776b80f0f328c39e76d8eb8822d2d85a1474049da6e9\": rpc error: code = NotFound desc = could not find container \"73f0002524686d5db3d7776b80f0f328c39e76d8eb8822d2d85a1474049da6e9\": container with ID starting with 73f0002524686d5db3d7776b80f0f328c39e76d8eb8822d2d85a1474049da6e9 not found: ID does not exist" Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.571947 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dx869\" (UniqueName: \"kubernetes.io/projected/2e6d89c1-88bc-4ac6-815c-e06e157bc096-kube-api-access-dx869\") pod \"cinder-scheduler-0\" (UID: \"2e6d89c1-88bc-4ac6-815c-e06e157bc096\") " pod="openstack/cinder-scheduler-0" Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.572008 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2e6d89c1-88bc-4ac6-815c-e06e157bc096-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"2e6d89c1-88bc-4ac6-815c-e06e157bc096\") " pod="openstack/cinder-scheduler-0" Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.572053 4731 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e6d89c1-88bc-4ac6-815c-e06e157bc096-scripts\") pod \"cinder-scheduler-0\" (UID: \"2e6d89c1-88bc-4ac6-815c-e06e157bc096\") " pod="openstack/cinder-scheduler-0" Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.572128 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e6d89c1-88bc-4ac6-815c-e06e157bc096-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"2e6d89c1-88bc-4ac6-815c-e06e157bc096\") " pod="openstack/cinder-scheduler-0" Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.572202 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e6d89c1-88bc-4ac6-815c-e06e157bc096-config-data\") pod \"cinder-scheduler-0\" (UID: \"2e6d89c1-88bc-4ac6-815c-e06e157bc096\") " pod="openstack/cinder-scheduler-0" Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.572223 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2e6d89c1-88bc-4ac6-815c-e06e157bc096-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"2e6d89c1-88bc-4ac6-815c-e06e157bc096\") " pod="openstack/cinder-scheduler-0" Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.572278 4731 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b5738fe3-4560-49bc-b408-13d958fd04e2-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.572290 4731 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b5738fe3-4560-49bc-b408-13d958fd04e2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.572300 4731 reconciler_common.go:293] "Volume detached for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/b5738fe3-4560-49bc-b408-13d958fd04e2-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.572440 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2e6d89c1-88bc-4ac6-815c-e06e157bc096-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"2e6d89c1-88bc-4ac6-815c-e06e157bc096\") " pod="openstack/cinder-scheduler-0" Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.579235 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e6d89c1-88bc-4ac6-815c-e06e157bc096-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"2e6d89c1-88bc-4ac6-815c-e06e157bc096\") " pod="openstack/cinder-scheduler-0" Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.588652 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2e6d89c1-88bc-4ac6-815c-e06e157bc096-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"2e6d89c1-88bc-4ac6-815c-e06e157bc096\") " pod="openstack/cinder-scheduler-0" Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.591649 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e6d89c1-88bc-4ac6-815c-e06e157bc096-scripts\") pod \"cinder-scheduler-0\" (UID: \"2e6d89c1-88bc-4ac6-815c-e06e157bc096\") " pod="openstack/cinder-scheduler-0" Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.599108 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e6d89c1-88bc-4ac6-815c-e06e157bc096-config-data\") pod \"cinder-scheduler-0\" (UID: \"2e6d89c1-88bc-4ac6-815c-e06e157bc096\") " pod="openstack/cinder-scheduler-0" Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.606462 4731 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dx869\" (UniqueName: \"kubernetes.io/projected/2e6d89c1-88bc-4ac6-815c-e06e157bc096-kube-api-access-dx869\") pod \"cinder-scheduler-0\" (UID: \"2e6d89c1-88bc-4ac6-815c-e06e157bc096\") " pod="openstack/cinder-scheduler-0" Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.619376 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-ljnvk"] Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.629702 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-ljnvk" Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.640270 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-ljnvk"] Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.664693 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-jcxn9"] Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.665507 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.673092 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.675847 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.678680 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.707455 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.783421 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1804b1ab-caff-4bb2-96ba-27f3927d8ac3-logs\") pod \"cinder-api-0\" (UID: \"1804b1ab-caff-4bb2-96ba-27f3927d8ac3\") " pod="openstack/cinder-api-0" Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.783493 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9eee44fb-eee4-4aa9-9a6f-680039d29c74-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-ljnvk\" (UID: \"9eee44fb-eee4-4aa9-9a6f-680039d29c74\") " pod="openstack/dnsmasq-dns-6578955fd5-ljnvk" Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.783576 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1804b1ab-caff-4bb2-96ba-27f3927d8ac3-etc-machine-id\") pod \"cinder-api-0\" (UID: \"1804b1ab-caff-4bb2-96ba-27f3927d8ac3\") " pod="openstack/cinder-api-0" Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.783639 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2swz\" (UniqueName: \"kubernetes.io/projected/1804b1ab-caff-4bb2-96ba-27f3927d8ac3-kube-api-access-v2swz\") pod \"cinder-api-0\" (UID: \"1804b1ab-caff-4bb2-96ba-27f3927d8ac3\") " pod="openstack/cinder-api-0" Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.783673 4731 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9eee44fb-eee4-4aa9-9a6f-680039d29c74-config\") pod \"dnsmasq-dns-6578955fd5-ljnvk\" (UID: \"9eee44fb-eee4-4aa9-9a6f-680039d29c74\") " pod="openstack/dnsmasq-dns-6578955fd5-ljnvk" Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.783694 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1804b1ab-caff-4bb2-96ba-27f3927d8ac3-config-data-custom\") pod \"cinder-api-0\" (UID: \"1804b1ab-caff-4bb2-96ba-27f3927d8ac3\") " pod="openstack/cinder-api-0" Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.783712 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1804b1ab-caff-4bb2-96ba-27f3927d8ac3-scripts\") pod \"cinder-api-0\" (UID: \"1804b1ab-caff-4bb2-96ba-27f3927d8ac3\") " pod="openstack/cinder-api-0" Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.783757 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9eee44fb-eee4-4aa9-9a6f-680039d29c74-dns-svc\") pod \"dnsmasq-dns-6578955fd5-ljnvk\" (UID: \"9eee44fb-eee4-4aa9-9a6f-680039d29c74\") " pod="openstack/dnsmasq-dns-6578955fd5-ljnvk" Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.783785 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kplw7\" (UniqueName: \"kubernetes.io/projected/9eee44fb-eee4-4aa9-9a6f-680039d29c74-kube-api-access-kplw7\") pod \"dnsmasq-dns-6578955fd5-ljnvk\" (UID: \"9eee44fb-eee4-4aa9-9a6f-680039d29c74\") " pod="openstack/dnsmasq-dns-6578955fd5-ljnvk" Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.783805 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9eee44fb-eee4-4aa9-9a6f-680039d29c74-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-ljnvk\" (UID: \"9eee44fb-eee4-4aa9-9a6f-680039d29c74\") " pod="openstack/dnsmasq-dns-6578955fd5-ljnvk" Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.783826 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1804b1ab-caff-4bb2-96ba-27f3927d8ac3-config-data\") pod \"cinder-api-0\" (UID: \"1804b1ab-caff-4bb2-96ba-27f3927d8ac3\") " pod="openstack/cinder-api-0" Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.783845 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1804b1ab-caff-4bb2-96ba-27f3927d8ac3-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"1804b1ab-caff-4bb2-96ba-27f3927d8ac3\") " pod="openstack/cinder-api-0" Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.783864 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9eee44fb-eee4-4aa9-9a6f-680039d29c74-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-ljnvk\" (UID: \"9eee44fb-eee4-4aa9-9a6f-680039d29c74\") " pod="openstack/dnsmasq-dns-6578955fd5-ljnvk" Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.805827 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c67bffd47-kztjp"] Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.844784 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7c67bffd47-kztjp"] Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.885418 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/9eee44fb-eee4-4aa9-9a6f-680039d29c74-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-ljnvk\" (UID: \"9eee44fb-eee4-4aa9-9a6f-680039d29c74\") " pod="openstack/dnsmasq-dns-6578955fd5-ljnvk" Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.885535 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1804b1ab-caff-4bb2-96ba-27f3927d8ac3-etc-machine-id\") pod \"cinder-api-0\" (UID: \"1804b1ab-caff-4bb2-96ba-27f3927d8ac3\") " pod="openstack/cinder-api-0" Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.885552 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2swz\" (UniqueName: \"kubernetes.io/projected/1804b1ab-caff-4bb2-96ba-27f3927d8ac3-kube-api-access-v2swz\") pod \"cinder-api-0\" (UID: \"1804b1ab-caff-4bb2-96ba-27f3927d8ac3\") " pod="openstack/cinder-api-0" Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.885604 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9eee44fb-eee4-4aa9-9a6f-680039d29c74-config\") pod \"dnsmasq-dns-6578955fd5-ljnvk\" (UID: \"9eee44fb-eee4-4aa9-9a6f-680039d29c74\") " pod="openstack/dnsmasq-dns-6578955fd5-ljnvk" Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.885626 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1804b1ab-caff-4bb2-96ba-27f3927d8ac3-config-data-custom\") pod \"cinder-api-0\" (UID: \"1804b1ab-caff-4bb2-96ba-27f3927d8ac3\") " pod="openstack/cinder-api-0" Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.885644 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1804b1ab-caff-4bb2-96ba-27f3927d8ac3-scripts\") pod \"cinder-api-0\" (UID: \"1804b1ab-caff-4bb2-96ba-27f3927d8ac3\") " 
pod="openstack/cinder-api-0" Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.885707 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9eee44fb-eee4-4aa9-9a6f-680039d29c74-dns-svc\") pod \"dnsmasq-dns-6578955fd5-ljnvk\" (UID: \"9eee44fb-eee4-4aa9-9a6f-680039d29c74\") " pod="openstack/dnsmasq-dns-6578955fd5-ljnvk" Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.885724 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kplw7\" (UniqueName: \"kubernetes.io/projected/9eee44fb-eee4-4aa9-9a6f-680039d29c74-kube-api-access-kplw7\") pod \"dnsmasq-dns-6578955fd5-ljnvk\" (UID: \"9eee44fb-eee4-4aa9-9a6f-680039d29c74\") " pod="openstack/dnsmasq-dns-6578955fd5-ljnvk" Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.885745 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9eee44fb-eee4-4aa9-9a6f-680039d29c74-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-ljnvk\" (UID: \"9eee44fb-eee4-4aa9-9a6f-680039d29c74\") " pod="openstack/dnsmasq-dns-6578955fd5-ljnvk" Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.885767 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1804b1ab-caff-4bb2-96ba-27f3927d8ac3-config-data\") pod \"cinder-api-0\" (UID: \"1804b1ab-caff-4bb2-96ba-27f3927d8ac3\") " pod="openstack/cinder-api-0" Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.885787 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1804b1ab-caff-4bb2-96ba-27f3927d8ac3-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"1804b1ab-caff-4bb2-96ba-27f3927d8ac3\") " pod="openstack/cinder-api-0" Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.885807 4731 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9eee44fb-eee4-4aa9-9a6f-680039d29c74-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-ljnvk\" (UID: \"9eee44fb-eee4-4aa9-9a6f-680039d29c74\") " pod="openstack/dnsmasq-dns-6578955fd5-ljnvk" Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.885827 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1804b1ab-caff-4bb2-96ba-27f3927d8ac3-logs\") pod \"cinder-api-0\" (UID: \"1804b1ab-caff-4bb2-96ba-27f3927d8ac3\") " pod="openstack/cinder-api-0" Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.887067 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1804b1ab-caff-4bb2-96ba-27f3927d8ac3-etc-machine-id\") pod \"cinder-api-0\" (UID: \"1804b1ab-caff-4bb2-96ba-27f3927d8ac3\") " pod="openstack/cinder-api-0" Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.887790 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9eee44fb-eee4-4aa9-9a6f-680039d29c74-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-ljnvk\" (UID: \"9eee44fb-eee4-4aa9-9a6f-680039d29c74\") " pod="openstack/dnsmasq-dns-6578955fd5-ljnvk" Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.888750 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9eee44fb-eee4-4aa9-9a6f-680039d29c74-dns-svc\") pod \"dnsmasq-dns-6578955fd5-ljnvk\" (UID: \"9eee44fb-eee4-4aa9-9a6f-680039d29c74\") " pod="openstack/dnsmasq-dns-6578955fd5-ljnvk" Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.889641 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9eee44fb-eee4-4aa9-9a6f-680039d29c74-config\") pod 
\"dnsmasq-dns-6578955fd5-ljnvk\" (UID: \"9eee44fb-eee4-4aa9-9a6f-680039d29c74\") " pod="openstack/dnsmasq-dns-6578955fd5-ljnvk" Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.890437 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9eee44fb-eee4-4aa9-9a6f-680039d29c74-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-ljnvk\" (UID: \"9eee44fb-eee4-4aa9-9a6f-680039d29c74\") " pod="openstack/dnsmasq-dns-6578955fd5-ljnvk" Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.890826 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1804b1ab-caff-4bb2-96ba-27f3927d8ac3-logs\") pod \"cinder-api-0\" (UID: \"1804b1ab-caff-4bb2-96ba-27f3927d8ac3\") " pod="openstack/cinder-api-0" Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.892685 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9eee44fb-eee4-4aa9-9a6f-680039d29c74-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-ljnvk\" (UID: \"9eee44fb-eee4-4aa9-9a6f-680039d29c74\") " pod="openstack/dnsmasq-dns-6578955fd5-ljnvk" Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.899479 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1804b1ab-caff-4bb2-96ba-27f3927d8ac3-config-data-custom\") pod \"cinder-api-0\" (UID: \"1804b1ab-caff-4bb2-96ba-27f3927d8ac3\") " pod="openstack/cinder-api-0" Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.907613 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1804b1ab-caff-4bb2-96ba-27f3927d8ac3-config-data\") pod \"cinder-api-0\" (UID: \"1804b1ab-caff-4bb2-96ba-27f3927d8ac3\") " pod="openstack/cinder-api-0" Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.914301 4731 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1804b1ab-caff-4bb2-96ba-27f3927d8ac3-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"1804b1ab-caff-4bb2-96ba-27f3927d8ac3\") " pod="openstack/cinder-api-0" Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.915056 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-66b9c88964-2rnsc"] Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.915519 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1804b1ab-caff-4bb2-96ba-27f3927d8ac3-scripts\") pod \"cinder-api-0\" (UID: \"1804b1ab-caff-4bb2-96ba-27f3927d8ac3\") " pod="openstack/cinder-api-0" Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.918309 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kplw7\" (UniqueName: \"kubernetes.io/projected/9eee44fb-eee4-4aa9-9a6f-680039d29c74-kube-api-access-kplw7\") pod \"dnsmasq-dns-6578955fd5-ljnvk\" (UID: \"9eee44fb-eee4-4aa9-9a6f-680039d29c74\") " pod="openstack/dnsmasq-dns-6578955fd5-ljnvk" Nov 29 07:26:33 crc kubenswrapper[4731]: I1129 07:26:33.943808 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2swz\" (UniqueName: \"kubernetes.io/projected/1804b1ab-caff-4bb2-96ba-27f3927d8ac3-kube-api-access-v2swz\") pod \"cinder-api-0\" (UID: \"1804b1ab-caff-4bb2-96ba-27f3927d8ac3\") " pod="openstack/cinder-api-0" Nov 29 07:26:34 crc kubenswrapper[4731]: I1129 07:26:34.005291 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-ljnvk" Nov 29 07:26:34 crc kubenswrapper[4731]: I1129 07:26:34.021522 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 29 07:26:34 crc kubenswrapper[4731]: I1129 07:26:34.213159 4731 generic.go:334] "Generic (PLEG): container finished" podID="93f84d51-daf8-4c30-ba2c-e5d8aff3432c" containerID="2be644685516713cfb001d79eb74922e37dcf97269991ac10a48b3c405eb1563" exitCode=0 Nov 29 07:26:34 crc kubenswrapper[4731]: I1129 07:26:34.213611 4731 generic.go:334] "Generic (PLEG): container finished" podID="93f84d51-daf8-4c30-ba2c-e5d8aff3432c" containerID="efc3c73ee1eb2172d2e8d30dc832513f4552861add81034a4aa6c8e1afe42474" exitCode=2 Nov 29 07:26:34 crc kubenswrapper[4731]: I1129 07:26:34.213656 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"93f84d51-daf8-4c30-ba2c-e5d8aff3432c","Type":"ContainerDied","Data":"2be644685516713cfb001d79eb74922e37dcf97269991ac10a48b3c405eb1563"} Nov 29 07:26:34 crc kubenswrapper[4731]: I1129 07:26:34.213684 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"93f84d51-daf8-4c30-ba2c-e5d8aff3432c","Type":"ContainerDied","Data":"efc3c73ee1eb2172d2e8d30dc832513f4552861add81034a4aa6c8e1afe42474"} Nov 29 07:26:34 crc kubenswrapper[4731]: I1129 07:26:34.222862 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-jcxn9" event={"ID":"7b4e2237-3b7e-437e-8029-7c7b8228d7cf","Type":"ContainerStarted","Data":"1c617d3af56b0dcd2c5e85050aaa6efab4413de13d920376ca8449f4fe6d1a5f"} Nov 29 07:26:34 crc kubenswrapper[4731]: I1129 07:26:34.222919 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-jcxn9" event={"ID":"7b4e2237-3b7e-437e-8029-7c7b8228d7cf","Type":"ContainerStarted","Data":"ab96cad3b1344effbff3e14255a506da9af346f1f4c7fc698bd90d24da61f2cd"} Nov 29 07:26:34 crc kubenswrapper[4731]: I1129 07:26:34.227197 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-66b9c88964-2rnsc" 
event={"ID":"56d6dd27-1657-4460-8dc9-cb18176d395a","Type":"ContainerStarted","Data":"e5c29a65c39deb7fe8edf06fa90a5502017c276a2423f914116a937ee5b89306"} Nov 29 07:26:34 crc kubenswrapper[4731]: I1129 07:26:34.275205 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 29 07:26:34 crc kubenswrapper[4731]: W1129 07:26:34.301041 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2e6d89c1_88bc_4ac6_815c_e06e157bc096.slice/crio-983767b61ca6b4dee50476dbb8c3a21312e23c755e76db549f99086766f0532d WatchSource:0}: Error finding container 983767b61ca6b4dee50476dbb8c3a21312e23c755e76db549f99086766f0532d: Status 404 returned error can't find the container with id 983767b61ca6b4dee50476dbb8c3a21312e23c755e76db549f99086766f0532d Nov 29 07:26:34 crc kubenswrapper[4731]: I1129 07:26:34.613516 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 29 07:26:34 crc kubenswrapper[4731]: W1129 07:26:34.646523 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1804b1ab_caff_4bb2_96ba_27f3927d8ac3.slice/crio-ec6bbd347876cd43c9925f6bf72a956031ff9cbcac08e60913a528724bb412a1 WatchSource:0}: Error finding container ec6bbd347876cd43c9925f6bf72a956031ff9cbcac08e60913a528724bb412a1: Status 404 returned error can't find the container with id ec6bbd347876cd43c9925f6bf72a956031ff9cbcac08e60913a528724bb412a1 Nov 29 07:26:34 crc kubenswrapper[4731]: I1129 07:26:34.690421 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-ljnvk"] Nov 29 07:26:35 crc kubenswrapper[4731]: I1129 07:26:35.254769 4731 generic.go:334] "Generic (PLEG): container finished" podID="93f84d51-daf8-4c30-ba2c-e5d8aff3432c" containerID="80484dabef96a2b4305c712d0f5c21f7e5f78598851160f53d0ecdd920e12b6c" exitCode=0 Nov 29 07:26:35 crc kubenswrapper[4731]: 
I1129 07:26:35.255394 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"93f84d51-daf8-4c30-ba2c-e5d8aff3432c","Type":"ContainerDied","Data":"80484dabef96a2b4305c712d0f5c21f7e5f78598851160f53d0ecdd920e12b6c"} Nov 29 07:26:35 crc kubenswrapper[4731]: I1129 07:26:35.264173 4731 generic.go:334] "Generic (PLEG): container finished" podID="7b4e2237-3b7e-437e-8029-7c7b8228d7cf" containerID="1c617d3af56b0dcd2c5e85050aaa6efab4413de13d920376ca8449f4fe6d1a5f" exitCode=0 Nov 29 07:26:35 crc kubenswrapper[4731]: I1129 07:26:35.264391 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-jcxn9" event={"ID":"7b4e2237-3b7e-437e-8029-7c7b8228d7cf","Type":"ContainerDied","Data":"1c617d3af56b0dcd2c5e85050aaa6efab4413de13d920376ca8449f4fe6d1a5f"} Nov 29 07:26:35 crc kubenswrapper[4731]: I1129 07:26:35.271280 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-ljnvk" event={"ID":"9eee44fb-eee4-4aa9-9a6f-680039d29c74","Type":"ContainerStarted","Data":"6e1a8f2431f90e0c58899e95c99e81110014270be8eb27e0c575600813eab8ba"} Nov 29 07:26:35 crc kubenswrapper[4731]: I1129 07:26:35.277127 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-66b9c88964-2rnsc" event={"ID":"56d6dd27-1657-4460-8dc9-cb18176d395a","Type":"ContainerStarted","Data":"a1a21951772ff613daba14cce21966185304d82830e89f9c511d4b48f162f49e"} Nov 29 07:26:35 crc kubenswrapper[4731]: I1129 07:26:35.277171 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-66b9c88964-2rnsc" event={"ID":"56d6dd27-1657-4460-8dc9-cb18176d395a","Type":"ContainerStarted","Data":"992f52c28773ba224397bb0cb0d37eebcfaafc80c0523017016b379618463758"} Nov 29 07:26:35 crc kubenswrapper[4731]: I1129 07:26:35.277417 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-66b9c88964-2rnsc" Nov 29 07:26:35 crc kubenswrapper[4731]: I1129 07:26:35.283197 
4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"1804b1ab-caff-4bb2-96ba-27f3927d8ac3","Type":"ContainerStarted","Data":"ec6bbd347876cd43c9925f6bf72a956031ff9cbcac08e60913a528724bb412a1"} Nov 29 07:26:35 crc kubenswrapper[4731]: I1129 07:26:35.295394 4731 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 29 07:26:35 crc kubenswrapper[4731]: I1129 07:26:35.300321 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2e6d89c1-88bc-4ac6-815c-e06e157bc096","Type":"ContainerStarted","Data":"983767b61ca6b4dee50476dbb8c3a21312e23c755e76db549f99086766f0532d"} Nov 29 07:26:35 crc kubenswrapper[4731]: I1129 07:26:35.338221 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-66b9c88964-2rnsc" podStartSLOduration=3.338201061 podStartE2EDuration="3.338201061s" podCreationTimestamp="2025-11-29 07:26:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:26:35.337293315 +0000 UTC m=+1234.227654418" watchObservedRunningTime="2025-11-29 07:26:35.338201061 +0000 UTC m=+1234.228562164" Nov 29 07:26:35 crc kubenswrapper[4731]: I1129 07:26:35.683174 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-78dd89995d-p2zx6" Nov 29 07:26:35 crc kubenswrapper[4731]: I1129 07:26:35.809557 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-jcxn9" Nov 29 07:26:35 crc kubenswrapper[4731]: I1129 07:26:35.832636 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5738fe3-4560-49bc-b408-13d958fd04e2" path="/var/lib/kubelet/pods/b5738fe3-4560-49bc-b408-13d958fd04e2/volumes" Nov 29 07:26:35 crc kubenswrapper[4731]: I1129 07:26:35.876953 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7b4e2237-3b7e-437e-8029-7c7b8228d7cf-dns-svc\") pod \"7b4e2237-3b7e-437e-8029-7c7b8228d7cf\" (UID: \"7b4e2237-3b7e-437e-8029-7c7b8228d7cf\") " Nov 29 07:26:35 crc kubenswrapper[4731]: I1129 07:26:35.877131 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b4e2237-3b7e-437e-8029-7c7b8228d7cf-config\") pod \"7b4e2237-3b7e-437e-8029-7c7b8228d7cf\" (UID: \"7b4e2237-3b7e-437e-8029-7c7b8228d7cf\") " Nov 29 07:26:35 crc kubenswrapper[4731]: I1129 07:26:35.877347 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7b4e2237-3b7e-437e-8029-7c7b8228d7cf-ovsdbserver-nb\") pod \"7b4e2237-3b7e-437e-8029-7c7b8228d7cf\" (UID: \"7b4e2237-3b7e-437e-8029-7c7b8228d7cf\") " Nov 29 07:26:35 crc kubenswrapper[4731]: I1129 07:26:35.877396 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7b4e2237-3b7e-437e-8029-7c7b8228d7cf-ovsdbserver-sb\") pod \"7b4e2237-3b7e-437e-8029-7c7b8228d7cf\" (UID: \"7b4e2237-3b7e-437e-8029-7c7b8228d7cf\") " Nov 29 07:26:35 crc kubenswrapper[4731]: I1129 07:26:35.877482 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7b4e2237-3b7e-437e-8029-7c7b8228d7cf-dns-swift-storage-0\") pod 
\"7b4e2237-3b7e-437e-8029-7c7b8228d7cf\" (UID: \"7b4e2237-3b7e-437e-8029-7c7b8228d7cf\") " Nov 29 07:26:35 crc kubenswrapper[4731]: I1129 07:26:35.877639 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9nfh8\" (UniqueName: \"kubernetes.io/projected/7b4e2237-3b7e-437e-8029-7c7b8228d7cf-kube-api-access-9nfh8\") pod \"7b4e2237-3b7e-437e-8029-7c7b8228d7cf\" (UID: \"7b4e2237-3b7e-437e-8029-7c7b8228d7cf\") " Nov 29 07:26:35 crc kubenswrapper[4731]: I1129 07:26:35.887830 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b4e2237-3b7e-437e-8029-7c7b8228d7cf-kube-api-access-9nfh8" (OuterVolumeSpecName: "kube-api-access-9nfh8") pod "7b4e2237-3b7e-437e-8029-7c7b8228d7cf" (UID: "7b4e2237-3b7e-437e-8029-7c7b8228d7cf"). InnerVolumeSpecName "kube-api-access-9nfh8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:26:35 crc kubenswrapper[4731]: I1129 07:26:35.984062 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9nfh8\" (UniqueName: \"kubernetes.io/projected/7b4e2237-3b7e-437e-8029-7c7b8228d7cf-kube-api-access-9nfh8\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:35 crc kubenswrapper[4731]: I1129 07:26:35.993506 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b4e2237-3b7e-437e-8029-7c7b8228d7cf-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7b4e2237-3b7e-437e-8029-7c7b8228d7cf" (UID: "7b4e2237-3b7e-437e-8029-7c7b8228d7cf"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:26:36 crc kubenswrapper[4731]: I1129 07:26:36.002367 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b4e2237-3b7e-437e-8029-7c7b8228d7cf-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7b4e2237-3b7e-437e-8029-7c7b8228d7cf" (UID: "7b4e2237-3b7e-437e-8029-7c7b8228d7cf"). 
InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:26:36 crc kubenswrapper[4731]: I1129 07:26:36.034476 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b4e2237-3b7e-437e-8029-7c7b8228d7cf-config" (OuterVolumeSpecName: "config") pod "7b4e2237-3b7e-437e-8029-7c7b8228d7cf" (UID: "7b4e2237-3b7e-437e-8029-7c7b8228d7cf"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:26:36 crc kubenswrapper[4731]: I1129 07:26:36.034533 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b4e2237-3b7e-437e-8029-7c7b8228d7cf-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7b4e2237-3b7e-437e-8029-7c7b8228d7cf" (UID: "7b4e2237-3b7e-437e-8029-7c7b8228d7cf"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:26:36 crc kubenswrapper[4731]: I1129 07:26:36.034658 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b4e2237-3b7e-437e-8029-7c7b8228d7cf-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "7b4e2237-3b7e-437e-8029-7c7b8228d7cf" (UID: "7b4e2237-3b7e-437e-8029-7c7b8228d7cf"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:26:36 crc kubenswrapper[4731]: I1129 07:26:36.086262 4731 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b4e2237-3b7e-437e-8029-7c7b8228d7cf-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:36 crc kubenswrapper[4731]: I1129 07:26:36.086662 4731 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7b4e2237-3b7e-437e-8029-7c7b8228d7cf-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:36 crc kubenswrapper[4731]: I1129 07:26:36.086673 4731 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7b4e2237-3b7e-437e-8029-7c7b8228d7cf-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:36 crc kubenswrapper[4731]: I1129 07:26:36.086684 4731 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7b4e2237-3b7e-437e-8029-7c7b8228d7cf-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:36 crc kubenswrapper[4731]: I1129 07:26:36.086703 4731 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7b4e2237-3b7e-437e-8029-7c7b8228d7cf-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:36 crc kubenswrapper[4731]: I1129 07:26:36.321495 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-jcxn9" Nov 29 07:26:36 crc kubenswrapper[4731]: I1129 07:26:36.321710 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-jcxn9" event={"ID":"7b4e2237-3b7e-437e-8029-7c7b8228d7cf","Type":"ContainerDied","Data":"ab96cad3b1344effbff3e14255a506da9af346f1f4c7fc698bd90d24da61f2cd"} Nov 29 07:26:36 crc kubenswrapper[4731]: I1129 07:26:36.321778 4731 scope.go:117] "RemoveContainer" containerID="1c617d3af56b0dcd2c5e85050aaa6efab4413de13d920376ca8449f4fe6d1a5f" Nov 29 07:26:36 crc kubenswrapper[4731]: I1129 07:26:36.327190 4731 generic.go:334] "Generic (PLEG): container finished" podID="9eee44fb-eee4-4aa9-9a6f-680039d29c74" containerID="597bbb99bcf87b0ca429cd09e8416acfd698685284d7efcecd488aeb6c696509" exitCode=0 Nov 29 07:26:36 crc kubenswrapper[4731]: I1129 07:26:36.329316 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-ljnvk" event={"ID":"9eee44fb-eee4-4aa9-9a6f-680039d29c74","Type":"ContainerDied","Data":"597bbb99bcf87b0ca429cd09e8416acfd698685284d7efcecd488aeb6c696509"} Nov 29 07:26:36 crc kubenswrapper[4731]: I1129 07:26:36.494194 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-jcxn9"] Nov 29 07:26:36 crc kubenswrapper[4731]: I1129 07:26:36.535281 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 29 07:26:36 crc kubenswrapper[4731]: I1129 07:26:36.558536 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-jcxn9"] Nov 29 07:26:37 crc kubenswrapper[4731]: I1129 07:26:37.358142 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6c9b78b974-grr5d" Nov 29 07:26:37 crc kubenswrapper[4731]: I1129 07:26:37.371009 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-ljnvk" 
event={"ID":"9eee44fb-eee4-4aa9-9a6f-680039d29c74","Type":"ContainerStarted","Data":"6d0cb6056364a1288593f2cbeece05032788ee6a74c0eb18185296a1f26d2934"} Nov 29 07:26:37 crc kubenswrapper[4731]: I1129 07:26:37.374660 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6578955fd5-ljnvk" Nov 29 07:26:37 crc kubenswrapper[4731]: I1129 07:26:37.393290 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"1804b1ab-caff-4bb2-96ba-27f3927d8ac3","Type":"ContainerStarted","Data":"1cd716aa1c7986de2549fa9bea7a64e655aef899b92de1648e88ddc6e4258192"} Nov 29 07:26:37 crc kubenswrapper[4731]: I1129 07:26:37.410073 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2e6d89c1-88bc-4ac6-815c-e06e157bc096","Type":"ContainerStarted","Data":"61f7f217983ac3d669915a45d126daeb46297ec4ad950481382a221fe3d57066"} Nov 29 07:26:37 crc kubenswrapper[4731]: I1129 07:26:37.416661 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6578955fd5-ljnvk" podStartSLOduration=4.416636415 podStartE2EDuration="4.416636415s" podCreationTimestamp="2025-11-29 07:26:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:26:37.412603639 +0000 UTC m=+1236.302964742" watchObservedRunningTime="2025-11-29 07:26:37.416636415 +0000 UTC m=+1236.306997528" Nov 29 07:26:37 crc kubenswrapper[4731]: I1129 07:26:37.521343 4731 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7c67bffd47-kztjp" podUID="b5738fe3-4560-49bc-b408-13d958fd04e2" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.155:5353: i/o timeout" Nov 29 07:26:37 crc kubenswrapper[4731]: I1129 07:26:37.575667 4731 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-78dd89995d-p2zx6" 
podUID="08021cda-119f-413c-86ef-ef64660e60bb" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.156:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 29 07:26:37 crc kubenswrapper[4731]: I1129 07:26:37.767365 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-78dd89995d-p2zx6" Nov 29 07:26:37 crc kubenswrapper[4731]: I1129 07:26:37.829003 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b4e2237-3b7e-437e-8029-7c7b8228d7cf" path="/var/lib/kubelet/pods/7b4e2237-3b7e-437e-8029-7c7b8228d7cf/volumes" Nov 29 07:26:38 crc kubenswrapper[4731]: I1129 07:26:38.427153 4731 generic.go:334] "Generic (PLEG): container finished" podID="93f84d51-daf8-4c30-ba2c-e5d8aff3432c" containerID="1eeb175d06560dac595e603e5a440b5bc074e50a82fcd13e045b0876b3180be8" exitCode=0 Nov 29 07:26:38 crc kubenswrapper[4731]: I1129 07:26:38.427353 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"93f84d51-daf8-4c30-ba2c-e5d8aff3432c","Type":"ContainerDied","Data":"1eeb175d06560dac595e603e5a440b5bc074e50a82fcd13e045b0876b3180be8"} Nov 29 07:26:39 crc kubenswrapper[4731]: I1129 07:26:39.593731 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-5fcdbcfb48-gmbcm" Nov 29 07:26:39 crc kubenswrapper[4731]: I1129 07:26:39.734668 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-558fbdd7b9-2w7vs"] Nov 29 07:26:39 crc kubenswrapper[4731]: E1129 07:26:39.735702 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b4e2237-3b7e-437e-8029-7c7b8228d7cf" containerName="init" Nov 29 07:26:39 crc kubenswrapper[4731]: I1129 07:26:39.735728 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b4e2237-3b7e-437e-8029-7c7b8228d7cf" containerName="init" Nov 29 07:26:39 crc kubenswrapper[4731]: I1129 07:26:39.735988 4731 
memory_manager.go:354] "RemoveStaleState removing state" podUID="7b4e2237-3b7e-437e-8029-7c7b8228d7cf" containerName="init" Nov 29 07:26:39 crc kubenswrapper[4731]: I1129 07:26:39.737335 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-558fbdd7b9-2w7vs" Nov 29 07:26:39 crc kubenswrapper[4731]: I1129 07:26:39.740457 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Nov 29 07:26:39 crc kubenswrapper[4731]: I1129 07:26:39.740758 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Nov 29 07:26:39 crc kubenswrapper[4731]: I1129 07:26:39.748983 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/961096e3-fc62-4b26-a9de-1036f08b0fa0-internal-tls-certs\") pod \"neutron-558fbdd7b9-2w7vs\" (UID: \"961096e3-fc62-4b26-a9de-1036f08b0fa0\") " pod="openstack/neutron-558fbdd7b9-2w7vs" Nov 29 07:26:39 crc kubenswrapper[4731]: I1129 07:26:39.749138 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/961096e3-fc62-4b26-a9de-1036f08b0fa0-config\") pod \"neutron-558fbdd7b9-2w7vs\" (UID: \"961096e3-fc62-4b26-a9de-1036f08b0fa0\") " pod="openstack/neutron-558fbdd7b9-2w7vs" Nov 29 07:26:39 crc kubenswrapper[4731]: I1129 07:26:39.749165 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/961096e3-fc62-4b26-a9de-1036f08b0fa0-combined-ca-bundle\") pod \"neutron-558fbdd7b9-2w7vs\" (UID: \"961096e3-fc62-4b26-a9de-1036f08b0fa0\") " pod="openstack/neutron-558fbdd7b9-2w7vs" Nov 29 07:26:39 crc kubenswrapper[4731]: I1129 07:26:39.749185 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"httpd-config\" (UniqueName: \"kubernetes.io/secret/961096e3-fc62-4b26-a9de-1036f08b0fa0-httpd-config\") pod \"neutron-558fbdd7b9-2w7vs\" (UID: \"961096e3-fc62-4b26-a9de-1036f08b0fa0\") " pod="openstack/neutron-558fbdd7b9-2w7vs" Nov 29 07:26:39 crc kubenswrapper[4731]: I1129 07:26:39.749244 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/961096e3-fc62-4b26-a9de-1036f08b0fa0-public-tls-certs\") pod \"neutron-558fbdd7b9-2w7vs\" (UID: \"961096e3-fc62-4b26-a9de-1036f08b0fa0\") " pod="openstack/neutron-558fbdd7b9-2w7vs" Nov 29 07:26:39 crc kubenswrapper[4731]: I1129 07:26:39.749270 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/961096e3-fc62-4b26-a9de-1036f08b0fa0-ovndb-tls-certs\") pod \"neutron-558fbdd7b9-2w7vs\" (UID: \"961096e3-fc62-4b26-a9de-1036f08b0fa0\") " pod="openstack/neutron-558fbdd7b9-2w7vs" Nov 29 07:26:39 crc kubenswrapper[4731]: I1129 07:26:39.749350 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkrss\" (UniqueName: \"kubernetes.io/projected/961096e3-fc62-4b26-a9de-1036f08b0fa0-kube-api-access-jkrss\") pod \"neutron-558fbdd7b9-2w7vs\" (UID: \"961096e3-fc62-4b26-a9de-1036f08b0fa0\") " pod="openstack/neutron-558fbdd7b9-2w7vs" Nov 29 07:26:39 crc kubenswrapper[4731]: I1129 07:26:39.761100 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-558fbdd7b9-2w7vs"] Nov 29 07:26:39 crc kubenswrapper[4731]: I1129 07:26:39.851658 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/961096e3-fc62-4b26-a9de-1036f08b0fa0-combined-ca-bundle\") pod \"neutron-558fbdd7b9-2w7vs\" (UID: \"961096e3-fc62-4b26-a9de-1036f08b0fa0\") " pod="openstack/neutron-558fbdd7b9-2w7vs" Nov 29 
07:26:39 crc kubenswrapper[4731]: I1129 07:26:39.851724 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/961096e3-fc62-4b26-a9de-1036f08b0fa0-httpd-config\") pod \"neutron-558fbdd7b9-2w7vs\" (UID: \"961096e3-fc62-4b26-a9de-1036f08b0fa0\") " pod="openstack/neutron-558fbdd7b9-2w7vs" Nov 29 07:26:39 crc kubenswrapper[4731]: I1129 07:26:39.851773 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/961096e3-fc62-4b26-a9de-1036f08b0fa0-public-tls-certs\") pod \"neutron-558fbdd7b9-2w7vs\" (UID: \"961096e3-fc62-4b26-a9de-1036f08b0fa0\") " pod="openstack/neutron-558fbdd7b9-2w7vs" Nov 29 07:26:39 crc kubenswrapper[4731]: I1129 07:26:39.851812 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/961096e3-fc62-4b26-a9de-1036f08b0fa0-ovndb-tls-certs\") pod \"neutron-558fbdd7b9-2w7vs\" (UID: \"961096e3-fc62-4b26-a9de-1036f08b0fa0\") " pod="openstack/neutron-558fbdd7b9-2w7vs" Nov 29 07:26:39 crc kubenswrapper[4731]: I1129 07:26:39.851898 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkrss\" (UniqueName: \"kubernetes.io/projected/961096e3-fc62-4b26-a9de-1036f08b0fa0-kube-api-access-jkrss\") pod \"neutron-558fbdd7b9-2w7vs\" (UID: \"961096e3-fc62-4b26-a9de-1036f08b0fa0\") " pod="openstack/neutron-558fbdd7b9-2w7vs" Nov 29 07:26:39 crc kubenswrapper[4731]: I1129 07:26:39.851955 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/961096e3-fc62-4b26-a9de-1036f08b0fa0-internal-tls-certs\") pod \"neutron-558fbdd7b9-2w7vs\" (UID: \"961096e3-fc62-4b26-a9de-1036f08b0fa0\") " pod="openstack/neutron-558fbdd7b9-2w7vs" Nov 29 07:26:39 crc kubenswrapper[4731]: I1129 07:26:39.852042 4731 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/961096e3-fc62-4b26-a9de-1036f08b0fa0-config\") pod \"neutron-558fbdd7b9-2w7vs\" (UID: \"961096e3-fc62-4b26-a9de-1036f08b0fa0\") " pod="openstack/neutron-558fbdd7b9-2w7vs"
Nov 29 07:26:39 crc kubenswrapper[4731]: I1129 07:26:39.868577 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/961096e3-fc62-4b26-a9de-1036f08b0fa0-public-tls-certs\") pod \"neutron-558fbdd7b9-2w7vs\" (UID: \"961096e3-fc62-4b26-a9de-1036f08b0fa0\") " pod="openstack/neutron-558fbdd7b9-2w7vs"
Nov 29 07:26:39 crc kubenswrapper[4731]: I1129 07:26:39.869463 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/961096e3-fc62-4b26-a9de-1036f08b0fa0-combined-ca-bundle\") pod \"neutron-558fbdd7b9-2w7vs\" (UID: \"961096e3-fc62-4b26-a9de-1036f08b0fa0\") " pod="openstack/neutron-558fbdd7b9-2w7vs"
Nov 29 07:26:39 crc kubenswrapper[4731]: I1129 07:26:39.879619 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/961096e3-fc62-4b26-a9de-1036f08b0fa0-config\") pod \"neutron-558fbdd7b9-2w7vs\" (UID: \"961096e3-fc62-4b26-a9de-1036f08b0fa0\") " pod="openstack/neutron-558fbdd7b9-2w7vs"
Nov 29 07:26:39 crc kubenswrapper[4731]: I1129 07:26:39.881507 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/961096e3-fc62-4b26-a9de-1036f08b0fa0-ovndb-tls-certs\") pod \"neutron-558fbdd7b9-2w7vs\" (UID: \"961096e3-fc62-4b26-a9de-1036f08b0fa0\") " pod="openstack/neutron-558fbdd7b9-2w7vs"
Nov 29 07:26:39 crc kubenswrapper[4731]: I1129 07:26:39.883131 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/961096e3-fc62-4b26-a9de-1036f08b0fa0-httpd-config\") pod \"neutron-558fbdd7b9-2w7vs\" (UID: \"961096e3-fc62-4b26-a9de-1036f08b0fa0\") " pod="openstack/neutron-558fbdd7b9-2w7vs"
Nov 29 07:26:39 crc kubenswrapper[4731]: I1129 07:26:39.896063 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/961096e3-fc62-4b26-a9de-1036f08b0fa0-internal-tls-certs\") pod \"neutron-558fbdd7b9-2w7vs\" (UID: \"961096e3-fc62-4b26-a9de-1036f08b0fa0\") " pod="openstack/neutron-558fbdd7b9-2w7vs"
Nov 29 07:26:39 crc kubenswrapper[4731]: I1129 07:26:39.897523 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkrss\" (UniqueName: \"kubernetes.io/projected/961096e3-fc62-4b26-a9de-1036f08b0fa0-kube-api-access-jkrss\") pod \"neutron-558fbdd7b9-2w7vs\" (UID: \"961096e3-fc62-4b26-a9de-1036f08b0fa0\") " pod="openstack/neutron-558fbdd7b9-2w7vs"
Nov 29 07:26:40 crc kubenswrapper[4731]: I1129 07:26:40.000955 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-84cd78f644-7wncn"
Nov 29 07:26:40 crc kubenswrapper[4731]: I1129 07:26:40.112950 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-558fbdd7b9-2w7vs"
Nov 29 07:26:40 crc kubenswrapper[4731]: I1129 07:26:40.339164 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 29 07:26:40 crc kubenswrapper[4731]: I1129 07:26:40.377119 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93f84d51-daf8-4c30-ba2c-e5d8aff3432c-scripts\") pod \"93f84d51-daf8-4c30-ba2c-e5d8aff3432c\" (UID: \"93f84d51-daf8-4c30-ba2c-e5d8aff3432c\") "
Nov 29 07:26:40 crc kubenswrapper[4731]: I1129 07:26:40.377184 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/93f84d51-daf8-4c30-ba2c-e5d8aff3432c-sg-core-conf-yaml\") pod \"93f84d51-daf8-4c30-ba2c-e5d8aff3432c\" (UID: \"93f84d51-daf8-4c30-ba2c-e5d8aff3432c\") "
Nov 29 07:26:40 crc kubenswrapper[4731]: I1129 07:26:40.377257 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93f84d51-daf8-4c30-ba2c-e5d8aff3432c-config-data\") pod \"93f84d51-daf8-4c30-ba2c-e5d8aff3432c\" (UID: \"93f84d51-daf8-4c30-ba2c-e5d8aff3432c\") "
Nov 29 07:26:40 crc kubenswrapper[4731]: I1129 07:26:40.377298 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/93f84d51-daf8-4c30-ba2c-e5d8aff3432c-run-httpd\") pod \"93f84d51-daf8-4c30-ba2c-e5d8aff3432c\" (UID: \"93f84d51-daf8-4c30-ba2c-e5d8aff3432c\") "
Nov 29 07:26:40 crc kubenswrapper[4731]: I1129 07:26:40.377333 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93f84d51-daf8-4c30-ba2c-e5d8aff3432c-combined-ca-bundle\") pod \"93f84d51-daf8-4c30-ba2c-e5d8aff3432c\" (UID: \"93f84d51-daf8-4c30-ba2c-e5d8aff3432c\") "
Nov 29 07:26:40 crc kubenswrapper[4731]: I1129 07:26:40.377398 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qhhpr\" (UniqueName: \"kubernetes.io/projected/93f84d51-daf8-4c30-ba2c-e5d8aff3432c-kube-api-access-qhhpr\") pod \"93f84d51-daf8-4c30-ba2c-e5d8aff3432c\" (UID: \"93f84d51-daf8-4c30-ba2c-e5d8aff3432c\") "
Nov 29 07:26:40 crc kubenswrapper[4731]: I1129 07:26:40.377445 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/93f84d51-daf8-4c30-ba2c-e5d8aff3432c-log-httpd\") pod \"93f84d51-daf8-4c30-ba2c-e5d8aff3432c\" (UID: \"93f84d51-daf8-4c30-ba2c-e5d8aff3432c\") "
Nov 29 07:26:40 crc kubenswrapper[4731]: I1129 07:26:40.378252 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/93f84d51-daf8-4c30-ba2c-e5d8aff3432c-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "93f84d51-daf8-4c30-ba2c-e5d8aff3432c" (UID: "93f84d51-daf8-4c30-ba2c-e5d8aff3432c"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 29 07:26:40 crc kubenswrapper[4731]: I1129 07:26:40.378533 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/93f84d51-daf8-4c30-ba2c-e5d8aff3432c-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "93f84d51-daf8-4c30-ba2c-e5d8aff3432c" (UID: "93f84d51-daf8-4c30-ba2c-e5d8aff3432c"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 29 07:26:40 crc kubenswrapper[4731]: I1129 07:26:40.440861 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93f84d51-daf8-4c30-ba2c-e5d8aff3432c-kube-api-access-qhhpr" (OuterVolumeSpecName: "kube-api-access-qhhpr") pod "93f84d51-daf8-4c30-ba2c-e5d8aff3432c" (UID: "93f84d51-daf8-4c30-ba2c-e5d8aff3432c"). InnerVolumeSpecName "kube-api-access-qhhpr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 29 07:26:40 crc kubenswrapper[4731]: I1129 07:26:40.462055 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93f84d51-daf8-4c30-ba2c-e5d8aff3432c-scripts" (OuterVolumeSpecName: "scripts") pod "93f84d51-daf8-4c30-ba2c-e5d8aff3432c" (UID: "93f84d51-daf8-4c30-ba2c-e5d8aff3432c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:26:40 crc kubenswrapper[4731]: I1129 07:26:40.462815 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93f84d51-daf8-4c30-ba2c-e5d8aff3432c-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "93f84d51-daf8-4c30-ba2c-e5d8aff3432c" (UID: "93f84d51-daf8-4c30-ba2c-e5d8aff3432c"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:26:40 crc kubenswrapper[4731]: I1129 07:26:40.493464 4731 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/93f84d51-daf8-4c30-ba2c-e5d8aff3432c-run-httpd\") on node \"crc\" DevicePath \"\""
Nov 29 07:26:40 crc kubenswrapper[4731]: I1129 07:26:40.493537 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qhhpr\" (UniqueName: \"kubernetes.io/projected/93f84d51-daf8-4c30-ba2c-e5d8aff3432c-kube-api-access-qhhpr\") on node \"crc\" DevicePath \"\""
Nov 29 07:26:40 crc kubenswrapper[4731]: I1129 07:26:40.493668 4731 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/93f84d51-daf8-4c30-ba2c-e5d8aff3432c-log-httpd\") on node \"crc\" DevicePath \"\""
Nov 29 07:26:40 crc kubenswrapper[4731]: I1129 07:26:40.493681 4731 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93f84d51-daf8-4c30-ba2c-e5d8aff3432c-scripts\") on node \"crc\" DevicePath \"\""
Nov 29 07:26:40 crc kubenswrapper[4731]: I1129 07:26:40.493693 4731 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/93f84d51-daf8-4c30-ba2c-e5d8aff3432c-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Nov 29 07:26:40 crc kubenswrapper[4731]: I1129 07:26:40.515644 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2e6d89c1-88bc-4ac6-815c-e06e157bc096","Type":"ContainerStarted","Data":"f81452d826d86cc133b187c0d298019616b714bedc04f222c8adf5af375586f7"}
Nov 29 07:26:40 crc kubenswrapper[4731]: I1129 07:26:40.520132 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"93f84d51-daf8-4c30-ba2c-e5d8aff3432c","Type":"ContainerDied","Data":"67ddea95ca4c467d78bbe82656f436a23274e9ce491484f0cca64bb254d3ceb9"}
Nov 29 07:26:40 crc kubenswrapper[4731]: I1129 07:26:40.520188 4731 scope.go:117] "RemoveContainer" containerID="2be644685516713cfb001d79eb74922e37dcf97269991ac10a48b3c405eb1563"
Nov 29 07:26:40 crc kubenswrapper[4731]: I1129 07:26:40.520400 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 29 07:26:40 crc kubenswrapper[4731]: I1129 07:26:40.532913 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93f84d51-daf8-4c30-ba2c-e5d8aff3432c-config-data" (OuterVolumeSpecName: "config-data") pod "93f84d51-daf8-4c30-ba2c-e5d8aff3432c" (UID: "93f84d51-daf8-4c30-ba2c-e5d8aff3432c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:26:40 crc kubenswrapper[4731]: I1129 07:26:40.547513 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"1804b1ab-caff-4bb2-96ba-27f3927d8ac3","Type":"ContainerStarted","Data":"c8cbc18a51e49b0394df63940b2b713b627e6c49b1d44d1c0c8467c5c03f2b87"}
Nov 29 07:26:40 crc kubenswrapper[4731]: I1129 07:26:40.547755 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="1804b1ab-caff-4bb2-96ba-27f3927d8ac3" containerName="cinder-api-log" containerID="cri-o://1cd716aa1c7986de2549fa9bea7a64e655aef899b92de1648e88ddc6e4258192" gracePeriod=30
Nov 29 07:26:40 crc kubenswrapper[4731]: I1129 07:26:40.548057 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0"
Nov 29 07:26:40 crc kubenswrapper[4731]: I1129 07:26:40.548064 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="1804b1ab-caff-4bb2-96ba-27f3927d8ac3" containerName="cinder-api" containerID="cri-o://c8cbc18a51e49b0394df63940b2b713b627e6c49b1d44d1c0c8467c5c03f2b87" gracePeriod=30
Nov 29 07:26:40 crc kubenswrapper[4731]: I1129 07:26:40.571947 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=6.465338984 podStartE2EDuration="7.571923579s" podCreationTimestamp="2025-11-29 07:26:33 +0000 UTC" firstStartedPulling="2025-11-29 07:26:34.307110327 +0000 UTC m=+1233.197471430" lastFinishedPulling="2025-11-29 07:26:35.413694922 +0000 UTC m=+1234.304056025" observedRunningTime="2025-11-29 07:26:40.556123165 +0000 UTC m=+1239.446484268" watchObservedRunningTime="2025-11-29 07:26:40.571923579 +0000 UTC m=+1239.462284672"
Nov 29 07:26:40 crc kubenswrapper[4731]: I1129 07:26:40.585723 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93f84d51-daf8-4c30-ba2c-e5d8aff3432c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "93f84d51-daf8-4c30-ba2c-e5d8aff3432c" (UID: "93f84d51-daf8-4c30-ba2c-e5d8aff3432c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:26:40 crc kubenswrapper[4731]: I1129 07:26:40.587781 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=7.587759525 podStartE2EDuration="7.587759525s" podCreationTimestamp="2025-11-29 07:26:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:26:40.578988782 +0000 UTC m=+1239.469349885" watchObservedRunningTime="2025-11-29 07:26:40.587759525 +0000 UTC m=+1239.478120628"
Nov 29 07:26:40 crc kubenswrapper[4731]: I1129 07:26:40.595185 4731 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93f84d51-daf8-4c30-ba2c-e5d8aff3432c-config-data\") on node \"crc\" DevicePath \"\""
Nov 29 07:26:40 crc kubenswrapper[4731]: I1129 07:26:40.596220 4731 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93f84d51-daf8-4c30-ba2c-e5d8aff3432c-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 29 07:26:40 crc kubenswrapper[4731]: I1129 07:26:40.605035 4731 scope.go:117] "RemoveContainer" containerID="efc3c73ee1eb2172d2e8d30dc832513f4552861add81034a4aa6c8e1afe42474"
Nov 29 07:26:40 crc kubenswrapper[4731]: I1129 07:26:40.639477 4731 scope.go:117] "RemoveContainer" containerID="1eeb175d06560dac595e603e5a440b5bc074e50a82fcd13e045b0876b3180be8"
Nov 29 07:26:40 crc kubenswrapper[4731]: I1129 07:26:40.701260 4731 scope.go:117] "RemoveContainer" containerID="80484dabef96a2b4305c712d0f5c21f7e5f78598851160f53d0ecdd920e12b6c"
Nov 29 07:26:40 crc kubenswrapper[4731]: I1129 07:26:40.783089 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6c9b78b974-grr5d"
Nov 29 07:26:40 crc kubenswrapper[4731]: I1129 07:26:40.814903 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-558fbdd7b9-2w7vs"]
Nov 29 07:26:40 crc kubenswrapper[4731]: I1129 07:26:40.872391 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-78dd89995d-p2zx6"]
Nov 29 07:26:40 crc kubenswrapper[4731]: I1129 07:26:40.872662 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-78dd89995d-p2zx6" podUID="08021cda-119f-413c-86ef-ef64660e60bb" containerName="barbican-api-log" containerID="cri-o://95de69abac4569052b56b7c30306f1a8741f5d4d25d6219e6c9179f152d923dc" gracePeriod=30
Nov 29 07:26:40 crc kubenswrapper[4731]: I1129 07:26:40.873226 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-78dd89995d-p2zx6" podUID="08021cda-119f-413c-86ef-ef64660e60bb" containerName="barbican-api" containerID="cri-o://b27838b74420fa94af907398ba11f5743c53de288eefaa06cc51df051e6e8e36" gracePeriod=30
Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.180028 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.212864 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.232636 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Nov 29 07:26:41 crc kubenswrapper[4731]: E1129 07:26:41.233205 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93f84d51-daf8-4c30-ba2c-e5d8aff3432c" containerName="ceilometer-central-agent"
Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.233232 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="93f84d51-daf8-4c30-ba2c-e5d8aff3432c" containerName="ceilometer-central-agent"
Nov 29 07:26:41 crc kubenswrapper[4731]: E1129 07:26:41.233277 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93f84d51-daf8-4c30-ba2c-e5d8aff3432c" containerName="ceilometer-notification-agent"
Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.233288 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="93f84d51-daf8-4c30-ba2c-e5d8aff3432c" containerName="ceilometer-notification-agent"
Nov 29 07:26:41 crc kubenswrapper[4731]: E1129 07:26:41.233318 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93f84d51-daf8-4c30-ba2c-e5d8aff3432c" containerName="proxy-httpd"
Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.233326 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="93f84d51-daf8-4c30-ba2c-e5d8aff3432c" containerName="proxy-httpd"
Nov 29 07:26:41 crc kubenswrapper[4731]: E1129 07:26:41.233336 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93f84d51-daf8-4c30-ba2c-e5d8aff3432c" containerName="sg-core"
Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.233343 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="93f84d51-daf8-4c30-ba2c-e5d8aff3432c" containerName="sg-core"
Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.233589 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="93f84d51-daf8-4c30-ba2c-e5d8aff3432c" containerName="ceilometer-central-agent"
Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.233617 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="93f84d51-daf8-4c30-ba2c-e5d8aff3432c" containerName="ceilometer-notification-agent"
Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.233638 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="93f84d51-daf8-4c30-ba2c-e5d8aff3432c" containerName="sg-core"
Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.233651 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="93f84d51-daf8-4c30-ba2c-e5d8aff3432c" containerName="proxy-httpd"
Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.235820 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.239816 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.239907 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.254147 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.321283 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6px9\" (UniqueName: \"kubernetes.io/projected/ad9b3a1d-2698-405e-b94a-45d96efd0400-kube-api-access-v6px9\") pod \"ceilometer-0\" (UID: \"ad9b3a1d-2698-405e-b94a-45d96efd0400\") " pod="openstack/ceilometer-0"
Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.322096 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ad9b3a1d-2698-405e-b94a-45d96efd0400-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ad9b3a1d-2698-405e-b94a-45d96efd0400\") " pod="openstack/ceilometer-0"
Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.326136 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ad9b3a1d-2698-405e-b94a-45d96efd0400-log-httpd\") pod \"ceilometer-0\" (UID: \"ad9b3a1d-2698-405e-b94a-45d96efd0400\") " pod="openstack/ceilometer-0"
Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.326200 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ad9b3a1d-2698-405e-b94a-45d96efd0400-run-httpd\") pod \"ceilometer-0\" (UID: \"ad9b3a1d-2698-405e-b94a-45d96efd0400\") " pod="openstack/ceilometer-0"
Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.326242 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad9b3a1d-2698-405e-b94a-45d96efd0400-config-data\") pod \"ceilometer-0\" (UID: \"ad9b3a1d-2698-405e-b94a-45d96efd0400\") " pod="openstack/ceilometer-0"
Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.326267 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad9b3a1d-2698-405e-b94a-45d96efd0400-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ad9b3a1d-2698-405e-b94a-45d96efd0400\") " pod="openstack/ceilometer-0"
Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.326337 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad9b3a1d-2698-405e-b94a-45d96efd0400-scripts\") pod \"ceilometer-0\" (UID: \"ad9b3a1d-2698-405e-b94a-45d96efd0400\") " pod="openstack/ceilometer-0"
Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.384946 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.428399 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1804b1ab-caff-4bb2-96ba-27f3927d8ac3-combined-ca-bundle\") pod \"1804b1ab-caff-4bb2-96ba-27f3927d8ac3\" (UID: \"1804b1ab-caff-4bb2-96ba-27f3927d8ac3\") "
Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.428475 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v2swz\" (UniqueName: \"kubernetes.io/projected/1804b1ab-caff-4bb2-96ba-27f3927d8ac3-kube-api-access-v2swz\") pod \"1804b1ab-caff-4bb2-96ba-27f3927d8ac3\" (UID: \"1804b1ab-caff-4bb2-96ba-27f3927d8ac3\") "
Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.428688 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1804b1ab-caff-4bb2-96ba-27f3927d8ac3-logs\") pod \"1804b1ab-caff-4bb2-96ba-27f3927d8ac3\" (UID: \"1804b1ab-caff-4bb2-96ba-27f3927d8ac3\") "
Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.428826 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1804b1ab-caff-4bb2-96ba-27f3927d8ac3-config-data-custom\") pod \"1804b1ab-caff-4bb2-96ba-27f3927d8ac3\" (UID: \"1804b1ab-caff-4bb2-96ba-27f3927d8ac3\") "
Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.428852 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1804b1ab-caff-4bb2-96ba-27f3927d8ac3-scripts\") pod \"1804b1ab-caff-4bb2-96ba-27f3927d8ac3\" (UID: \"1804b1ab-caff-4bb2-96ba-27f3927d8ac3\") "
Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.428902 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1804b1ab-caff-4bb2-96ba-27f3927d8ac3-etc-machine-id\") pod \"1804b1ab-caff-4bb2-96ba-27f3927d8ac3\" (UID: \"1804b1ab-caff-4bb2-96ba-27f3927d8ac3\") "
Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.428950 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1804b1ab-caff-4bb2-96ba-27f3927d8ac3-config-data\") pod \"1804b1ab-caff-4bb2-96ba-27f3927d8ac3\" (UID: \"1804b1ab-caff-4bb2-96ba-27f3927d8ac3\") "
Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.430240 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1804b1ab-caff-4bb2-96ba-27f3927d8ac3-logs" (OuterVolumeSpecName: "logs") pod "1804b1ab-caff-4bb2-96ba-27f3927d8ac3" (UID: "1804b1ab-caff-4bb2-96ba-27f3927d8ac3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.430716 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ad9b3a1d-2698-405e-b94a-45d96efd0400-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ad9b3a1d-2698-405e-b94a-45d96efd0400\") " pod="openstack/ceilometer-0"
Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.430791 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ad9b3a1d-2698-405e-b94a-45d96efd0400-log-httpd\") pod \"ceilometer-0\" (UID: \"ad9b3a1d-2698-405e-b94a-45d96efd0400\") " pod="openstack/ceilometer-0"
Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.430851 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ad9b3a1d-2698-405e-b94a-45d96efd0400-run-httpd\") pod \"ceilometer-0\" (UID: \"ad9b3a1d-2698-405e-b94a-45d96efd0400\") " pod="openstack/ceilometer-0"
Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.430897 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad9b3a1d-2698-405e-b94a-45d96efd0400-config-data\") pod \"ceilometer-0\" (UID: \"ad9b3a1d-2698-405e-b94a-45d96efd0400\") " pod="openstack/ceilometer-0"
Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.430922 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad9b3a1d-2698-405e-b94a-45d96efd0400-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ad9b3a1d-2698-405e-b94a-45d96efd0400\") " pod="openstack/ceilometer-0"
Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.430989 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad9b3a1d-2698-405e-b94a-45d96efd0400-scripts\") pod \"ceilometer-0\" (UID: \"ad9b3a1d-2698-405e-b94a-45d96efd0400\") " pod="openstack/ceilometer-0"
Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.431056 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6px9\" (UniqueName: \"kubernetes.io/projected/ad9b3a1d-2698-405e-b94a-45d96efd0400-kube-api-access-v6px9\") pod \"ceilometer-0\" (UID: \"ad9b3a1d-2698-405e-b94a-45d96efd0400\") " pod="openstack/ceilometer-0"
Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.431155 4731 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1804b1ab-caff-4bb2-96ba-27f3927d8ac3-logs\") on node \"crc\" DevicePath \"\""
Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.431581 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1804b1ab-caff-4bb2-96ba-27f3927d8ac3-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "1804b1ab-caff-4bb2-96ba-27f3927d8ac3" (UID: "1804b1ab-caff-4bb2-96ba-27f3927d8ac3"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.433291 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ad9b3a1d-2698-405e-b94a-45d96efd0400-run-httpd\") pod \"ceilometer-0\" (UID: \"ad9b3a1d-2698-405e-b94a-45d96efd0400\") " pod="openstack/ceilometer-0"
Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.435308 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ad9b3a1d-2698-405e-b94a-45d96efd0400-log-httpd\") pod \"ceilometer-0\" (UID: \"ad9b3a1d-2698-405e-b94a-45d96efd0400\") " pod="openstack/ceilometer-0"
Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.446636 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1804b1ab-caff-4bb2-96ba-27f3927d8ac3-scripts" (OuterVolumeSpecName: "scripts") pod "1804b1ab-caff-4bb2-96ba-27f3927d8ac3" (UID: "1804b1ab-caff-4bb2-96ba-27f3927d8ac3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.461851 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ad9b3a1d-2698-405e-b94a-45d96efd0400-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ad9b3a1d-2698-405e-b94a-45d96efd0400\") " pod="openstack/ceilometer-0"
Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.464544 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1804b1ab-caff-4bb2-96ba-27f3927d8ac3-kube-api-access-v2swz" (OuterVolumeSpecName: "kube-api-access-v2swz") pod "1804b1ab-caff-4bb2-96ba-27f3927d8ac3" (UID: "1804b1ab-caff-4bb2-96ba-27f3927d8ac3"). InnerVolumeSpecName "kube-api-access-v2swz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.464825 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad9b3a1d-2698-405e-b94a-45d96efd0400-scripts\") pod \"ceilometer-0\" (UID: \"ad9b3a1d-2698-405e-b94a-45d96efd0400\") " pod="openstack/ceilometer-0"
Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.465701 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad9b3a1d-2698-405e-b94a-45d96efd0400-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ad9b3a1d-2698-405e-b94a-45d96efd0400\") " pod="openstack/ceilometer-0"
Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.470201 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1804b1ab-caff-4bb2-96ba-27f3927d8ac3-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "1804b1ab-caff-4bb2-96ba-27f3927d8ac3" (UID: "1804b1ab-caff-4bb2-96ba-27f3927d8ac3"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.479450 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6px9\" (UniqueName: \"kubernetes.io/projected/ad9b3a1d-2698-405e-b94a-45d96efd0400-kube-api-access-v6px9\") pod \"ceilometer-0\" (UID: \"ad9b3a1d-2698-405e-b94a-45d96efd0400\") " pod="openstack/ceilometer-0"
Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.480523 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad9b3a1d-2698-405e-b94a-45d96efd0400-config-data\") pod \"ceilometer-0\" (UID: \"ad9b3a1d-2698-405e-b94a-45d96efd0400\") " pod="openstack/ceilometer-0"
Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.502634 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1804b1ab-caff-4bb2-96ba-27f3927d8ac3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1804b1ab-caff-4bb2-96ba-27f3927d8ac3" (UID: "1804b1ab-caff-4bb2-96ba-27f3927d8ac3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.532640 4731 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1804b1ab-caff-4bb2-96ba-27f3927d8ac3-etc-machine-id\") on node \"crc\" DevicePath \"\""
Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.532686 4731 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1804b1ab-caff-4bb2-96ba-27f3927d8ac3-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.532698 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v2swz\" (UniqueName: \"kubernetes.io/projected/1804b1ab-caff-4bb2-96ba-27f3927d8ac3-kube-api-access-v2swz\") on node \"crc\" DevicePath \"\""
Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.532711 4731 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1804b1ab-caff-4bb2-96ba-27f3927d8ac3-config-data-custom\") on node \"crc\" DevicePath \"\""
Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.532721 4731 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1804b1ab-caff-4bb2-96ba-27f3927d8ac3-scripts\") on node \"crc\" DevicePath \"\""
Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.555782 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1804b1ab-caff-4bb2-96ba-27f3927d8ac3-config-data" (OuterVolumeSpecName: "config-data") pod "1804b1ab-caff-4bb2-96ba-27f3927d8ac3" (UID: "1804b1ab-caff-4bb2-96ba-27f3927d8ac3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.562860 4731 generic.go:334] "Generic (PLEG): container finished" podID="08021cda-119f-413c-86ef-ef64660e60bb" containerID="95de69abac4569052b56b7c30306f1a8741f5d4d25d6219e6c9179f152d923dc" exitCode=143
Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.562937 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-78dd89995d-p2zx6" event={"ID":"08021cda-119f-413c-86ef-ef64660e60bb","Type":"ContainerDied","Data":"95de69abac4569052b56b7c30306f1a8741f5d4d25d6219e6c9179f152d923dc"}
Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.569022 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-558fbdd7b9-2w7vs" event={"ID":"961096e3-fc62-4b26-a9de-1036f08b0fa0","Type":"ContainerStarted","Data":"719c224cd5a83c665ac62935cacfc11ab910e4129200f98bcbd940866abf4ed3"}
Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.569102 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-558fbdd7b9-2w7vs" event={"ID":"961096e3-fc62-4b26-a9de-1036f08b0fa0","Type":"ContainerStarted","Data":"1e87ba09f960d082822e44e94d22f15dd2dd30796ab5cc27b9660a2cd5aac299"}
Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.573260 4731 generic.go:334] "Generic (PLEG): container finished" podID="1804b1ab-caff-4bb2-96ba-27f3927d8ac3" containerID="c8cbc18a51e49b0394df63940b2b713b627e6c49b1d44d1c0c8467c5c03f2b87" exitCode=0
Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.573293 4731 generic.go:334] "Generic (PLEG): container finished" podID="1804b1ab-caff-4bb2-96ba-27f3927d8ac3" containerID="1cd716aa1c7986de2549fa9bea7a64e655aef899b92de1648e88ddc6e4258192" exitCode=143
Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.573334 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.573363 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"1804b1ab-caff-4bb2-96ba-27f3927d8ac3","Type":"ContainerDied","Data":"c8cbc18a51e49b0394df63940b2b713b627e6c49b1d44d1c0c8467c5c03f2b87"}
Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.573404 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"1804b1ab-caff-4bb2-96ba-27f3927d8ac3","Type":"ContainerDied","Data":"1cd716aa1c7986de2549fa9bea7a64e655aef899b92de1648e88ddc6e4258192"}
Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.573416 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"1804b1ab-caff-4bb2-96ba-27f3927d8ac3","Type":"ContainerDied","Data":"ec6bbd347876cd43c9925f6bf72a956031ff9cbcac08e60913a528724bb412a1"}
Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.573433 4731 scope.go:117] "RemoveContainer" containerID="c8cbc18a51e49b0394df63940b2b713b627e6c49b1d44d1c0c8467c5c03f2b87"
Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.583126 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.635592 4731 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1804b1ab-caff-4bb2-96ba-27f3927d8ac3-config-data\") on node \"crc\" DevicePath \"\""
Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.636762 4731 scope.go:117] "RemoveContainer" containerID="1cd716aa1c7986de2549fa9bea7a64e655aef899b92de1648e88ddc6e4258192"
Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.637097 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.652286 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"]
Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.665638 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"]
Nov 29 07:26:41 crc kubenswrapper[4731]: E1129 07:26:41.666166 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1804b1ab-caff-4bb2-96ba-27f3927d8ac3" containerName="cinder-api"
Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.666192 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="1804b1ab-caff-4bb2-96ba-27f3927d8ac3" containerName="cinder-api"
Nov 29 07:26:41 crc kubenswrapper[4731]: E1129 07:26:41.666218 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1804b1ab-caff-4bb2-96ba-27f3927d8ac3" containerName="cinder-api-log"
Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.666227 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="1804b1ab-caff-4bb2-96ba-27f3927d8ac3" containerName="cinder-api-log"
Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.666480 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="1804b1ab-caff-4bb2-96ba-27f3927d8ac3" containerName="cinder-api-log"
Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.666511 4731
memory_manager.go:354] "RemoveStaleState removing state" podUID="1804b1ab-caff-4bb2-96ba-27f3927d8ac3" containerName="cinder-api" Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.667952 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.675493 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.675859 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.676016 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.677050 4731 scope.go:117] "RemoveContainer" containerID="c8cbc18a51e49b0394df63940b2b713b627e6c49b1d44d1c0c8467c5c03f2b87" Nov 29 07:26:41 crc kubenswrapper[4731]: E1129 07:26:41.690422 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8cbc18a51e49b0394df63940b2b713b627e6c49b1d44d1c0c8467c5c03f2b87\": container with ID starting with c8cbc18a51e49b0394df63940b2b713b627e6c49b1d44d1c0c8467c5c03f2b87 not found: ID does not exist" containerID="c8cbc18a51e49b0394df63940b2b713b627e6c49b1d44d1c0c8467c5c03f2b87" Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.690768 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8cbc18a51e49b0394df63940b2b713b627e6c49b1d44d1c0c8467c5c03f2b87"} err="failed to get container status \"c8cbc18a51e49b0394df63940b2b713b627e6c49b1d44d1c0c8467c5c03f2b87\": rpc error: code = NotFound desc = could not find container \"c8cbc18a51e49b0394df63940b2b713b627e6c49b1d44d1c0c8467c5c03f2b87\": container with ID starting with 
c8cbc18a51e49b0394df63940b2b713b627e6c49b1d44d1c0c8467c5c03f2b87 not found: ID does not exist" Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.690880 4731 scope.go:117] "RemoveContainer" containerID="1cd716aa1c7986de2549fa9bea7a64e655aef899b92de1648e88ddc6e4258192" Nov 29 07:26:41 crc kubenswrapper[4731]: E1129 07:26:41.698107 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1cd716aa1c7986de2549fa9bea7a64e655aef899b92de1648e88ddc6e4258192\": container with ID starting with 1cd716aa1c7986de2549fa9bea7a64e655aef899b92de1648e88ddc6e4258192 not found: ID does not exist" containerID="1cd716aa1c7986de2549fa9bea7a64e655aef899b92de1648e88ddc6e4258192" Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.698277 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1cd716aa1c7986de2549fa9bea7a64e655aef899b92de1648e88ddc6e4258192"} err="failed to get container status \"1cd716aa1c7986de2549fa9bea7a64e655aef899b92de1648e88ddc6e4258192\": rpc error: code = NotFound desc = could not find container \"1cd716aa1c7986de2549fa9bea7a64e655aef899b92de1648e88ddc6e4258192\": container with ID starting with 1cd716aa1c7986de2549fa9bea7a64e655aef899b92de1648e88ddc6e4258192 not found: ID does not exist" Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.698385 4731 scope.go:117] "RemoveContainer" containerID="c8cbc18a51e49b0394df63940b2b713b627e6c49b1d44d1c0c8467c5c03f2b87" Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.700418 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8cbc18a51e49b0394df63940b2b713b627e6c49b1d44d1c0c8467c5c03f2b87"} err="failed to get container status \"c8cbc18a51e49b0394df63940b2b713b627e6c49b1d44d1c0c8467c5c03f2b87\": rpc error: code = NotFound desc = could not find container \"c8cbc18a51e49b0394df63940b2b713b627e6c49b1d44d1c0c8467c5c03f2b87\": container with ID 
starting with c8cbc18a51e49b0394df63940b2b713b627e6c49b1d44d1c0c8467c5c03f2b87 not found: ID does not exist" Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.700554 4731 scope.go:117] "RemoveContainer" containerID="1cd716aa1c7986de2549fa9bea7a64e655aef899b92de1648e88ddc6e4258192" Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.705781 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1cd716aa1c7986de2549fa9bea7a64e655aef899b92de1648e88ddc6e4258192"} err="failed to get container status \"1cd716aa1c7986de2549fa9bea7a64e655aef899b92de1648e88ddc6e4258192\": rpc error: code = NotFound desc = could not find container \"1cd716aa1c7986de2549fa9bea7a64e655aef899b92de1648e88ddc6e4258192\": container with ID starting with 1cd716aa1c7986de2549fa9bea7a64e655aef899b92de1648e88ddc6e4258192 not found: ID does not exist" Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.713869 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.741055 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/585d388f-8639-4f82-815c-f500254f0169-config-data\") pod \"cinder-api-0\" (UID: \"585d388f-8639-4f82-815c-f500254f0169\") " pod="openstack/cinder-api-0" Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.741140 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxhpt\" (UniqueName: \"kubernetes.io/projected/585d388f-8639-4f82-815c-f500254f0169-kube-api-access-fxhpt\") pod \"cinder-api-0\" (UID: \"585d388f-8639-4f82-815c-f500254f0169\") " pod="openstack/cinder-api-0" Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.741194 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/585d388f-8639-4f82-815c-f500254f0169-etc-machine-id\") pod \"cinder-api-0\" (UID: \"585d388f-8639-4f82-815c-f500254f0169\") " pod="openstack/cinder-api-0" Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.741222 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/585d388f-8639-4f82-815c-f500254f0169-logs\") pod \"cinder-api-0\" (UID: \"585d388f-8639-4f82-815c-f500254f0169\") " pod="openstack/cinder-api-0" Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.741254 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/585d388f-8639-4f82-815c-f500254f0169-config-data-custom\") pod \"cinder-api-0\" (UID: \"585d388f-8639-4f82-815c-f500254f0169\") " pod="openstack/cinder-api-0" Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.741320 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/585d388f-8639-4f82-815c-f500254f0169-public-tls-certs\") pod \"cinder-api-0\" (UID: \"585d388f-8639-4f82-815c-f500254f0169\") " pod="openstack/cinder-api-0" Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.741360 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/585d388f-8639-4f82-815c-f500254f0169-scripts\") pod \"cinder-api-0\" (UID: \"585d388f-8639-4f82-815c-f500254f0169\") " pod="openstack/cinder-api-0" Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.741436 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/585d388f-8639-4f82-815c-f500254f0169-internal-tls-certs\") pod \"cinder-api-0\" (UID: 
\"585d388f-8639-4f82-815c-f500254f0169\") " pod="openstack/cinder-api-0" Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.741525 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/585d388f-8639-4f82-815c-f500254f0169-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"585d388f-8639-4f82-815c-f500254f0169\") " pod="openstack/cinder-api-0" Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.843314 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/585d388f-8639-4f82-815c-f500254f0169-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"585d388f-8639-4f82-815c-f500254f0169\") " pod="openstack/cinder-api-0" Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.843437 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/585d388f-8639-4f82-815c-f500254f0169-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"585d388f-8639-4f82-815c-f500254f0169\") " pod="openstack/cinder-api-0" Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.843490 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/585d388f-8639-4f82-815c-f500254f0169-config-data\") pod \"cinder-api-0\" (UID: \"585d388f-8639-4f82-815c-f500254f0169\") " pod="openstack/cinder-api-0" Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.843525 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxhpt\" (UniqueName: \"kubernetes.io/projected/585d388f-8639-4f82-815c-f500254f0169-kube-api-access-fxhpt\") pod \"cinder-api-0\" (UID: \"585d388f-8639-4f82-815c-f500254f0169\") " pod="openstack/cinder-api-0" Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.843601 4731 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/585d388f-8639-4f82-815c-f500254f0169-etc-machine-id\") pod \"cinder-api-0\" (UID: \"585d388f-8639-4f82-815c-f500254f0169\") " pod="openstack/cinder-api-0" Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.843641 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/585d388f-8639-4f82-815c-f500254f0169-logs\") pod \"cinder-api-0\" (UID: \"585d388f-8639-4f82-815c-f500254f0169\") " pod="openstack/cinder-api-0" Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.843681 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/585d388f-8639-4f82-815c-f500254f0169-config-data-custom\") pod \"cinder-api-0\" (UID: \"585d388f-8639-4f82-815c-f500254f0169\") " pod="openstack/cinder-api-0" Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.843824 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/585d388f-8639-4f82-815c-f500254f0169-public-tls-certs\") pod \"cinder-api-0\" (UID: \"585d388f-8639-4f82-815c-f500254f0169\") " pod="openstack/cinder-api-0" Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.843843 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/585d388f-8639-4f82-815c-f500254f0169-scripts\") pod \"cinder-api-0\" (UID: \"585d388f-8639-4f82-815c-f500254f0169\") " pod="openstack/cinder-api-0" Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.845509 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/585d388f-8639-4f82-815c-f500254f0169-etc-machine-id\") pod \"cinder-api-0\" (UID: \"585d388f-8639-4f82-815c-f500254f0169\") " 
pod="openstack/cinder-api-0" Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.845977 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/585d388f-8639-4f82-815c-f500254f0169-logs\") pod \"cinder-api-0\" (UID: \"585d388f-8639-4f82-815c-f500254f0169\") " pod="openstack/cinder-api-0" Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.858648 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/585d388f-8639-4f82-815c-f500254f0169-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"585d388f-8639-4f82-815c-f500254f0169\") " pod="openstack/cinder-api-0" Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.860279 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/585d388f-8639-4f82-815c-f500254f0169-scripts\") pod \"cinder-api-0\" (UID: \"585d388f-8639-4f82-815c-f500254f0169\") " pod="openstack/cinder-api-0" Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.862978 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/585d388f-8639-4f82-815c-f500254f0169-config-data\") pod \"cinder-api-0\" (UID: \"585d388f-8639-4f82-815c-f500254f0169\") " pod="openstack/cinder-api-0" Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.867386 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/585d388f-8639-4f82-815c-f500254f0169-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"585d388f-8639-4f82-815c-f500254f0169\") " pod="openstack/cinder-api-0" Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.868346 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/585d388f-8639-4f82-815c-f500254f0169-public-tls-certs\") pod 
\"cinder-api-0\" (UID: \"585d388f-8639-4f82-815c-f500254f0169\") " pod="openstack/cinder-api-0" Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.869531 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/585d388f-8639-4f82-815c-f500254f0169-config-data-custom\") pod \"cinder-api-0\" (UID: \"585d388f-8639-4f82-815c-f500254f0169\") " pod="openstack/cinder-api-0" Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.869704 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1804b1ab-caff-4bb2-96ba-27f3927d8ac3" path="/var/lib/kubelet/pods/1804b1ab-caff-4bb2-96ba-27f3927d8ac3/volumes" Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.883730 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93f84d51-daf8-4c30-ba2c-e5d8aff3432c" path="/var/lib/kubelet/pods/93f84d51-daf8-4c30-ba2c-e5d8aff3432c/volumes" Nov 29 07:26:41 crc kubenswrapper[4731]: I1129 07:26:41.888346 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxhpt\" (UniqueName: \"kubernetes.io/projected/585d388f-8639-4f82-815c-f500254f0169-kube-api-access-fxhpt\") pod \"cinder-api-0\" (UID: \"585d388f-8639-4f82-815c-f500254f0169\") " pod="openstack/cinder-api-0" Nov 29 07:26:42 crc kubenswrapper[4731]: I1129 07:26:42.047092 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 29 07:26:42 crc kubenswrapper[4731]: I1129 07:26:42.269428 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:26:42 crc kubenswrapper[4731]: W1129 07:26:42.292480 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podad9b3a1d_2698_405e_b94a_45d96efd0400.slice/crio-2ce9effe3d3eb311109fc98cae51a9f7136c2928a5032c0de973c7a0b18d1511 WatchSource:0}: Error finding container 2ce9effe3d3eb311109fc98cae51a9f7136c2928a5032c0de973c7a0b18d1511: Status 404 returned error can't find the container with id 2ce9effe3d3eb311109fc98cae51a9f7136c2928a5032c0de973c7a0b18d1511 Nov 29 07:26:42 crc kubenswrapper[4731]: I1129 07:26:42.586342 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-558fbdd7b9-2w7vs" event={"ID":"961096e3-fc62-4b26-a9de-1036f08b0fa0","Type":"ContainerStarted","Data":"e1f28eaf9c066a4a980fcfae46bc660e153642294f6c7ace41d71eb6f6e93616"} Nov 29 07:26:42 crc kubenswrapper[4731]: I1129 07:26:42.588144 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-558fbdd7b9-2w7vs" Nov 29 07:26:42 crc kubenswrapper[4731]: I1129 07:26:42.592217 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ad9b3a1d-2698-405e-b94a-45d96efd0400","Type":"ContainerStarted","Data":"2ce9effe3d3eb311109fc98cae51a9f7136c2928a5032c0de973c7a0b18d1511"} Nov 29 07:26:42 crc kubenswrapper[4731]: I1129 07:26:42.639170 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-558fbdd7b9-2w7vs" podStartSLOduration=3.639143322 podStartE2EDuration="3.639143322s" podCreationTimestamp="2025-11-29 07:26:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:26:42.624204722 +0000 UTC 
m=+1241.514565825" watchObservedRunningTime="2025-11-29 07:26:42.639143322 +0000 UTC m=+1241.529504425" Nov 29 07:26:42 crc kubenswrapper[4731]: I1129 07:26:42.927618 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-5fcdbcfb48-gmbcm" Nov 29 07:26:42 crc kubenswrapper[4731]: I1129 07:26:42.935340 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-84cd78f644-7wncn" Nov 29 07:26:43 crc kubenswrapper[4731]: I1129 07:26:43.017623 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-84cd78f644-7wncn"] Nov 29 07:26:43 crc kubenswrapper[4731]: I1129 07:26:43.177981 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 29 07:26:43 crc kubenswrapper[4731]: I1129 07:26:43.609371 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"585d388f-8639-4f82-815c-f500254f0169","Type":"ContainerStarted","Data":"3d16a7287aa2fe31d8b8b981f14b73533ec32c73d8d54c9b73c3cee2667eb8d9"} Nov 29 07:26:43 crc kubenswrapper[4731]: I1129 07:26:43.610104 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-84cd78f644-7wncn" podUID="bf7f6cfb-9c72-4be9-9177-cd14712e1c1e" containerName="horizon-log" containerID="cri-o://27fa026eb4be33e0970601908f2bd67b51eec9bb4bd79b5ad9e662b251422727" gracePeriod=30 Nov 29 07:26:43 crc kubenswrapper[4731]: I1129 07:26:43.610280 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-84cd78f644-7wncn" podUID="bf7f6cfb-9c72-4be9-9177-cd14712e1c1e" containerName="horizon" containerID="cri-o://233af86133f61c225ab9848a8308c125fc186329b7b7974a653e06432e81629a" gracePeriod=30 Nov 29 07:26:43 crc kubenswrapper[4731]: I1129 07:26:43.666846 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Nov 29 07:26:43 crc kubenswrapper[4731]: I1129 
07:26:43.948226 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Nov 29 07:26:44 crc kubenswrapper[4731]: I1129 07:26:44.007803 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6578955fd5-ljnvk" Nov 29 07:26:44 crc kubenswrapper[4731]: I1129 07:26:44.074835 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-hvz7k"] Nov 29 07:26:44 crc kubenswrapper[4731]: I1129 07:26:44.075479 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-56df8fb6b7-hvz7k" podUID="895a9751-f534-47b7-8e60-f10a608dd46e" containerName="dnsmasq-dns" containerID="cri-o://fe4d2c837150a4ae0517636ae9cbc6c2e19f532dc3107d1a96b1f6ed4ee240ea" gracePeriod=10 Nov 29 07:26:44 crc kubenswrapper[4731]: I1129 07:26:44.133496 4731 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-78dd89995d-p2zx6" podUID="08021cda-119f-413c-86ef-ef64660e60bb" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.156:9311/healthcheck\": read tcp 10.217.0.2:53884->10.217.0.156:9311: read: connection reset by peer" Nov 29 07:26:44 crc kubenswrapper[4731]: I1129 07:26:44.133523 4731 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-78dd89995d-p2zx6" podUID="08021cda-119f-413c-86ef-ef64660e60bb" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.156:9311/healthcheck\": read tcp 10.217.0.2:53882->10.217.0.156:9311: read: connection reset by peer" Nov 29 07:26:44 crc kubenswrapper[4731]: I1129 07:26:44.628153 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ad9b3a1d-2698-405e-b94a-45d96efd0400","Type":"ContainerStarted","Data":"b0a52e0399e92e5901d134016b38e43a3062ae712d3ebd55795179617041415e"} Nov 29 07:26:44 crc kubenswrapper[4731]: I1129 07:26:44.628746 4731 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ad9b3a1d-2698-405e-b94a-45d96efd0400","Type":"ContainerStarted","Data":"e91a79e3be372608d66f3d15b49b5bc742d17829ae0c02924fc76c5a8bcb4bc0"} Nov 29 07:26:44 crc kubenswrapper[4731]: I1129 07:26:44.640842 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"585d388f-8639-4f82-815c-f500254f0169","Type":"ContainerStarted","Data":"53f6d5007643b976e708b200f0c3b8ba8d65e633e788aec796cb5f9f5eee91df"} Nov 29 07:26:44 crc kubenswrapper[4731]: I1129 07:26:44.647299 4731 generic.go:334] "Generic (PLEG): container finished" podID="08021cda-119f-413c-86ef-ef64660e60bb" containerID="b27838b74420fa94af907398ba11f5743c53de288eefaa06cc51df051e6e8e36" exitCode=0 Nov 29 07:26:44 crc kubenswrapper[4731]: I1129 07:26:44.647387 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-78dd89995d-p2zx6" event={"ID":"08021cda-119f-413c-86ef-ef64660e60bb","Type":"ContainerDied","Data":"b27838b74420fa94af907398ba11f5743c53de288eefaa06cc51df051e6e8e36"} Nov 29 07:26:44 crc kubenswrapper[4731]: I1129 07:26:44.660497 4731 generic.go:334] "Generic (PLEG): container finished" podID="895a9751-f534-47b7-8e60-f10a608dd46e" containerID="fe4d2c837150a4ae0517636ae9cbc6c2e19f532dc3107d1a96b1f6ed4ee240ea" exitCode=0 Nov 29 07:26:44 crc kubenswrapper[4731]: I1129 07:26:44.663471 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-hvz7k" event={"ID":"895a9751-f534-47b7-8e60-f10a608dd46e","Type":"ContainerDied","Data":"fe4d2c837150a4ae0517636ae9cbc6c2e19f532dc3107d1a96b1f6ed4ee240ea"} Nov 29 07:26:44 crc kubenswrapper[4731]: I1129 07:26:44.781959 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 29 07:26:44 crc kubenswrapper[4731]: I1129 07:26:44.821548 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-hvz7k" Nov 29 07:26:44 crc kubenswrapper[4731]: I1129 07:26:44.827634 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-78dd89995d-p2zx6" Nov 29 07:26:44 crc kubenswrapper[4731]: I1129 07:26:44.889069 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08021cda-119f-413c-86ef-ef64660e60bb-config-data\") pod \"08021cda-119f-413c-86ef-ef64660e60bb\" (UID: \"08021cda-119f-413c-86ef-ef64660e60bb\") " Nov 29 07:26:44 crc kubenswrapper[4731]: I1129 07:26:44.889137 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/895a9751-f534-47b7-8e60-f10a608dd46e-ovsdbserver-sb\") pod \"895a9751-f534-47b7-8e60-f10a608dd46e\" (UID: \"895a9751-f534-47b7-8e60-f10a608dd46e\") " Nov 29 07:26:44 crc kubenswrapper[4731]: I1129 07:26:44.889163 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/895a9751-f534-47b7-8e60-f10a608dd46e-ovsdbserver-nb\") pod \"895a9751-f534-47b7-8e60-f10a608dd46e\" (UID: \"895a9751-f534-47b7-8e60-f10a608dd46e\") " Nov 29 07:26:44 crc kubenswrapper[4731]: I1129 07:26:44.889329 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08021cda-119f-413c-86ef-ef64660e60bb-combined-ca-bundle\") pod \"08021cda-119f-413c-86ef-ef64660e60bb\" (UID: \"08021cda-119f-413c-86ef-ef64660e60bb\") " Nov 29 07:26:44 crc kubenswrapper[4731]: I1129 07:26:44.889355 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/895a9751-f534-47b7-8e60-f10a608dd46e-config\") pod \"895a9751-f534-47b7-8e60-f10a608dd46e\" (UID: 
\"895a9751-f534-47b7-8e60-f10a608dd46e\") " Nov 29 07:26:44 crc kubenswrapper[4731]: I1129 07:26:44.889395 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/08021cda-119f-413c-86ef-ef64660e60bb-config-data-custom\") pod \"08021cda-119f-413c-86ef-ef64660e60bb\" (UID: \"08021cda-119f-413c-86ef-ef64660e60bb\") " Nov 29 07:26:44 crc kubenswrapper[4731]: I1129 07:26:44.889429 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mhgtk\" (UniqueName: \"kubernetes.io/projected/895a9751-f534-47b7-8e60-f10a608dd46e-kube-api-access-mhgtk\") pod \"895a9751-f534-47b7-8e60-f10a608dd46e\" (UID: \"895a9751-f534-47b7-8e60-f10a608dd46e\") " Nov 29 07:26:44 crc kubenswrapper[4731]: I1129 07:26:44.889531 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08021cda-119f-413c-86ef-ef64660e60bb-logs\") pod \"08021cda-119f-413c-86ef-ef64660e60bb\" (UID: \"08021cda-119f-413c-86ef-ef64660e60bb\") " Nov 29 07:26:44 crc kubenswrapper[4731]: I1129 07:26:44.889600 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/895a9751-f534-47b7-8e60-f10a608dd46e-dns-swift-storage-0\") pod \"895a9751-f534-47b7-8e60-f10a608dd46e\" (UID: \"895a9751-f534-47b7-8e60-f10a608dd46e\") " Nov 29 07:26:44 crc kubenswrapper[4731]: I1129 07:26:44.889637 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/895a9751-f534-47b7-8e60-f10a608dd46e-dns-svc\") pod \"895a9751-f534-47b7-8e60-f10a608dd46e\" (UID: \"895a9751-f534-47b7-8e60-f10a608dd46e\") " Nov 29 07:26:44 crc kubenswrapper[4731]: I1129 07:26:44.889661 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n5p2g\" (UniqueName: 
\"kubernetes.io/projected/08021cda-119f-413c-86ef-ef64660e60bb-kube-api-access-n5p2g\") pod \"08021cda-119f-413c-86ef-ef64660e60bb\" (UID: \"08021cda-119f-413c-86ef-ef64660e60bb\") " Nov 29 07:26:44 crc kubenswrapper[4731]: I1129 07:26:44.902760 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08021cda-119f-413c-86ef-ef64660e60bb-logs" (OuterVolumeSpecName: "logs") pod "08021cda-119f-413c-86ef-ef64660e60bb" (UID: "08021cda-119f-413c-86ef-ef64660e60bb"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:26:44 crc kubenswrapper[4731]: I1129 07:26:44.921484 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08021cda-119f-413c-86ef-ef64660e60bb-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "08021cda-119f-413c-86ef-ef64660e60bb" (UID: "08021cda-119f-413c-86ef-ef64660e60bb"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:26:44 crc kubenswrapper[4731]: I1129 07:26:44.925366 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/895a9751-f534-47b7-8e60-f10a608dd46e-kube-api-access-mhgtk" (OuterVolumeSpecName: "kube-api-access-mhgtk") pod "895a9751-f534-47b7-8e60-f10a608dd46e" (UID: "895a9751-f534-47b7-8e60-f10a608dd46e"). InnerVolumeSpecName "kube-api-access-mhgtk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:26:44 crc kubenswrapper[4731]: I1129 07:26:44.943390 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08021cda-119f-413c-86ef-ef64660e60bb-kube-api-access-n5p2g" (OuterVolumeSpecName: "kube-api-access-n5p2g") pod "08021cda-119f-413c-86ef-ef64660e60bb" (UID: "08021cda-119f-413c-86ef-ef64660e60bb"). InnerVolumeSpecName "kube-api-access-n5p2g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:26:45 crc kubenswrapper[4731]: I1129 07:26:45.000136 4731 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/08021cda-119f-413c-86ef-ef64660e60bb-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:45 crc kubenswrapper[4731]: I1129 07:26:45.000183 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mhgtk\" (UniqueName: \"kubernetes.io/projected/895a9751-f534-47b7-8e60-f10a608dd46e-kube-api-access-mhgtk\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:45 crc kubenswrapper[4731]: I1129 07:26:45.000199 4731 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08021cda-119f-413c-86ef-ef64660e60bb-logs\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:45 crc kubenswrapper[4731]: I1129 07:26:45.000209 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n5p2g\" (UniqueName: \"kubernetes.io/projected/08021cda-119f-413c-86ef-ef64660e60bb-kube-api-access-n5p2g\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:45 crc kubenswrapper[4731]: I1129 07:26:45.014077 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08021cda-119f-413c-86ef-ef64660e60bb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "08021cda-119f-413c-86ef-ef64660e60bb" (UID: "08021cda-119f-413c-86ef-ef64660e60bb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:26:45 crc kubenswrapper[4731]: I1129 07:26:45.037468 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/895a9751-f534-47b7-8e60-f10a608dd46e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "895a9751-f534-47b7-8e60-f10a608dd46e" (UID: "895a9751-f534-47b7-8e60-f10a608dd46e"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:26:45 crc kubenswrapper[4731]: I1129 07:26:45.039360 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/895a9751-f534-47b7-8e60-f10a608dd46e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "895a9751-f534-47b7-8e60-f10a608dd46e" (UID: "895a9751-f534-47b7-8e60-f10a608dd46e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:26:45 crc kubenswrapper[4731]: I1129 07:26:45.040868 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/895a9751-f534-47b7-8e60-f10a608dd46e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "895a9751-f534-47b7-8e60-f10a608dd46e" (UID: "895a9751-f534-47b7-8e60-f10a608dd46e"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:26:45 crc kubenswrapper[4731]: I1129 07:26:45.041013 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08021cda-119f-413c-86ef-ef64660e60bb-config-data" (OuterVolumeSpecName: "config-data") pod "08021cda-119f-413c-86ef-ef64660e60bb" (UID: "08021cda-119f-413c-86ef-ef64660e60bb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:26:45 crc kubenswrapper[4731]: I1129 07:26:45.043988 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/895a9751-f534-47b7-8e60-f10a608dd46e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "895a9751-f534-47b7-8e60-f10a608dd46e" (UID: "895a9751-f534-47b7-8e60-f10a608dd46e"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:26:45 crc kubenswrapper[4731]: I1129 07:26:45.064271 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/895a9751-f534-47b7-8e60-f10a608dd46e-config" (OuterVolumeSpecName: "config") pod "895a9751-f534-47b7-8e60-f10a608dd46e" (UID: "895a9751-f534-47b7-8e60-f10a608dd46e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:26:45 crc kubenswrapper[4731]: I1129 07:26:45.103139 4731 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/895a9751-f534-47b7-8e60-f10a608dd46e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:45 crc kubenswrapper[4731]: I1129 07:26:45.103201 4731 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/895a9751-f534-47b7-8e60-f10a608dd46e-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:45 crc kubenswrapper[4731]: I1129 07:26:45.103215 4731 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08021cda-119f-413c-86ef-ef64660e60bb-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:45 crc kubenswrapper[4731]: I1129 07:26:45.103226 4731 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/895a9751-f534-47b7-8e60-f10a608dd46e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:45 crc kubenswrapper[4731]: I1129 07:26:45.103239 4731 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/895a9751-f534-47b7-8e60-f10a608dd46e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:45 crc kubenswrapper[4731]: I1129 07:26:45.103311 4731 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/08021cda-119f-413c-86ef-ef64660e60bb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:45 crc kubenswrapper[4731]: I1129 07:26:45.103327 4731 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/895a9751-f534-47b7-8e60-f10a608dd46e-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:45 crc kubenswrapper[4731]: I1129 07:26:45.674383 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-78dd89995d-p2zx6" event={"ID":"08021cda-119f-413c-86ef-ef64660e60bb","Type":"ContainerDied","Data":"c51d16b7e45080c6977a5dc9227d7dc167b0ffa97e87ddc27b4689244d537071"} Nov 29 07:26:45 crc kubenswrapper[4731]: I1129 07:26:45.674804 4731 scope.go:117] "RemoveContainer" containerID="b27838b74420fa94af907398ba11f5743c53de288eefaa06cc51df051e6e8e36" Nov 29 07:26:45 crc kubenswrapper[4731]: I1129 07:26:45.674966 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-78dd89995d-p2zx6" Nov 29 07:26:45 crc kubenswrapper[4731]: I1129 07:26:45.686096 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-hvz7k" Nov 29 07:26:45 crc kubenswrapper[4731]: I1129 07:26:45.686102 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-hvz7k" event={"ID":"895a9751-f534-47b7-8e60-f10a608dd46e","Type":"ContainerDied","Data":"4d939dfa0b549cb5f486f0331c2da87cfe3c72a05046c9fa4250c851d96fb539"} Nov 29 07:26:45 crc kubenswrapper[4731]: I1129 07:26:45.692238 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ad9b3a1d-2698-405e-b94a-45d96efd0400","Type":"ContainerStarted","Data":"1a60dda543bef3c02c14d1a2355f7d2d9cdca5a86d7a91b25c100b231d4256d7"} Nov 29 07:26:45 crc kubenswrapper[4731]: I1129 07:26:45.694669 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"585d388f-8639-4f82-815c-f500254f0169","Type":"ContainerStarted","Data":"29644f65eed3634f410588178c0dbeb323ff19fd22bbb99020f2f8a149df8942"} Nov 29 07:26:45 crc kubenswrapper[4731]: I1129 07:26:45.695083 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Nov 29 07:26:45 crc kubenswrapper[4731]: I1129 07:26:45.695141 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="2e6d89c1-88bc-4ac6-815c-e06e157bc096" containerName="probe" containerID="cri-o://f81452d826d86cc133b187c0d298019616b714bedc04f222c8adf5af375586f7" gracePeriod=30 Nov 29 07:26:45 crc kubenswrapper[4731]: I1129 07:26:45.695087 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="2e6d89c1-88bc-4ac6-815c-e06e157bc096" containerName="cinder-scheduler" containerID="cri-o://61f7f217983ac3d669915a45d126daeb46297ec4ad950481382a221fe3d57066" gracePeriod=30 Nov 29 07:26:45 crc kubenswrapper[4731]: I1129 07:26:45.703840 4731 scope.go:117] "RemoveContainer" 
containerID="95de69abac4569052b56b7c30306f1a8741f5d4d25d6219e6c9179f152d923dc" Nov 29 07:26:45 crc kubenswrapper[4731]: I1129 07:26:45.735875 4731 scope.go:117] "RemoveContainer" containerID="fe4d2c837150a4ae0517636ae9cbc6c2e19f532dc3107d1a96b1f6ed4ee240ea" Nov 29 07:26:45 crc kubenswrapper[4731]: I1129 07:26:45.738636 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.73860216 podStartE2EDuration="4.73860216s" podCreationTimestamp="2025-11-29 07:26:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:26:45.728207501 +0000 UTC m=+1244.618568614" watchObservedRunningTime="2025-11-29 07:26:45.73860216 +0000 UTC m=+1244.628963263" Nov 29 07:26:45 crc kubenswrapper[4731]: I1129 07:26:45.770293 4731 scope.go:117] "RemoveContainer" containerID="2e96fa5d1c2d853dec4e655f3e30d33b9946b56e0446032f1e5eb5eaf12a104e" Nov 29 07:26:45 crc kubenswrapper[4731]: I1129 07:26:45.779824 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-hvz7k"] Nov 29 07:26:45 crc kubenswrapper[4731]: I1129 07:26:45.789386 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-hvz7k"] Nov 29 07:26:45 crc kubenswrapper[4731]: I1129 07:26:45.796983 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-78dd89995d-p2zx6"] Nov 29 07:26:45 crc kubenswrapper[4731]: I1129 07:26:45.805496 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-78dd89995d-p2zx6"] Nov 29 07:26:45 crc kubenswrapper[4731]: I1129 07:26:45.824350 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08021cda-119f-413c-86ef-ef64660e60bb" path="/var/lib/kubelet/pods/08021cda-119f-413c-86ef-ef64660e60bb/volumes" Nov 29 07:26:45 crc kubenswrapper[4731]: I1129 07:26:45.825005 4731 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="895a9751-f534-47b7-8e60-f10a608dd46e" path="/var/lib/kubelet/pods/895a9751-f534-47b7-8e60-f10a608dd46e/volumes" Nov 29 07:26:46 crc kubenswrapper[4731]: I1129 07:26:46.879122 4731 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-84cd78f644-7wncn" podUID="bf7f6cfb-9c72-4be9-9177-cd14712e1c1e" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": read tcp 10.217.0.2:49264->10.217.0.146:8443: read: connection reset by peer" Nov 29 07:26:47 crc kubenswrapper[4731]: I1129 07:26:47.742065 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ad9b3a1d-2698-405e-b94a-45d96efd0400","Type":"ContainerStarted","Data":"98d5fb68ac4f81058db1b8fb8c7d28dbece676e4f45ba00fbb4d44e5acc8f9e4"} Nov 29 07:26:47 crc kubenswrapper[4731]: I1129 07:26:47.743302 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 29 07:26:47 crc kubenswrapper[4731]: I1129 07:26:47.744064 4731 generic.go:334] "Generic (PLEG): container finished" podID="bf7f6cfb-9c72-4be9-9177-cd14712e1c1e" containerID="233af86133f61c225ab9848a8308c125fc186329b7b7974a653e06432e81629a" exitCode=0 Nov 29 07:26:47 crc kubenswrapper[4731]: I1129 07:26:47.744145 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-84cd78f644-7wncn" event={"ID":"bf7f6cfb-9c72-4be9-9177-cd14712e1c1e","Type":"ContainerDied","Data":"233af86133f61c225ab9848a8308c125fc186329b7b7974a653e06432e81629a"} Nov 29 07:26:47 crc kubenswrapper[4731]: I1129 07:26:47.747051 4731 generic.go:334] "Generic (PLEG): container finished" podID="2e6d89c1-88bc-4ac6-815c-e06e157bc096" containerID="f81452d826d86cc133b187c0d298019616b714bedc04f222c8adf5af375586f7" exitCode=0 Nov 29 07:26:47 crc kubenswrapper[4731]: I1129 07:26:47.747117 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"2e6d89c1-88bc-4ac6-815c-e06e157bc096","Type":"ContainerDied","Data":"f81452d826d86cc133b187c0d298019616b714bedc04f222c8adf5af375586f7"} Nov 29 07:26:47 crc kubenswrapper[4731]: I1129 07:26:47.774825 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.414713196 podStartE2EDuration="6.774793269s" podCreationTimestamp="2025-11-29 07:26:41 +0000 UTC" firstStartedPulling="2025-11-29 07:26:42.305422494 +0000 UTC m=+1241.195783597" lastFinishedPulling="2025-11-29 07:26:46.665502567 +0000 UTC m=+1245.555863670" observedRunningTime="2025-11-29 07:26:47.767042786 +0000 UTC m=+1246.657403889" watchObservedRunningTime="2025-11-29 07:26:47.774793269 +0000 UTC m=+1246.665154372" Nov 29 07:26:50 crc kubenswrapper[4731]: I1129 07:26:50.518156 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 29 07:26:50 crc kubenswrapper[4731]: I1129 07:26:50.531392 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2e6d89c1-88bc-4ac6-815c-e06e157bc096-config-data-custom\") pod \"2e6d89c1-88bc-4ac6-815c-e06e157bc096\" (UID: \"2e6d89c1-88bc-4ac6-815c-e06e157bc096\") " Nov 29 07:26:50 crc kubenswrapper[4731]: I1129 07:26:50.531473 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dx869\" (UniqueName: \"kubernetes.io/projected/2e6d89c1-88bc-4ac6-815c-e06e157bc096-kube-api-access-dx869\") pod \"2e6d89c1-88bc-4ac6-815c-e06e157bc096\" (UID: \"2e6d89c1-88bc-4ac6-815c-e06e157bc096\") " Nov 29 07:26:50 crc kubenswrapper[4731]: I1129 07:26:50.531674 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2e6d89c1-88bc-4ac6-815c-e06e157bc096-etc-machine-id\") pod \"2e6d89c1-88bc-4ac6-815c-e06e157bc096\" (UID: 
\"2e6d89c1-88bc-4ac6-815c-e06e157bc096\") " Nov 29 07:26:50 crc kubenswrapper[4731]: I1129 07:26:50.531966 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e6d89c1-88bc-4ac6-815c-e06e157bc096-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "2e6d89c1-88bc-4ac6-815c-e06e157bc096" (UID: "2e6d89c1-88bc-4ac6-815c-e06e157bc096"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:26:50 crc kubenswrapper[4731]: I1129 07:26:50.534976 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e6d89c1-88bc-4ac6-815c-e06e157bc096-combined-ca-bundle\") pod \"2e6d89c1-88bc-4ac6-815c-e06e157bc096\" (UID: \"2e6d89c1-88bc-4ac6-815c-e06e157bc096\") " Nov 29 07:26:50 crc kubenswrapper[4731]: I1129 07:26:50.535025 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e6d89c1-88bc-4ac6-815c-e06e157bc096-config-data\") pod \"2e6d89c1-88bc-4ac6-815c-e06e157bc096\" (UID: \"2e6d89c1-88bc-4ac6-815c-e06e157bc096\") " Nov 29 07:26:50 crc kubenswrapper[4731]: I1129 07:26:50.535173 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e6d89c1-88bc-4ac6-815c-e06e157bc096-scripts\") pod \"2e6d89c1-88bc-4ac6-815c-e06e157bc096\" (UID: \"2e6d89c1-88bc-4ac6-815c-e06e157bc096\") " Nov 29 07:26:50 crc kubenswrapper[4731]: I1129 07:26:50.536293 4731 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2e6d89c1-88bc-4ac6-815c-e06e157bc096-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:50 crc kubenswrapper[4731]: I1129 07:26:50.541741 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/2e6d89c1-88bc-4ac6-815c-e06e157bc096-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "2e6d89c1-88bc-4ac6-815c-e06e157bc096" (UID: "2e6d89c1-88bc-4ac6-815c-e06e157bc096"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:26:50 crc kubenswrapper[4731]: I1129 07:26:50.544734 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e6d89c1-88bc-4ac6-815c-e06e157bc096-kube-api-access-dx869" (OuterVolumeSpecName: "kube-api-access-dx869") pod "2e6d89c1-88bc-4ac6-815c-e06e157bc096" (UID: "2e6d89c1-88bc-4ac6-815c-e06e157bc096"). InnerVolumeSpecName "kube-api-access-dx869". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:26:50 crc kubenswrapper[4731]: I1129 07:26:50.567036 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e6d89c1-88bc-4ac6-815c-e06e157bc096-scripts" (OuterVolumeSpecName: "scripts") pod "2e6d89c1-88bc-4ac6-815c-e06e157bc096" (UID: "2e6d89c1-88bc-4ac6-815c-e06e157bc096"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:26:50 crc kubenswrapper[4731]: I1129 07:26:50.638364 4731 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e6d89c1-88bc-4ac6-815c-e06e157bc096-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:50 crc kubenswrapper[4731]: I1129 07:26:50.638401 4731 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2e6d89c1-88bc-4ac6-815c-e06e157bc096-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:50 crc kubenswrapper[4731]: I1129 07:26:50.638415 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dx869\" (UniqueName: \"kubernetes.io/projected/2e6d89c1-88bc-4ac6-815c-e06e157bc096-kube-api-access-dx869\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:50 crc kubenswrapper[4731]: I1129 07:26:50.645511 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e6d89c1-88bc-4ac6-815c-e06e157bc096-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2e6d89c1-88bc-4ac6-815c-e06e157bc096" (UID: "2e6d89c1-88bc-4ac6-815c-e06e157bc096"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:26:50 crc kubenswrapper[4731]: I1129 07:26:50.678309 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e6d89c1-88bc-4ac6-815c-e06e157bc096-config-data" (OuterVolumeSpecName: "config-data") pod "2e6d89c1-88bc-4ac6-815c-e06e157bc096" (UID: "2e6d89c1-88bc-4ac6-815c-e06e157bc096"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:26:50 crc kubenswrapper[4731]: I1129 07:26:50.744479 4731 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e6d89c1-88bc-4ac6-815c-e06e157bc096-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:50 crc kubenswrapper[4731]: I1129 07:26:50.745037 4731 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e6d89c1-88bc-4ac6-815c-e06e157bc096-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:50 crc kubenswrapper[4731]: I1129 07:26:50.783485 4731 generic.go:334] "Generic (PLEG): container finished" podID="2e6d89c1-88bc-4ac6-815c-e06e157bc096" containerID="61f7f217983ac3d669915a45d126daeb46297ec4ad950481382a221fe3d57066" exitCode=0 Nov 29 07:26:50 crc kubenswrapper[4731]: I1129 07:26:50.783555 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2e6d89c1-88bc-4ac6-815c-e06e157bc096","Type":"ContainerDied","Data":"61f7f217983ac3d669915a45d126daeb46297ec4ad950481382a221fe3d57066"} Nov 29 07:26:50 crc kubenswrapper[4731]: I1129 07:26:50.783616 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2e6d89c1-88bc-4ac6-815c-e06e157bc096","Type":"ContainerDied","Data":"983767b61ca6b4dee50476dbb8c3a21312e23c755e76db549f99086766f0532d"} Nov 29 07:26:50 crc kubenswrapper[4731]: I1129 07:26:50.783641 4731 scope.go:117] "RemoveContainer" containerID="f81452d826d86cc133b187c0d298019616b714bedc04f222c8adf5af375586f7" Nov 29 07:26:50 crc kubenswrapper[4731]: I1129 07:26:50.783845 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 29 07:26:50 crc kubenswrapper[4731]: I1129 07:26:50.811008 4731 scope.go:117] "RemoveContainer" containerID="61f7f217983ac3d669915a45d126daeb46297ec4ad950481382a221fe3d57066" Nov 29 07:26:50 crc kubenswrapper[4731]: I1129 07:26:50.841651 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 29 07:26:50 crc kubenswrapper[4731]: I1129 07:26:50.845828 4731 scope.go:117] "RemoveContainer" containerID="f81452d826d86cc133b187c0d298019616b714bedc04f222c8adf5af375586f7" Nov 29 07:26:50 crc kubenswrapper[4731]: E1129 07:26:50.847004 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f81452d826d86cc133b187c0d298019616b714bedc04f222c8adf5af375586f7\": container with ID starting with f81452d826d86cc133b187c0d298019616b714bedc04f222c8adf5af375586f7 not found: ID does not exist" containerID="f81452d826d86cc133b187c0d298019616b714bedc04f222c8adf5af375586f7" Nov 29 07:26:50 crc kubenswrapper[4731]: I1129 07:26:50.847080 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f81452d826d86cc133b187c0d298019616b714bedc04f222c8adf5af375586f7"} err="failed to get container status \"f81452d826d86cc133b187c0d298019616b714bedc04f222c8adf5af375586f7\": rpc error: code = NotFound desc = could not find container \"f81452d826d86cc133b187c0d298019616b714bedc04f222c8adf5af375586f7\": container with ID starting with f81452d826d86cc133b187c0d298019616b714bedc04f222c8adf5af375586f7 not found: ID does not exist" Nov 29 07:26:50 crc kubenswrapper[4731]: I1129 07:26:50.847125 4731 scope.go:117] "RemoveContainer" containerID="61f7f217983ac3d669915a45d126daeb46297ec4ad950481382a221fe3d57066" Nov 29 07:26:50 crc kubenswrapper[4731]: E1129 07:26:50.850293 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"61f7f217983ac3d669915a45d126daeb46297ec4ad950481382a221fe3d57066\": container with ID starting with 61f7f217983ac3d669915a45d126daeb46297ec4ad950481382a221fe3d57066 not found: ID does not exist" containerID="61f7f217983ac3d669915a45d126daeb46297ec4ad950481382a221fe3d57066" Nov 29 07:26:50 crc kubenswrapper[4731]: I1129 07:26:50.850346 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61f7f217983ac3d669915a45d126daeb46297ec4ad950481382a221fe3d57066"} err="failed to get container status \"61f7f217983ac3d669915a45d126daeb46297ec4ad950481382a221fe3d57066\": rpc error: code = NotFound desc = could not find container \"61f7f217983ac3d669915a45d126daeb46297ec4ad950481382a221fe3d57066\": container with ID starting with 61f7f217983ac3d669915a45d126daeb46297ec4ad950481382a221fe3d57066 not found: ID does not exist" Nov 29 07:26:50 crc kubenswrapper[4731]: I1129 07:26:50.866773 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 29 07:26:50 crc kubenswrapper[4731]: I1129 07:26:50.896319 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Nov 29 07:26:50 crc kubenswrapper[4731]: E1129 07:26:50.896951 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08021cda-119f-413c-86ef-ef64660e60bb" containerName="barbican-api" Nov 29 07:26:50 crc kubenswrapper[4731]: I1129 07:26:50.896968 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="08021cda-119f-413c-86ef-ef64660e60bb" containerName="barbican-api" Nov 29 07:26:50 crc kubenswrapper[4731]: E1129 07:26:50.897014 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e6d89c1-88bc-4ac6-815c-e06e157bc096" containerName="probe" Nov 29 07:26:50 crc kubenswrapper[4731]: I1129 07:26:50.897020 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e6d89c1-88bc-4ac6-815c-e06e157bc096" containerName="probe" Nov 29 07:26:50 crc kubenswrapper[4731]: E1129 
07:26:50.897034 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="895a9751-f534-47b7-8e60-f10a608dd46e" containerName="init" Nov 29 07:26:50 crc kubenswrapper[4731]: I1129 07:26:50.897041 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="895a9751-f534-47b7-8e60-f10a608dd46e" containerName="init" Nov 29 07:26:50 crc kubenswrapper[4731]: E1129 07:26:50.897050 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08021cda-119f-413c-86ef-ef64660e60bb" containerName="barbican-api-log" Nov 29 07:26:50 crc kubenswrapper[4731]: I1129 07:26:50.897056 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="08021cda-119f-413c-86ef-ef64660e60bb" containerName="barbican-api-log" Nov 29 07:26:50 crc kubenswrapper[4731]: E1129 07:26:50.897069 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e6d89c1-88bc-4ac6-815c-e06e157bc096" containerName="cinder-scheduler" Nov 29 07:26:50 crc kubenswrapper[4731]: I1129 07:26:50.897075 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e6d89c1-88bc-4ac6-815c-e06e157bc096" containerName="cinder-scheduler" Nov 29 07:26:50 crc kubenswrapper[4731]: E1129 07:26:50.897094 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="895a9751-f534-47b7-8e60-f10a608dd46e" containerName="dnsmasq-dns" Nov 29 07:26:50 crc kubenswrapper[4731]: I1129 07:26:50.897101 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="895a9751-f534-47b7-8e60-f10a608dd46e" containerName="dnsmasq-dns" Nov 29 07:26:50 crc kubenswrapper[4731]: I1129 07:26:50.897315 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="895a9751-f534-47b7-8e60-f10a608dd46e" containerName="dnsmasq-dns" Nov 29 07:26:50 crc kubenswrapper[4731]: I1129 07:26:50.897339 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e6d89c1-88bc-4ac6-815c-e06e157bc096" containerName="cinder-scheduler" Nov 29 07:26:50 crc kubenswrapper[4731]: I1129 07:26:50.897346 4731 
memory_manager.go:354] "RemoveStaleState removing state" podUID="08021cda-119f-413c-86ef-ef64660e60bb" containerName="barbican-api" Nov 29 07:26:50 crc kubenswrapper[4731]: I1129 07:26:50.897355 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e6d89c1-88bc-4ac6-815c-e06e157bc096" containerName="probe" Nov 29 07:26:50 crc kubenswrapper[4731]: I1129 07:26:50.897371 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="08021cda-119f-413c-86ef-ef64660e60bb" containerName="barbican-api-log" Nov 29 07:26:50 crc kubenswrapper[4731]: I1129 07:26:50.898662 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 29 07:26:50 crc kubenswrapper[4731]: I1129 07:26:50.899758 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 29 07:26:50 crc kubenswrapper[4731]: I1129 07:26:50.901534 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 29 07:26:50 crc kubenswrapper[4731]: I1129 07:26:50.949427 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68d82\" (UniqueName: \"kubernetes.io/projected/15f82353-6105-4eb5-b791-dadbd7e2171f-kube-api-access-68d82\") pod \"cinder-scheduler-0\" (UID: \"15f82353-6105-4eb5-b791-dadbd7e2171f\") " pod="openstack/cinder-scheduler-0" Nov 29 07:26:50 crc kubenswrapper[4731]: I1129 07:26:50.949858 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/15f82353-6105-4eb5-b791-dadbd7e2171f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"15f82353-6105-4eb5-b791-dadbd7e2171f\") " pod="openstack/cinder-scheduler-0" Nov 29 07:26:50 crc kubenswrapper[4731]: I1129 07:26:50.950021 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15f82353-6105-4eb5-b791-dadbd7e2171f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"15f82353-6105-4eb5-b791-dadbd7e2171f\") " pod="openstack/cinder-scheduler-0" Nov 29 07:26:50 crc kubenswrapper[4731]: I1129 07:26:50.950188 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15f82353-6105-4eb5-b791-dadbd7e2171f-scripts\") pod \"cinder-scheduler-0\" (UID: \"15f82353-6105-4eb5-b791-dadbd7e2171f\") " pod="openstack/cinder-scheduler-0" Nov 29 07:26:50 crc kubenswrapper[4731]: I1129 07:26:50.950281 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15f82353-6105-4eb5-b791-dadbd7e2171f-config-data\") pod \"cinder-scheduler-0\" (UID: \"15f82353-6105-4eb5-b791-dadbd7e2171f\") " pod="openstack/cinder-scheduler-0" Nov 29 07:26:50 crc kubenswrapper[4731]: I1129 07:26:50.950383 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/15f82353-6105-4eb5-b791-dadbd7e2171f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"15f82353-6105-4eb5-b791-dadbd7e2171f\") " pod="openstack/cinder-scheduler-0" Nov 29 07:26:51 crc kubenswrapper[4731]: I1129 07:26:51.052111 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15f82353-6105-4eb5-b791-dadbd7e2171f-config-data\") pod \"cinder-scheduler-0\" (UID: \"15f82353-6105-4eb5-b791-dadbd7e2171f\") " pod="openstack/cinder-scheduler-0" Nov 29 07:26:51 crc kubenswrapper[4731]: I1129 07:26:51.052200 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/15f82353-6105-4eb5-b791-dadbd7e2171f-config-data-custom\") pod 
\"cinder-scheduler-0\" (UID: \"15f82353-6105-4eb5-b791-dadbd7e2171f\") " pod="openstack/cinder-scheduler-0" Nov 29 07:26:51 crc kubenswrapper[4731]: I1129 07:26:51.052263 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68d82\" (UniqueName: \"kubernetes.io/projected/15f82353-6105-4eb5-b791-dadbd7e2171f-kube-api-access-68d82\") pod \"cinder-scheduler-0\" (UID: \"15f82353-6105-4eb5-b791-dadbd7e2171f\") " pod="openstack/cinder-scheduler-0" Nov 29 07:26:51 crc kubenswrapper[4731]: I1129 07:26:51.052309 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/15f82353-6105-4eb5-b791-dadbd7e2171f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"15f82353-6105-4eb5-b791-dadbd7e2171f\") " pod="openstack/cinder-scheduler-0" Nov 29 07:26:51 crc kubenswrapper[4731]: I1129 07:26:51.052355 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15f82353-6105-4eb5-b791-dadbd7e2171f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"15f82353-6105-4eb5-b791-dadbd7e2171f\") " pod="openstack/cinder-scheduler-0" Nov 29 07:26:51 crc kubenswrapper[4731]: I1129 07:26:51.052421 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15f82353-6105-4eb5-b791-dadbd7e2171f-scripts\") pod \"cinder-scheduler-0\" (UID: \"15f82353-6105-4eb5-b791-dadbd7e2171f\") " pod="openstack/cinder-scheduler-0" Nov 29 07:26:51 crc kubenswrapper[4731]: I1129 07:26:51.053389 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/15f82353-6105-4eb5-b791-dadbd7e2171f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"15f82353-6105-4eb5-b791-dadbd7e2171f\") " pod="openstack/cinder-scheduler-0" Nov 29 07:26:51 crc kubenswrapper[4731]: I1129 
07:26:51.059230 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15f82353-6105-4eb5-b791-dadbd7e2171f-config-data\") pod \"cinder-scheduler-0\" (UID: \"15f82353-6105-4eb5-b791-dadbd7e2171f\") " pod="openstack/cinder-scheduler-0" Nov 29 07:26:51 crc kubenswrapper[4731]: I1129 07:26:51.059580 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15f82353-6105-4eb5-b791-dadbd7e2171f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"15f82353-6105-4eb5-b791-dadbd7e2171f\") " pod="openstack/cinder-scheduler-0" Nov 29 07:26:51 crc kubenswrapper[4731]: I1129 07:26:51.062666 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15f82353-6105-4eb5-b791-dadbd7e2171f-scripts\") pod \"cinder-scheduler-0\" (UID: \"15f82353-6105-4eb5-b791-dadbd7e2171f\") " pod="openstack/cinder-scheduler-0" Nov 29 07:26:51 crc kubenswrapper[4731]: I1129 07:26:51.063075 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/15f82353-6105-4eb5-b791-dadbd7e2171f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"15f82353-6105-4eb5-b791-dadbd7e2171f\") " pod="openstack/cinder-scheduler-0" Nov 29 07:26:51 crc kubenswrapper[4731]: I1129 07:26:51.071839 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-74694f6999-x4dvv" Nov 29 07:26:51 crc kubenswrapper[4731]: I1129 07:26:51.072895 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68d82\" (UniqueName: \"kubernetes.io/projected/15f82353-6105-4eb5-b791-dadbd7e2171f-kube-api-access-68d82\") pod \"cinder-scheduler-0\" (UID: \"15f82353-6105-4eb5-b791-dadbd7e2171f\") " pod="openstack/cinder-scheduler-0" Nov 29 07:26:51 crc kubenswrapper[4731]: I1129 07:26:51.220396 
4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 29 07:26:51 crc kubenswrapper[4731]: I1129 07:26:51.721230 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 29 07:26:51 crc kubenswrapper[4731]: W1129 07:26:51.731895 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod15f82353_6105_4eb5_b791_dadbd7e2171f.slice/crio-ac4ce949b9dd4a8cd2fae2da29433a4d357c4fd8674bb54d21db7a98092f4a67 WatchSource:0}: Error finding container ac4ce949b9dd4a8cd2fae2da29433a4d357c4fd8674bb54d21db7a98092f4a67: Status 404 returned error can't find the container with id ac4ce949b9dd4a8cd2fae2da29433a4d357c4fd8674bb54d21db7a98092f4a67 Nov 29 07:26:51 crc kubenswrapper[4731]: I1129 07:26:51.852823 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e6d89c1-88bc-4ac6-815c-e06e157bc096" path="/var/lib/kubelet/pods/2e6d89c1-88bc-4ac6-815c-e06e157bc096/volumes" Nov 29 07:26:51 crc kubenswrapper[4731]: I1129 07:26:51.868653 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"15f82353-6105-4eb5-b791-dadbd7e2171f","Type":"ContainerStarted","Data":"ac4ce949b9dd4a8cd2fae2da29433a4d357c4fd8674bb54d21db7a98092f4a67"} Nov 29 07:26:52 crc kubenswrapper[4731]: I1129 07:26:52.858386 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"15f82353-6105-4eb5-b791-dadbd7e2171f","Type":"ContainerStarted","Data":"4f7c5e59fb4120a64eb2acce9fe0e492fa36485031d160b8a814545799cf1560"} Nov 29 07:26:53 crc kubenswrapper[4731]: I1129 07:26:53.632048 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Nov 29 07:26:53 crc kubenswrapper[4731]: I1129 07:26:53.634067 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 29 07:26:53 crc kubenswrapper[4731]: I1129 07:26:53.641236 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Nov 29 07:26:53 crc kubenswrapper[4731]: I1129 07:26:53.641361 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Nov 29 07:26:53 crc kubenswrapper[4731]: I1129 07:26:53.641687 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-qs4bz" Nov 29 07:26:53 crc kubenswrapper[4731]: I1129 07:26:53.651841 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 29 07:26:53 crc kubenswrapper[4731]: I1129 07:26:53.723551 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/29975a61-a787-4a38-a109-3e7cae8ed917-openstack-config\") pod \"openstackclient\" (UID: \"29975a61-a787-4a38-a109-3e7cae8ed917\") " pod="openstack/openstackclient" Nov 29 07:26:53 crc kubenswrapper[4731]: I1129 07:26:53.723965 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29975a61-a787-4a38-a109-3e7cae8ed917-combined-ca-bundle\") pod \"openstackclient\" (UID: \"29975a61-a787-4a38-a109-3e7cae8ed917\") " pod="openstack/openstackclient" Nov 29 07:26:53 crc kubenswrapper[4731]: I1129 07:26:53.724010 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pf7t\" (UniqueName: \"kubernetes.io/projected/29975a61-a787-4a38-a109-3e7cae8ed917-kube-api-access-7pf7t\") pod \"openstackclient\" (UID: \"29975a61-a787-4a38-a109-3e7cae8ed917\") " pod="openstack/openstackclient" Nov 29 07:26:53 crc kubenswrapper[4731]: I1129 07:26:53.724374 4731 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/29975a61-a787-4a38-a109-3e7cae8ed917-openstack-config-secret\") pod \"openstackclient\" (UID: \"29975a61-a787-4a38-a109-3e7cae8ed917\") " pod="openstack/openstackclient" Nov 29 07:26:53 crc kubenswrapper[4731]: I1129 07:26:53.752198 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-bdbcc6468-k4knd" Nov 29 07:26:53 crc kubenswrapper[4731]: I1129 07:26:53.827447 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/29975a61-a787-4a38-a109-3e7cae8ed917-openstack-config-secret\") pod \"openstackclient\" (UID: \"29975a61-a787-4a38-a109-3e7cae8ed917\") " pod="openstack/openstackclient" Nov 29 07:26:53 crc kubenswrapper[4731]: I1129 07:26:53.827559 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/29975a61-a787-4a38-a109-3e7cae8ed917-openstack-config\") pod \"openstackclient\" (UID: \"29975a61-a787-4a38-a109-3e7cae8ed917\") " pod="openstack/openstackclient" Nov 29 07:26:53 crc kubenswrapper[4731]: I1129 07:26:53.827753 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29975a61-a787-4a38-a109-3e7cae8ed917-combined-ca-bundle\") pod \"openstackclient\" (UID: \"29975a61-a787-4a38-a109-3e7cae8ed917\") " pod="openstack/openstackclient" Nov 29 07:26:53 crc kubenswrapper[4731]: I1129 07:26:53.827857 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7pf7t\" (UniqueName: \"kubernetes.io/projected/29975a61-a787-4a38-a109-3e7cae8ed917-kube-api-access-7pf7t\") pod \"openstackclient\" (UID: \"29975a61-a787-4a38-a109-3e7cae8ed917\") " pod="openstack/openstackclient" Nov 29 07:26:53 crc 
kubenswrapper[4731]: I1129 07:26:53.832013 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/29975a61-a787-4a38-a109-3e7cae8ed917-openstack-config\") pod \"openstackclient\" (UID: \"29975a61-a787-4a38-a109-3e7cae8ed917\") " pod="openstack/openstackclient" Nov 29 07:26:53 crc kubenswrapper[4731]: I1129 07:26:53.837742 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/29975a61-a787-4a38-a109-3e7cae8ed917-openstack-config-secret\") pod \"openstackclient\" (UID: \"29975a61-a787-4a38-a109-3e7cae8ed917\") " pod="openstack/openstackclient" Nov 29 07:26:53 crc kubenswrapper[4731]: I1129 07:26:53.851526 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29975a61-a787-4a38-a109-3e7cae8ed917-combined-ca-bundle\") pod \"openstackclient\" (UID: \"29975a61-a787-4a38-a109-3e7cae8ed917\") " pod="openstack/openstackclient" Nov 29 07:26:53 crc kubenswrapper[4731]: I1129 07:26:53.859596 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7pf7t\" (UniqueName: \"kubernetes.io/projected/29975a61-a787-4a38-a109-3e7cae8ed917-kube-api-access-7pf7t\") pod \"openstackclient\" (UID: \"29975a61-a787-4a38-a109-3e7cae8ed917\") " pod="openstack/openstackclient" Nov 29 07:26:53 crc kubenswrapper[4731]: I1129 07:26:53.866785 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-bdbcc6468-k4knd" Nov 29 07:26:53 crc kubenswrapper[4731]: I1129 07:26:53.882509 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"15f82353-6105-4eb5-b791-dadbd7e2171f","Type":"ContainerStarted","Data":"c8f55149847008dde7ed47d7dbde8e318e5c9cffad1fd78a81ece6a5c20d1776"} Nov 29 07:26:53 crc kubenswrapper[4731]: I1129 07:26:53.939829 4731 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.939700438 podStartE2EDuration="3.939700438s" podCreationTimestamp="2025-11-29 07:26:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:26:53.924427079 +0000 UTC m=+1252.814788192" watchObservedRunningTime="2025-11-29 07:26:53.939700438 +0000 UTC m=+1252.830061561" Nov 29 07:26:53 crc kubenswrapper[4731]: I1129 07:26:53.961656 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 29 07:26:54 crc kubenswrapper[4731]: I1129 07:26:54.116616 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Nov 29 07:26:54 crc kubenswrapper[4731]: I1129 07:26:54.130082 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Nov 29 07:26:54 crc kubenswrapper[4731]: I1129 07:26:54.161878 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Nov 29 07:26:54 crc kubenswrapper[4731]: I1129 07:26:54.163678 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 29 07:26:54 crc kubenswrapper[4731]: I1129 07:26:54.185082 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 29 07:26:54 crc kubenswrapper[4731]: I1129 07:26:54.237803 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpt2j\" (UniqueName: \"kubernetes.io/projected/f31d074a-cf1e-488e-9816-8cc25ab12d7f-kube-api-access-mpt2j\") pod \"openstackclient\" (UID: \"f31d074a-cf1e-488e-9816-8cc25ab12d7f\") " pod="openstack/openstackclient" Nov 29 07:26:54 crc kubenswrapper[4731]: I1129 07:26:54.237918 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f31d074a-cf1e-488e-9816-8cc25ab12d7f-combined-ca-bundle\") pod \"openstackclient\" (UID: \"f31d074a-cf1e-488e-9816-8cc25ab12d7f\") " pod="openstack/openstackclient" Nov 29 07:26:54 crc kubenswrapper[4731]: I1129 07:26:54.238097 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f31d074a-cf1e-488e-9816-8cc25ab12d7f-openstack-config\") pod \"openstackclient\" (UID: \"f31d074a-cf1e-488e-9816-8cc25ab12d7f\") " pod="openstack/openstackclient" Nov 29 07:26:54 crc kubenswrapper[4731]: I1129 07:26:54.238154 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f31d074a-cf1e-488e-9816-8cc25ab12d7f-openstack-config-secret\") pod \"openstackclient\" (UID: \"f31d074a-cf1e-488e-9816-8cc25ab12d7f\") " pod="openstack/openstackclient" Nov 29 07:26:54 crc kubenswrapper[4731]: I1129 07:26:54.341467 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/f31d074a-cf1e-488e-9816-8cc25ab12d7f-openstack-config\") pod \"openstackclient\" (UID: \"f31d074a-cf1e-488e-9816-8cc25ab12d7f\") " pod="openstack/openstackclient" Nov 29 07:26:54 crc kubenswrapper[4731]: I1129 07:26:54.341599 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f31d074a-cf1e-488e-9816-8cc25ab12d7f-openstack-config-secret\") pod \"openstackclient\" (UID: \"f31d074a-cf1e-488e-9816-8cc25ab12d7f\") " pod="openstack/openstackclient" Nov 29 07:26:54 crc kubenswrapper[4731]: I1129 07:26:54.341744 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpt2j\" (UniqueName: \"kubernetes.io/projected/f31d074a-cf1e-488e-9816-8cc25ab12d7f-kube-api-access-mpt2j\") pod \"openstackclient\" (UID: \"f31d074a-cf1e-488e-9816-8cc25ab12d7f\") " pod="openstack/openstackclient" Nov 29 07:26:54 crc kubenswrapper[4731]: I1129 07:26:54.341777 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f31d074a-cf1e-488e-9816-8cc25ab12d7f-combined-ca-bundle\") pod \"openstackclient\" (UID: \"f31d074a-cf1e-488e-9816-8cc25ab12d7f\") " pod="openstack/openstackclient" Nov 29 07:26:54 crc kubenswrapper[4731]: I1129 07:26:54.342476 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f31d074a-cf1e-488e-9816-8cc25ab12d7f-openstack-config\") pod \"openstackclient\" (UID: \"f31d074a-cf1e-488e-9816-8cc25ab12d7f\") " pod="openstack/openstackclient" Nov 29 07:26:54 crc kubenswrapper[4731]: I1129 07:26:54.349632 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f31d074a-cf1e-488e-9816-8cc25ab12d7f-openstack-config-secret\") pod \"openstackclient\" (UID: 
\"f31d074a-cf1e-488e-9816-8cc25ab12d7f\") " pod="openstack/openstackclient" Nov 29 07:26:54 crc kubenswrapper[4731]: I1129 07:26:54.363827 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpt2j\" (UniqueName: \"kubernetes.io/projected/f31d074a-cf1e-488e-9816-8cc25ab12d7f-kube-api-access-mpt2j\") pod \"openstackclient\" (UID: \"f31d074a-cf1e-488e-9816-8cc25ab12d7f\") " pod="openstack/openstackclient" Nov 29 07:26:54 crc kubenswrapper[4731]: I1129 07:26:54.369276 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f31d074a-cf1e-488e-9816-8cc25ab12d7f-combined-ca-bundle\") pod \"openstackclient\" (UID: \"f31d074a-cf1e-488e-9816-8cc25ab12d7f\") " pod="openstack/openstackclient" Nov 29 07:26:54 crc kubenswrapper[4731]: E1129 07:26:54.427459 4731 log.go:32] "RunPodSandbox from runtime service failed" err=< Nov 29 07:26:54 crc kubenswrapper[4731]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_29975a61-a787-4a38-a109-3e7cae8ed917_0(9c8f4bf0473ded2583b6ad6ea05e90b1e2189e888c642ddd246bfceb4de6df48): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"9c8f4bf0473ded2583b6ad6ea05e90b1e2189e888c642ddd246bfceb4de6df48" Netns:"/var/run/netns/89b743d2-2c2a-450c-88d6-0f4af077d9db" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=9c8f4bf0473ded2583b6ad6ea05e90b1e2189e888c642ddd246bfceb4de6df48;K8S_POD_UID=29975a61-a787-4a38-a109-3e7cae8ed917" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: [openstack/openstackclient/29975a61-a787-4a38-a109-3e7cae8ed917:ovn-kubernetes]: error adding container to network "ovn-kubernetes": CNI request failed with status 
400: '[openstack/openstackclient 9c8f4bf0473ded2583b6ad6ea05e90b1e2189e888c642ddd246bfceb4de6df48 network default NAD default] [openstack/openstackclient 9c8f4bf0473ded2583b6ad6ea05e90b1e2189e888c642ddd246bfceb4de6df48 network default NAD default] pod deleted before sandbox ADD operation began Nov 29 07:26:54 crc kubenswrapper[4731]: ' Nov 29 07:26:54 crc kubenswrapper[4731]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Nov 29 07:26:54 crc kubenswrapper[4731]: > Nov 29 07:26:54 crc kubenswrapper[4731]: E1129 07:26:54.427554 4731 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Nov 29 07:26:54 crc kubenswrapper[4731]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_29975a61-a787-4a38-a109-3e7cae8ed917_0(9c8f4bf0473ded2583b6ad6ea05e90b1e2189e888c642ddd246bfceb4de6df48): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"9c8f4bf0473ded2583b6ad6ea05e90b1e2189e888c642ddd246bfceb4de6df48" Netns:"/var/run/netns/89b743d2-2c2a-450c-88d6-0f4af077d9db" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=9c8f4bf0473ded2583b6ad6ea05e90b1e2189e888c642ddd246bfceb4de6df48;K8S_POD_UID=29975a61-a787-4a38-a109-3e7cae8ed917" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: [openstack/openstackclient/29975a61-a787-4a38-a109-3e7cae8ed917:ovn-kubernetes]: error adding container to network "ovn-kubernetes": CNI request failed with 
status 400: '[openstack/openstackclient 9c8f4bf0473ded2583b6ad6ea05e90b1e2189e888c642ddd246bfceb4de6df48 network default NAD default] [openstack/openstackclient 9c8f4bf0473ded2583b6ad6ea05e90b1e2189e888c642ddd246bfceb4de6df48 network default NAD default] pod deleted before sandbox ADD operation began Nov 29 07:26:54 crc kubenswrapper[4731]: ' Nov 29 07:26:54 crc kubenswrapper[4731]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Nov 29 07:26:54 crc kubenswrapper[4731]: > pod="openstack/openstackclient" Nov 29 07:26:54 crc kubenswrapper[4731]: I1129 07:26:54.497631 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 29 07:26:54 crc kubenswrapper[4731]: I1129 07:26:54.896694 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 29 07:26:54 crc kubenswrapper[4731]: I1129 07:26:54.903489 4731 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="29975a61-a787-4a38-a109-3e7cae8ed917" podUID="f31d074a-cf1e-488e-9816-8cc25ab12d7f" Nov 29 07:26:54 crc kubenswrapper[4731]: I1129 07:26:54.913296 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 29 07:26:54 crc kubenswrapper[4731]: I1129 07:26:54.961297 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7pf7t\" (UniqueName: \"kubernetes.io/projected/29975a61-a787-4a38-a109-3e7cae8ed917-kube-api-access-7pf7t\") pod \"29975a61-a787-4a38-a109-3e7cae8ed917\" (UID: \"29975a61-a787-4a38-a109-3e7cae8ed917\") " Nov 29 07:26:54 crc kubenswrapper[4731]: I1129 07:26:54.961488 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/29975a61-a787-4a38-a109-3e7cae8ed917-openstack-config\") pod \"29975a61-a787-4a38-a109-3e7cae8ed917\" (UID: \"29975a61-a787-4a38-a109-3e7cae8ed917\") " Nov 29 07:26:54 crc kubenswrapper[4731]: I1129 07:26:54.961660 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29975a61-a787-4a38-a109-3e7cae8ed917-combined-ca-bundle\") pod \"29975a61-a787-4a38-a109-3e7cae8ed917\" (UID: \"29975a61-a787-4a38-a109-3e7cae8ed917\") " Nov 29 07:26:54 crc kubenswrapper[4731]: I1129 07:26:54.961694 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/29975a61-a787-4a38-a109-3e7cae8ed917-openstack-config-secret\") pod \"29975a61-a787-4a38-a109-3e7cae8ed917\" (UID: \"29975a61-a787-4a38-a109-3e7cae8ed917\") " Nov 29 07:26:54 crc kubenswrapper[4731]: I1129 07:26:54.962375 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29975a61-a787-4a38-a109-3e7cae8ed917-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "29975a61-a787-4a38-a109-3e7cae8ed917" (UID: "29975a61-a787-4a38-a109-3e7cae8ed917"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:26:54 crc kubenswrapper[4731]: I1129 07:26:54.969984 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29975a61-a787-4a38-a109-3e7cae8ed917-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "29975a61-a787-4a38-a109-3e7cae8ed917" (UID: "29975a61-a787-4a38-a109-3e7cae8ed917"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:26:54 crc kubenswrapper[4731]: I1129 07:26:54.971040 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29975a61-a787-4a38-a109-3e7cae8ed917-kube-api-access-7pf7t" (OuterVolumeSpecName: "kube-api-access-7pf7t") pod "29975a61-a787-4a38-a109-3e7cae8ed917" (UID: "29975a61-a787-4a38-a109-3e7cae8ed917"). InnerVolumeSpecName "kube-api-access-7pf7t". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:26:54 crc kubenswrapper[4731]: I1129 07:26:54.980370 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29975a61-a787-4a38-a109-3e7cae8ed917-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "29975a61-a787-4a38-a109-3e7cae8ed917" (UID: "29975a61-a787-4a38-a109-3e7cae8ed917"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:26:54 crc kubenswrapper[4731]: I1129 07:26:54.996013 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Nov 29 07:26:55 crc kubenswrapper[4731]: I1129 07:26:55.068656 4731 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/29975a61-a787-4a38-a109-3e7cae8ed917-openstack-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:55 crc kubenswrapper[4731]: I1129 07:26:55.068701 4731 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29975a61-a787-4a38-a109-3e7cae8ed917-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:55 crc kubenswrapper[4731]: I1129 07:26:55.068712 4731 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/29975a61-a787-4a38-a109-3e7cae8ed917-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:55 crc kubenswrapper[4731]: I1129 07:26:55.068721 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7pf7t\" (UniqueName: \"kubernetes.io/projected/29975a61-a787-4a38-a109-3e7cae8ed917-kube-api-access-7pf7t\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:55 crc kubenswrapper[4731]: I1129 07:26:55.074884 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 29 07:26:55 crc kubenswrapper[4731]: I1129 07:26:55.824057 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29975a61-a787-4a38-a109-3e7cae8ed917" path="/var/lib/kubelet/pods/29975a61-a787-4a38-a109-3e7cae8ed917/volumes" Nov 29 07:26:55 crc kubenswrapper[4731]: I1129 07:26:55.915332 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" 
event={"ID":"f31d074a-cf1e-488e-9816-8cc25ab12d7f","Type":"ContainerStarted","Data":"fe8aea1000bbfd343bd55ac54c5284324351328317259b3e9a661bfcab691b03"} Nov 29 07:26:55 crc kubenswrapper[4731]: I1129 07:26:55.915401 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 29 07:26:55 crc kubenswrapper[4731]: I1129 07:26:55.924055 4731 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="29975a61-a787-4a38-a109-3e7cae8ed917" podUID="f31d074a-cf1e-488e-9816-8cc25ab12d7f" Nov 29 07:26:56 crc kubenswrapper[4731]: I1129 07:26:56.220778 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Nov 29 07:26:56 crc kubenswrapper[4731]: I1129 07:26:56.252490 4731 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-84cd78f644-7wncn" podUID="bf7f6cfb-9c72-4be9-9177-cd14712e1c1e" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused" Nov 29 07:27:00 crc kubenswrapper[4731]: I1129 07:27:00.132599 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-749fbbbcf-hcvbs"] Nov 29 07:27:00 crc kubenswrapper[4731]: I1129 07:27:00.140857 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-749fbbbcf-hcvbs" Nov 29 07:27:00 crc kubenswrapper[4731]: I1129 07:27:00.146877 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Nov 29 07:27:00 crc kubenswrapper[4731]: I1129 07:27:00.146951 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Nov 29 07:27:00 crc kubenswrapper[4731]: I1129 07:27:00.147828 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Nov 29 07:27:00 crc kubenswrapper[4731]: I1129 07:27:00.165003 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-749fbbbcf-hcvbs"] Nov 29 07:27:00 crc kubenswrapper[4731]: I1129 07:27:00.218525 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0703b6cb-649d-4744-a400-6b551fe79fc2-public-tls-certs\") pod \"swift-proxy-749fbbbcf-hcvbs\" (UID: \"0703b6cb-649d-4744-a400-6b551fe79fc2\") " pod="openstack/swift-proxy-749fbbbcf-hcvbs" Nov 29 07:27:00 crc kubenswrapper[4731]: I1129 07:27:00.218633 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0703b6cb-649d-4744-a400-6b551fe79fc2-run-httpd\") pod \"swift-proxy-749fbbbcf-hcvbs\" (UID: \"0703b6cb-649d-4744-a400-6b551fe79fc2\") " pod="openstack/swift-proxy-749fbbbcf-hcvbs" Nov 29 07:27:00 crc kubenswrapper[4731]: I1129 07:27:00.218660 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0703b6cb-649d-4744-a400-6b551fe79fc2-combined-ca-bundle\") pod \"swift-proxy-749fbbbcf-hcvbs\" (UID: \"0703b6cb-649d-4744-a400-6b551fe79fc2\") " pod="openstack/swift-proxy-749fbbbcf-hcvbs" Nov 29 07:27:00 crc kubenswrapper[4731]: I1129 
07:27:00.218685 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0703b6cb-649d-4744-a400-6b551fe79fc2-log-httpd\") pod \"swift-proxy-749fbbbcf-hcvbs\" (UID: \"0703b6cb-649d-4744-a400-6b551fe79fc2\") " pod="openstack/swift-proxy-749fbbbcf-hcvbs" Nov 29 07:27:00 crc kubenswrapper[4731]: I1129 07:27:00.218721 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qt2p8\" (UniqueName: \"kubernetes.io/projected/0703b6cb-649d-4744-a400-6b551fe79fc2-kube-api-access-qt2p8\") pod \"swift-proxy-749fbbbcf-hcvbs\" (UID: \"0703b6cb-649d-4744-a400-6b551fe79fc2\") " pod="openstack/swift-proxy-749fbbbcf-hcvbs" Nov 29 07:27:00 crc kubenswrapper[4731]: I1129 07:27:00.218834 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0703b6cb-649d-4744-a400-6b551fe79fc2-etc-swift\") pod \"swift-proxy-749fbbbcf-hcvbs\" (UID: \"0703b6cb-649d-4744-a400-6b551fe79fc2\") " pod="openstack/swift-proxy-749fbbbcf-hcvbs" Nov 29 07:27:00 crc kubenswrapper[4731]: I1129 07:27:00.218856 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0703b6cb-649d-4744-a400-6b551fe79fc2-config-data\") pod \"swift-proxy-749fbbbcf-hcvbs\" (UID: \"0703b6cb-649d-4744-a400-6b551fe79fc2\") " pod="openstack/swift-proxy-749fbbbcf-hcvbs" Nov 29 07:27:00 crc kubenswrapper[4731]: I1129 07:27:00.218962 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0703b6cb-649d-4744-a400-6b551fe79fc2-internal-tls-certs\") pod \"swift-proxy-749fbbbcf-hcvbs\" (UID: \"0703b6cb-649d-4744-a400-6b551fe79fc2\") " pod="openstack/swift-proxy-749fbbbcf-hcvbs" Nov 29 07:27:00 crc 
kubenswrapper[4731]: I1129 07:27:00.321521 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0703b6cb-649d-4744-a400-6b551fe79fc2-internal-tls-certs\") pod \"swift-proxy-749fbbbcf-hcvbs\" (UID: \"0703b6cb-649d-4744-a400-6b551fe79fc2\") " pod="openstack/swift-proxy-749fbbbcf-hcvbs" Nov 29 07:27:00 crc kubenswrapper[4731]: I1129 07:27:00.321690 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0703b6cb-649d-4744-a400-6b551fe79fc2-public-tls-certs\") pod \"swift-proxy-749fbbbcf-hcvbs\" (UID: \"0703b6cb-649d-4744-a400-6b551fe79fc2\") " pod="openstack/swift-proxy-749fbbbcf-hcvbs" Nov 29 07:27:00 crc kubenswrapper[4731]: I1129 07:27:00.321746 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0703b6cb-649d-4744-a400-6b551fe79fc2-run-httpd\") pod \"swift-proxy-749fbbbcf-hcvbs\" (UID: \"0703b6cb-649d-4744-a400-6b551fe79fc2\") " pod="openstack/swift-proxy-749fbbbcf-hcvbs" Nov 29 07:27:00 crc kubenswrapper[4731]: I1129 07:27:00.321771 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0703b6cb-649d-4744-a400-6b551fe79fc2-combined-ca-bundle\") pod \"swift-proxy-749fbbbcf-hcvbs\" (UID: \"0703b6cb-649d-4744-a400-6b551fe79fc2\") " pod="openstack/swift-proxy-749fbbbcf-hcvbs" Nov 29 07:27:00 crc kubenswrapper[4731]: I1129 07:27:00.321796 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0703b6cb-649d-4744-a400-6b551fe79fc2-log-httpd\") pod \"swift-proxy-749fbbbcf-hcvbs\" (UID: \"0703b6cb-649d-4744-a400-6b551fe79fc2\") " pod="openstack/swift-proxy-749fbbbcf-hcvbs" Nov 29 07:27:00 crc kubenswrapper[4731]: I1129 07:27:00.321829 4731 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-qt2p8\" (UniqueName: \"kubernetes.io/projected/0703b6cb-649d-4744-a400-6b551fe79fc2-kube-api-access-qt2p8\") pod \"swift-proxy-749fbbbcf-hcvbs\" (UID: \"0703b6cb-649d-4744-a400-6b551fe79fc2\") " pod="openstack/swift-proxy-749fbbbcf-hcvbs" Nov 29 07:27:00 crc kubenswrapper[4731]: I1129 07:27:00.321859 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0703b6cb-649d-4744-a400-6b551fe79fc2-etc-swift\") pod \"swift-proxy-749fbbbcf-hcvbs\" (UID: \"0703b6cb-649d-4744-a400-6b551fe79fc2\") " pod="openstack/swift-proxy-749fbbbcf-hcvbs" Nov 29 07:27:00 crc kubenswrapper[4731]: I1129 07:27:00.321884 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0703b6cb-649d-4744-a400-6b551fe79fc2-config-data\") pod \"swift-proxy-749fbbbcf-hcvbs\" (UID: \"0703b6cb-649d-4744-a400-6b551fe79fc2\") " pod="openstack/swift-proxy-749fbbbcf-hcvbs" Nov 29 07:27:00 crc kubenswrapper[4731]: I1129 07:27:00.324878 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0703b6cb-649d-4744-a400-6b551fe79fc2-run-httpd\") pod \"swift-proxy-749fbbbcf-hcvbs\" (UID: \"0703b6cb-649d-4744-a400-6b551fe79fc2\") " pod="openstack/swift-proxy-749fbbbcf-hcvbs" Nov 29 07:27:00 crc kubenswrapper[4731]: I1129 07:27:00.325421 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0703b6cb-649d-4744-a400-6b551fe79fc2-log-httpd\") pod \"swift-proxy-749fbbbcf-hcvbs\" (UID: \"0703b6cb-649d-4744-a400-6b551fe79fc2\") " pod="openstack/swift-proxy-749fbbbcf-hcvbs" Nov 29 07:27:00 crc kubenswrapper[4731]: I1129 07:27:00.341921 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/0703b6cb-649d-4744-a400-6b551fe79fc2-public-tls-certs\") pod \"swift-proxy-749fbbbcf-hcvbs\" (UID: \"0703b6cb-649d-4744-a400-6b551fe79fc2\") " pod="openstack/swift-proxy-749fbbbcf-hcvbs" Nov 29 07:27:00 crc kubenswrapper[4731]: I1129 07:27:00.342597 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0703b6cb-649d-4744-a400-6b551fe79fc2-combined-ca-bundle\") pod \"swift-proxy-749fbbbcf-hcvbs\" (UID: \"0703b6cb-649d-4744-a400-6b551fe79fc2\") " pod="openstack/swift-proxy-749fbbbcf-hcvbs" Nov 29 07:27:00 crc kubenswrapper[4731]: I1129 07:27:00.343091 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0703b6cb-649d-4744-a400-6b551fe79fc2-etc-swift\") pod \"swift-proxy-749fbbbcf-hcvbs\" (UID: \"0703b6cb-649d-4744-a400-6b551fe79fc2\") " pod="openstack/swift-proxy-749fbbbcf-hcvbs" Nov 29 07:27:00 crc kubenswrapper[4731]: I1129 07:27:00.346842 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0703b6cb-649d-4744-a400-6b551fe79fc2-config-data\") pod \"swift-proxy-749fbbbcf-hcvbs\" (UID: \"0703b6cb-649d-4744-a400-6b551fe79fc2\") " pod="openstack/swift-proxy-749fbbbcf-hcvbs" Nov 29 07:27:00 crc kubenswrapper[4731]: I1129 07:27:00.359539 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0703b6cb-649d-4744-a400-6b551fe79fc2-internal-tls-certs\") pod \"swift-proxy-749fbbbcf-hcvbs\" (UID: \"0703b6cb-649d-4744-a400-6b551fe79fc2\") " pod="openstack/swift-proxy-749fbbbcf-hcvbs" Nov 29 07:27:00 crc kubenswrapper[4731]: I1129 07:27:00.368614 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qt2p8\" (UniqueName: \"kubernetes.io/projected/0703b6cb-649d-4744-a400-6b551fe79fc2-kube-api-access-qt2p8\") pod 
\"swift-proxy-749fbbbcf-hcvbs\" (UID: \"0703b6cb-649d-4744-a400-6b551fe79fc2\") " pod="openstack/swift-proxy-749fbbbcf-hcvbs" Nov 29 07:27:00 crc kubenswrapper[4731]: I1129 07:27:00.464512 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-749fbbbcf-hcvbs" Nov 29 07:27:01 crc kubenswrapper[4731]: I1129 07:27:01.531712 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Nov 29 07:27:02 crc kubenswrapper[4731]: I1129 07:27:02.704518 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-66b9c88964-2rnsc" Nov 29 07:27:03 crc kubenswrapper[4731]: I1129 07:27:03.003116 4731 patch_prober.go:28] interesting pod/machine-config-daemon-rscr8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:27:03 crc kubenswrapper[4731]: I1129 07:27:03.003202 4731 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:27:03 crc kubenswrapper[4731]: I1129 07:27:03.572733 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:27:03 crc kubenswrapper[4731]: I1129 07:27:03.573506 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ad9b3a1d-2698-405e-b94a-45d96efd0400" containerName="ceilometer-central-agent" containerID="cri-o://e91a79e3be372608d66f3d15b49b5bc742d17829ae0c02924fc76c5a8bcb4bc0" gracePeriod=30 Nov 29 07:27:03 crc kubenswrapper[4731]: I1129 07:27:03.573610 4731 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ad9b3a1d-2698-405e-b94a-45d96efd0400" containerName="proxy-httpd" containerID="cri-o://98d5fb68ac4f81058db1b8fb8c7d28dbece676e4f45ba00fbb4d44e5acc8f9e4" gracePeriod=30 Nov 29 07:27:03 crc kubenswrapper[4731]: I1129 07:27:03.573872 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ad9b3a1d-2698-405e-b94a-45d96efd0400" containerName="sg-core" containerID="cri-o://1a60dda543bef3c02c14d1a2355f7d2d9cdca5a86d7a91b25c100b231d4256d7" gracePeriod=30 Nov 29 07:27:03 crc kubenswrapper[4731]: I1129 07:27:03.575872 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ad9b3a1d-2698-405e-b94a-45d96efd0400" containerName="ceilometer-notification-agent" containerID="cri-o://b0a52e0399e92e5901d134016b38e43a3062ae712d3ebd55795179617041415e" gracePeriod=30 Nov 29 07:27:03 crc kubenswrapper[4731]: I1129 07:27:03.588704 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 29 07:27:04 crc kubenswrapper[4731]: I1129 07:27:04.030410 4731 generic.go:334] "Generic (PLEG): container finished" podID="ad9b3a1d-2698-405e-b94a-45d96efd0400" containerID="98d5fb68ac4f81058db1b8fb8c7d28dbece676e4f45ba00fbb4d44e5acc8f9e4" exitCode=0 Nov 29 07:27:04 crc kubenswrapper[4731]: I1129 07:27:04.030968 4731 generic.go:334] "Generic (PLEG): container finished" podID="ad9b3a1d-2698-405e-b94a-45d96efd0400" containerID="1a60dda543bef3c02c14d1a2355f7d2d9cdca5a86d7a91b25c100b231d4256d7" exitCode=2 Nov 29 07:27:04 crc kubenswrapper[4731]: I1129 07:27:04.030528 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ad9b3a1d-2698-405e-b94a-45d96efd0400","Type":"ContainerDied","Data":"98d5fb68ac4f81058db1b8fb8c7d28dbece676e4f45ba00fbb4d44e5acc8f9e4"} Nov 29 07:27:04 crc kubenswrapper[4731]: I1129 
07:27:04.031029 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ad9b3a1d-2698-405e-b94a-45d96efd0400","Type":"ContainerDied","Data":"1a60dda543bef3c02c14d1a2355f7d2d9cdca5a86d7a91b25c100b231d4256d7"} Nov 29 07:27:05 crc kubenswrapper[4731]: I1129 07:27:05.046426 4731 generic.go:334] "Generic (PLEG): container finished" podID="ad9b3a1d-2698-405e-b94a-45d96efd0400" containerID="e91a79e3be372608d66f3d15b49b5bc742d17829ae0c02924fc76c5a8bcb4bc0" exitCode=0 Nov 29 07:27:05 crc kubenswrapper[4731]: I1129 07:27:05.046487 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ad9b3a1d-2698-405e-b94a-45d96efd0400","Type":"ContainerDied","Data":"e91a79e3be372608d66f3d15b49b5bc742d17829ae0c02924fc76c5a8bcb4bc0"} Nov 29 07:27:06 crc kubenswrapper[4731]: I1129 07:27:06.251488 4731 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-84cd78f644-7wncn" podUID="bf7f6cfb-9c72-4be9-9177-cd14712e1c1e" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused" Nov 29 07:27:06 crc kubenswrapper[4731]: I1129 07:27:06.252164 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-84cd78f644-7wncn" Nov 29 07:27:09 crc kubenswrapper[4731]: I1129 07:27:09.097686 4731 generic.go:334] "Generic (PLEG): container finished" podID="ad9b3a1d-2698-405e-b94a-45d96efd0400" containerID="b0a52e0399e92e5901d134016b38e43a3062ae712d3ebd55795179617041415e" exitCode=0 Nov 29 07:27:09 crc kubenswrapper[4731]: I1129 07:27:09.098225 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ad9b3a1d-2698-405e-b94a-45d96efd0400","Type":"ContainerDied","Data":"b0a52e0399e92e5901d134016b38e43a3062ae712d3ebd55795179617041415e"} Nov 29 07:27:09 crc kubenswrapper[4731]: E1129 07:27:09.914674 4731 log.go:32] 
"PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified" Nov 29 07:27:09 crc kubenswrapper[4731]: E1129 07:27:09.915679 4731 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:openstackclient,Image:quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified,Command:[/bin/sleep],Args:[infinity],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n59ch6fh56bh647hcdh669h5c4h5d6h64ch64h5dh669h557h548h5chfch67chb4h577hf8h576h656h84h5bch5d7h9fh57fh9hf9h546h57bh567q,ValueFrom:nil,},EnvVar{Name:OS_CLOUD,Value:default,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_HOST,Value:metric-storage-prometheus.openstack.svc,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_PORT,Value:9090,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:openstack-config,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/cloudrc,SubPath:cloudrc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mpt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMess
agePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42401,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42401,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstackclient_openstack(f31d074a-cf1e-488e-9816-8cc25ab12d7f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 07:27:09 crc kubenswrapper[4731]: E1129 07:27:09.917139 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstackclient" podUID="f31d074a-cf1e-488e-9816-8cc25ab12d7f" Nov 29 07:27:09 crc kubenswrapper[4731]: I1129 07:27:09.973464 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-749fbbbcf-hcvbs"] Nov 29 07:27:09 crc kubenswrapper[4731]: I1129 07:27:09.982168 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 29 07:27:10 crc kubenswrapper[4731]: I1129 07:27:10.069846 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ad9b3a1d-2698-405e-b94a-45d96efd0400-log-httpd\") pod \"ad9b3a1d-2698-405e-b94a-45d96efd0400\" (UID: \"ad9b3a1d-2698-405e-b94a-45d96efd0400\") " Nov 29 07:27:10 crc kubenswrapper[4731]: I1129 07:27:10.070317 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ad9b3a1d-2698-405e-b94a-45d96efd0400-sg-core-conf-yaml\") pod \"ad9b3a1d-2698-405e-b94a-45d96efd0400\" (UID: \"ad9b3a1d-2698-405e-b94a-45d96efd0400\") " Nov 29 07:27:10 crc kubenswrapper[4731]: I1129 07:27:10.070528 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad9b3a1d-2698-405e-b94a-45d96efd0400-scripts\") pod \"ad9b3a1d-2698-405e-b94a-45d96efd0400\" (UID: \"ad9b3a1d-2698-405e-b94a-45d96efd0400\") " Nov 29 07:27:10 crc kubenswrapper[4731]: I1129 07:27:10.070727 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v6px9\" (UniqueName: \"kubernetes.io/projected/ad9b3a1d-2698-405e-b94a-45d96efd0400-kube-api-access-v6px9\") pod \"ad9b3a1d-2698-405e-b94a-45d96efd0400\" (UID: \"ad9b3a1d-2698-405e-b94a-45d96efd0400\") " Nov 29 07:27:10 crc kubenswrapper[4731]: I1129 07:27:10.070793 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad9b3a1d-2698-405e-b94a-45d96efd0400-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ad9b3a1d-2698-405e-b94a-45d96efd0400" (UID: "ad9b3a1d-2698-405e-b94a-45d96efd0400"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:27:10 crc kubenswrapper[4731]: I1129 07:27:10.070938 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad9b3a1d-2698-405e-b94a-45d96efd0400-config-data\") pod \"ad9b3a1d-2698-405e-b94a-45d96efd0400\" (UID: \"ad9b3a1d-2698-405e-b94a-45d96efd0400\") " Nov 29 07:27:10 crc kubenswrapper[4731]: I1129 07:27:10.071687 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ad9b3a1d-2698-405e-b94a-45d96efd0400-run-httpd\") pod \"ad9b3a1d-2698-405e-b94a-45d96efd0400\" (UID: \"ad9b3a1d-2698-405e-b94a-45d96efd0400\") " Nov 29 07:27:10 crc kubenswrapper[4731]: I1129 07:27:10.071837 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad9b3a1d-2698-405e-b94a-45d96efd0400-combined-ca-bundle\") pod \"ad9b3a1d-2698-405e-b94a-45d96efd0400\" (UID: \"ad9b3a1d-2698-405e-b94a-45d96efd0400\") " Nov 29 07:27:10 crc kubenswrapper[4731]: I1129 07:27:10.072582 4731 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ad9b3a1d-2698-405e-b94a-45d96efd0400-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 29 07:27:10 crc kubenswrapper[4731]: I1129 07:27:10.085088 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad9b3a1d-2698-405e-b94a-45d96efd0400-scripts" (OuterVolumeSpecName: "scripts") pod "ad9b3a1d-2698-405e-b94a-45d96efd0400" (UID: "ad9b3a1d-2698-405e-b94a-45d96efd0400"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:27:10 crc kubenswrapper[4731]: I1129 07:27:10.085454 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad9b3a1d-2698-405e-b94a-45d96efd0400-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ad9b3a1d-2698-405e-b94a-45d96efd0400" (UID: "ad9b3a1d-2698-405e-b94a-45d96efd0400"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:27:10 crc kubenswrapper[4731]: I1129 07:27:10.100897 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad9b3a1d-2698-405e-b94a-45d96efd0400-kube-api-access-v6px9" (OuterVolumeSpecName: "kube-api-access-v6px9") pod "ad9b3a1d-2698-405e-b94a-45d96efd0400" (UID: "ad9b3a1d-2698-405e-b94a-45d96efd0400"). InnerVolumeSpecName "kube-api-access-v6px9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:27:10 crc kubenswrapper[4731]: I1129 07:27:10.105988 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad9b3a1d-2698-405e-b94a-45d96efd0400-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ad9b3a1d-2698-405e-b94a-45d96efd0400" (UID: "ad9b3a1d-2698-405e-b94a-45d96efd0400"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:27:10 crc kubenswrapper[4731]: I1129 07:27:10.140349 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-558fbdd7b9-2w7vs" Nov 29 07:27:10 crc kubenswrapper[4731]: I1129 07:27:10.143252 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ad9b3a1d-2698-405e-b94a-45d96efd0400","Type":"ContainerDied","Data":"2ce9effe3d3eb311109fc98cae51a9f7136c2928a5032c0de973c7a0b18d1511"} Nov 29 07:27:10 crc kubenswrapper[4731]: I1129 07:27:10.143354 4731 scope.go:117] "RemoveContainer" containerID="98d5fb68ac4f81058db1b8fb8c7d28dbece676e4f45ba00fbb4d44e5acc8f9e4" Nov 29 07:27:10 crc kubenswrapper[4731]: I1129 07:27:10.143738 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 29 07:27:10 crc kubenswrapper[4731]: I1129 07:27:10.155334 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-749fbbbcf-hcvbs" event={"ID":"0703b6cb-649d-4744-a400-6b551fe79fc2","Type":"ContainerStarted","Data":"4cea91184a2d1ee91f07b346898b13dee0b40be29a8edd3d5673f5d45ebb0b4e"} Nov 29 07:27:10 crc kubenswrapper[4731]: E1129 07:27:10.157815 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified\\\"\"" pod="openstack/openstackclient" podUID="f31d074a-cf1e-488e-9816-8cc25ab12d7f" Nov 29 07:27:10 crc kubenswrapper[4731]: I1129 07:27:10.177870 4731 scope.go:117] "RemoveContainer" containerID="1a60dda543bef3c02c14d1a2355f7d2d9cdca5a86d7a91b25c100b231d4256d7" Nov 29 07:27:10 crc kubenswrapper[4731]: I1129 07:27:10.189741 4731 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/ad9b3a1d-2698-405e-b94a-45d96efd0400-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 29 07:27:10 crc kubenswrapper[4731]: I1129 07:27:10.189798 4731 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad9b3a1d-2698-405e-b94a-45d96efd0400-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:27:10 crc kubenswrapper[4731]: I1129 07:27:10.189817 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v6px9\" (UniqueName: \"kubernetes.io/projected/ad9b3a1d-2698-405e-b94a-45d96efd0400-kube-api-access-v6px9\") on node \"crc\" DevicePath \"\"" Nov 29 07:27:10 crc kubenswrapper[4731]: I1129 07:27:10.189836 4731 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ad9b3a1d-2698-405e-b94a-45d96efd0400-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 29 07:27:10 crc kubenswrapper[4731]: I1129 07:27:10.272170 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-66b9c88964-2rnsc"] Nov 29 07:27:10 crc kubenswrapper[4731]: I1129 07:27:10.272535 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-66b9c88964-2rnsc" podUID="56d6dd27-1657-4460-8dc9-cb18176d395a" containerName="neutron-api" containerID="cri-o://992f52c28773ba224397bb0cb0d37eebcfaafc80c0523017016b379618463758" gracePeriod=30 Nov 29 07:27:10 crc kubenswrapper[4731]: I1129 07:27:10.273899 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-66b9c88964-2rnsc" podUID="56d6dd27-1657-4460-8dc9-cb18176d395a" containerName="neutron-httpd" containerID="cri-o://a1a21951772ff613daba14cce21966185304d82830e89f9c511d4b48f162f49e" gracePeriod=30 Nov 29 07:27:10 crc kubenswrapper[4731]: I1129 07:27:10.303700 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad9b3a1d-2698-405e-b94a-45d96efd0400-combined-ca-bundle" 
(OuterVolumeSpecName: "combined-ca-bundle") pod "ad9b3a1d-2698-405e-b94a-45d96efd0400" (UID: "ad9b3a1d-2698-405e-b94a-45d96efd0400"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:27:10 crc kubenswrapper[4731]: I1129 07:27:10.327000 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad9b3a1d-2698-405e-b94a-45d96efd0400-config-data" (OuterVolumeSpecName: "config-data") pod "ad9b3a1d-2698-405e-b94a-45d96efd0400" (UID: "ad9b3a1d-2698-405e-b94a-45d96efd0400"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:27:10 crc kubenswrapper[4731]: I1129 07:27:10.396785 4731 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad9b3a1d-2698-405e-b94a-45d96efd0400-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:27:10 crc kubenswrapper[4731]: I1129 07:27:10.396830 4731 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad9b3a1d-2698-405e-b94a-45d96efd0400-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:27:10 crc kubenswrapper[4731]: I1129 07:27:10.438627 4731 scope.go:117] "RemoveContainer" containerID="b0a52e0399e92e5901d134016b38e43a3062ae712d3ebd55795179617041415e" Nov 29 07:27:10 crc kubenswrapper[4731]: I1129 07:27:10.467936 4731 scope.go:117] "RemoveContainer" containerID="e91a79e3be372608d66f3d15b49b5bc742d17829ae0c02924fc76c5a8bcb4bc0" Nov 29 07:27:10 crc kubenswrapper[4731]: I1129 07:27:10.508476 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:27:10 crc kubenswrapper[4731]: I1129 07:27:10.519007 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:27:10 crc kubenswrapper[4731]: I1129 07:27:10.545432 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 29 
07:27:10 crc kubenswrapper[4731]: E1129 07:27:10.546344 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad9b3a1d-2698-405e-b94a-45d96efd0400" containerName="ceilometer-notification-agent" Nov 29 07:27:10 crc kubenswrapper[4731]: I1129 07:27:10.546388 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad9b3a1d-2698-405e-b94a-45d96efd0400" containerName="ceilometer-notification-agent" Nov 29 07:27:10 crc kubenswrapper[4731]: E1129 07:27:10.546442 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad9b3a1d-2698-405e-b94a-45d96efd0400" containerName="sg-core" Nov 29 07:27:10 crc kubenswrapper[4731]: I1129 07:27:10.546454 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad9b3a1d-2698-405e-b94a-45d96efd0400" containerName="sg-core" Nov 29 07:27:10 crc kubenswrapper[4731]: E1129 07:27:10.546474 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad9b3a1d-2698-405e-b94a-45d96efd0400" containerName="proxy-httpd" Nov 29 07:27:10 crc kubenswrapper[4731]: I1129 07:27:10.546483 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad9b3a1d-2698-405e-b94a-45d96efd0400" containerName="proxy-httpd" Nov 29 07:27:10 crc kubenswrapper[4731]: E1129 07:27:10.546524 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad9b3a1d-2698-405e-b94a-45d96efd0400" containerName="ceilometer-central-agent" Nov 29 07:27:10 crc kubenswrapper[4731]: I1129 07:27:10.546634 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad9b3a1d-2698-405e-b94a-45d96efd0400" containerName="ceilometer-central-agent" Nov 29 07:27:10 crc kubenswrapper[4731]: I1129 07:27:10.552812 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad9b3a1d-2698-405e-b94a-45d96efd0400" containerName="proxy-httpd" Nov 29 07:27:10 crc kubenswrapper[4731]: I1129 07:27:10.552851 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad9b3a1d-2698-405e-b94a-45d96efd0400" 
containerName="ceilometer-central-agent" Nov 29 07:27:10 crc kubenswrapper[4731]: I1129 07:27:10.552861 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad9b3a1d-2698-405e-b94a-45d96efd0400" containerName="ceilometer-notification-agent" Nov 29 07:27:10 crc kubenswrapper[4731]: I1129 07:27:10.552882 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad9b3a1d-2698-405e-b94a-45d96efd0400" containerName="sg-core" Nov 29 07:27:10 crc kubenswrapper[4731]: I1129 07:27:10.555740 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 29 07:27:10 crc kubenswrapper[4731]: I1129 07:27:10.563591 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 29 07:27:10 crc kubenswrapper[4731]: I1129 07:27:10.564108 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 29 07:27:10 crc kubenswrapper[4731]: I1129 07:27:10.604474 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:27:10 crc kubenswrapper[4731]: I1129 07:27:10.705108 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e37ff080-fbd2-454b-861d-3660c1c17130-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e37ff080-fbd2-454b-861d-3660c1c17130\") " pod="openstack/ceilometer-0" Nov 29 07:27:10 crc kubenswrapper[4731]: I1129 07:27:10.705206 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7f89\" (UniqueName: \"kubernetes.io/projected/e37ff080-fbd2-454b-861d-3660c1c17130-kube-api-access-g7f89\") pod \"ceilometer-0\" (UID: \"e37ff080-fbd2-454b-861d-3660c1c17130\") " pod="openstack/ceilometer-0" Nov 29 07:27:10 crc kubenswrapper[4731]: I1129 07:27:10.705245 4731 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e37ff080-fbd2-454b-861d-3660c1c17130-config-data\") pod \"ceilometer-0\" (UID: \"e37ff080-fbd2-454b-861d-3660c1c17130\") " pod="openstack/ceilometer-0" Nov 29 07:27:10 crc kubenswrapper[4731]: I1129 07:27:10.705286 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e37ff080-fbd2-454b-861d-3660c1c17130-run-httpd\") pod \"ceilometer-0\" (UID: \"e37ff080-fbd2-454b-861d-3660c1c17130\") " pod="openstack/ceilometer-0" Nov 29 07:27:10 crc kubenswrapper[4731]: I1129 07:27:10.705487 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e37ff080-fbd2-454b-861d-3660c1c17130-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e37ff080-fbd2-454b-861d-3660c1c17130\") " pod="openstack/ceilometer-0" Nov 29 07:27:10 crc kubenswrapper[4731]: I1129 07:27:10.705758 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e37ff080-fbd2-454b-861d-3660c1c17130-scripts\") pod \"ceilometer-0\" (UID: \"e37ff080-fbd2-454b-861d-3660c1c17130\") " pod="openstack/ceilometer-0" Nov 29 07:27:10 crc kubenswrapper[4731]: I1129 07:27:10.705826 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e37ff080-fbd2-454b-861d-3660c1c17130-log-httpd\") pod \"ceilometer-0\" (UID: \"e37ff080-fbd2-454b-861d-3660c1c17130\") " pod="openstack/ceilometer-0" Nov 29 07:27:10 crc kubenswrapper[4731]: I1129 07:27:10.807967 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e37ff080-fbd2-454b-861d-3660c1c17130-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e37ff080-fbd2-454b-861d-3660c1c17130\") " pod="openstack/ceilometer-0"
Nov 29 07:27:10 crc kubenswrapper[4731]: I1129 07:27:10.808064 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7f89\" (UniqueName: \"kubernetes.io/projected/e37ff080-fbd2-454b-861d-3660c1c17130-kube-api-access-g7f89\") pod \"ceilometer-0\" (UID: \"e37ff080-fbd2-454b-861d-3660c1c17130\") " pod="openstack/ceilometer-0"
Nov 29 07:27:10 crc kubenswrapper[4731]: I1129 07:27:10.808097 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e37ff080-fbd2-454b-861d-3660c1c17130-config-data\") pod \"ceilometer-0\" (UID: \"e37ff080-fbd2-454b-861d-3660c1c17130\") " pod="openstack/ceilometer-0"
Nov 29 07:27:10 crc kubenswrapper[4731]: I1129 07:27:10.808150 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e37ff080-fbd2-454b-861d-3660c1c17130-run-httpd\") pod \"ceilometer-0\" (UID: \"e37ff080-fbd2-454b-861d-3660c1c17130\") " pod="openstack/ceilometer-0"
Nov 29 07:27:10 crc kubenswrapper[4731]: I1129 07:27:10.808177 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e37ff080-fbd2-454b-861d-3660c1c17130-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e37ff080-fbd2-454b-861d-3660c1c17130\") " pod="openstack/ceilometer-0"
Nov 29 07:27:10 crc kubenswrapper[4731]: I1129 07:27:10.808220 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e37ff080-fbd2-454b-861d-3660c1c17130-scripts\") pod \"ceilometer-0\" (UID: \"e37ff080-fbd2-454b-861d-3660c1c17130\") " pod="openstack/ceilometer-0"
Nov 29 07:27:10 crc kubenswrapper[4731]: I1129 07:27:10.808244 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e37ff080-fbd2-454b-861d-3660c1c17130-log-httpd\") pod \"ceilometer-0\" (UID: \"e37ff080-fbd2-454b-861d-3660c1c17130\") " pod="openstack/ceilometer-0"
Nov 29 07:27:10 crc kubenswrapper[4731]: I1129 07:27:10.809006 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e37ff080-fbd2-454b-861d-3660c1c17130-run-httpd\") pod \"ceilometer-0\" (UID: \"e37ff080-fbd2-454b-861d-3660c1c17130\") " pod="openstack/ceilometer-0"
Nov 29 07:27:10 crc kubenswrapper[4731]: I1129 07:27:10.809031 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e37ff080-fbd2-454b-861d-3660c1c17130-log-httpd\") pod \"ceilometer-0\" (UID: \"e37ff080-fbd2-454b-861d-3660c1c17130\") " pod="openstack/ceilometer-0"
Nov 29 07:27:10 crc kubenswrapper[4731]: I1129 07:27:10.814243 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e37ff080-fbd2-454b-861d-3660c1c17130-scripts\") pod \"ceilometer-0\" (UID: \"e37ff080-fbd2-454b-861d-3660c1c17130\") " pod="openstack/ceilometer-0"
Nov 29 07:27:10 crc kubenswrapper[4731]: I1129 07:27:10.816084 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e37ff080-fbd2-454b-861d-3660c1c17130-config-data\") pod \"ceilometer-0\" (UID: \"e37ff080-fbd2-454b-861d-3660c1c17130\") " pod="openstack/ceilometer-0"
Nov 29 07:27:10 crc kubenswrapper[4731]: I1129 07:27:10.816178 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e37ff080-fbd2-454b-861d-3660c1c17130-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e37ff080-fbd2-454b-861d-3660c1c17130\") " pod="openstack/ceilometer-0"
Nov 29 07:27:10 crc kubenswrapper[4731]: I1129 07:27:10.816702 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e37ff080-fbd2-454b-861d-3660c1c17130-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e37ff080-fbd2-454b-861d-3660c1c17130\") " pod="openstack/ceilometer-0"
Nov 29 07:27:10 crc kubenswrapper[4731]: I1129 07:27:10.829451 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7f89\" (UniqueName: \"kubernetes.io/projected/e37ff080-fbd2-454b-861d-3660c1c17130-kube-api-access-g7f89\") pod \"ceilometer-0\" (UID: \"e37ff080-fbd2-454b-861d-3660c1c17130\") " pod="openstack/ceilometer-0"
Nov 29 07:27:10 crc kubenswrapper[4731]: I1129 07:27:10.889034 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 29 07:27:10 crc kubenswrapper[4731]: I1129 07:27:10.904001 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Nov 29 07:27:10 crc kubenswrapper[4731]: I1129 07:27:10.904352 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="7241db7b-fd6e-431a-b38a-6d3f3404a630" containerName="kube-state-metrics" containerID="cri-o://1bcc91bcdbeb0a72ea88a3a4f9801261f1df5f6a980ad1e1d5a2de646d0ce7fb" gracePeriod=30
Nov 29 07:27:11 crc kubenswrapper[4731]: I1129 07:27:11.171099 4731 generic.go:334] "Generic (PLEG): container finished" podID="7241db7b-fd6e-431a-b38a-6d3f3404a630" containerID="1bcc91bcdbeb0a72ea88a3a4f9801261f1df5f6a980ad1e1d5a2de646d0ce7fb" exitCode=2
Nov 29 07:27:11 crc kubenswrapper[4731]: I1129 07:27:11.171357 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"7241db7b-fd6e-431a-b38a-6d3f3404a630","Type":"ContainerDied","Data":"1bcc91bcdbeb0a72ea88a3a4f9801261f1df5f6a980ad1e1d5a2de646d0ce7fb"}
Nov 29 07:27:11 crc kubenswrapper[4731]: I1129 07:27:11.176265 4731 generic.go:334] "Generic (PLEG): container finished" podID="56d6dd27-1657-4460-8dc9-cb18176d395a" containerID="a1a21951772ff613daba14cce21966185304d82830e89f9c511d4b48f162f49e" exitCode=0
Nov 29 07:27:11 crc kubenswrapper[4731]: I1129 07:27:11.176375 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-66b9c88964-2rnsc" event={"ID":"56d6dd27-1657-4460-8dc9-cb18176d395a","Type":"ContainerDied","Data":"a1a21951772ff613daba14cce21966185304d82830e89f9c511d4b48f162f49e"}
Nov 29 07:27:11 crc kubenswrapper[4731]: I1129 07:27:11.184399 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-749fbbbcf-hcvbs" event={"ID":"0703b6cb-649d-4744-a400-6b551fe79fc2","Type":"ContainerStarted","Data":"257e3961e2966efb4cb8ba7b5334acfcbbc155b3a28396836871a1bc1a8d0df5"}
Nov 29 07:27:11 crc kubenswrapper[4731]: I1129 07:27:11.423033 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Nov 29 07:27:11 crc kubenswrapper[4731]: I1129 07:27:11.567427 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Nov 29 07:27:11 crc kubenswrapper[4731]: I1129 07:27:11.730782 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7tr7n\" (UniqueName: \"kubernetes.io/projected/7241db7b-fd6e-431a-b38a-6d3f3404a630-kube-api-access-7tr7n\") pod \"7241db7b-fd6e-431a-b38a-6d3f3404a630\" (UID: \"7241db7b-fd6e-431a-b38a-6d3f3404a630\") "
Nov 29 07:27:11 crc kubenswrapper[4731]: I1129 07:27:11.750931 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7241db7b-fd6e-431a-b38a-6d3f3404a630-kube-api-access-7tr7n" (OuterVolumeSpecName: "kube-api-access-7tr7n") pod "7241db7b-fd6e-431a-b38a-6d3f3404a630" (UID: "7241db7b-fd6e-431a-b38a-6d3f3404a630"). InnerVolumeSpecName "kube-api-access-7tr7n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 29 07:27:11 crc kubenswrapper[4731]: I1129 07:27:11.833816 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad9b3a1d-2698-405e-b94a-45d96efd0400" path="/var/lib/kubelet/pods/ad9b3a1d-2698-405e-b94a-45d96efd0400/volumes"
Nov 29 07:27:11 crc kubenswrapper[4731]: I1129 07:27:11.834668 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7tr7n\" (UniqueName: \"kubernetes.io/projected/7241db7b-fd6e-431a-b38a-6d3f3404a630-kube-api-access-7tr7n\") on node \"crc\" DevicePath \"\""
Nov 29 07:27:12 crc kubenswrapper[4731]: I1129 07:27:12.229206 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"7241db7b-fd6e-431a-b38a-6d3f3404a630","Type":"ContainerDied","Data":"b5bc2f8a248446e98868df724ccda0d65f94684f8d4a410d285d4c40aed1da1b"}
Nov 29 07:27:12 crc kubenswrapper[4731]: I1129 07:27:12.229282 4731 scope.go:117] "RemoveContainer" containerID="1bcc91bcdbeb0a72ea88a3a4f9801261f1df5f6a980ad1e1d5a2de646d0ce7fb"
Nov 29 07:27:12 crc kubenswrapper[4731]: I1129 07:27:12.229442 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Nov 29 07:27:12 crc kubenswrapper[4731]: I1129 07:27:12.233883 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e37ff080-fbd2-454b-861d-3660c1c17130","Type":"ContainerStarted","Data":"4b116a464374350b636d74e2aeb044aad93a3a77c50ee957741155072e2c8b6e"}
Nov 29 07:27:12 crc kubenswrapper[4731]: I1129 07:27:12.250008 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-749fbbbcf-hcvbs" event={"ID":"0703b6cb-649d-4744-a400-6b551fe79fc2","Type":"ContainerStarted","Data":"a6c0d138936f9fd76928d96add364a90a635d26e7c0b3dc2728cdcce53728830"}
Nov 29 07:27:12 crc kubenswrapper[4731]: I1129 07:27:12.250946 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-749fbbbcf-hcvbs"
Nov 29 07:27:12 crc kubenswrapper[4731]: I1129 07:27:12.251081 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-749fbbbcf-hcvbs"
Nov 29 07:27:12 crc kubenswrapper[4731]: I1129 07:27:12.294795 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-749fbbbcf-hcvbs" podStartSLOduration=12.294754507 podStartE2EDuration="12.294754507s" podCreationTimestamp="2025-11-29 07:27:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:27:12.276277236 +0000 UTC m=+1271.166638349" watchObservedRunningTime="2025-11-29 07:27:12.294754507 +0000 UTC m=+1271.185115610"
Nov 29 07:27:12 crc kubenswrapper[4731]: I1129 07:27:12.396026 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Nov 29 07:27:12 crc kubenswrapper[4731]: I1129 07:27:12.407084 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"]
Nov 29 07:27:12 crc kubenswrapper[4731]: I1129 07:27:12.419593 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"]
Nov 29 07:27:12 crc kubenswrapper[4731]: E1129 07:27:12.420088 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7241db7b-fd6e-431a-b38a-6d3f3404a630" containerName="kube-state-metrics"
Nov 29 07:27:12 crc kubenswrapper[4731]: I1129 07:27:12.420104 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="7241db7b-fd6e-431a-b38a-6d3f3404a630" containerName="kube-state-metrics"
Nov 29 07:27:12 crc kubenswrapper[4731]: I1129 07:27:12.420309 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="7241db7b-fd6e-431a-b38a-6d3f3404a630" containerName="kube-state-metrics"
Nov 29 07:27:12 crc kubenswrapper[4731]: I1129 07:27:12.421027 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Nov 29 07:27:12 crc kubenswrapper[4731]: I1129 07:27:12.426243 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config"
Nov 29 07:27:12 crc kubenswrapper[4731]: I1129 07:27:12.429324 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc"
Nov 29 07:27:12 crc kubenswrapper[4731]: I1129 07:27:12.482497 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Nov 29 07:27:12 crc kubenswrapper[4731]: I1129 07:27:12.564364 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/67ebcb38-078b-4f76-b700-e77cb1525f7d-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"67ebcb38-078b-4f76-b700-e77cb1525f7d\") " pod="openstack/kube-state-metrics-0"
Nov 29 07:27:12 crc kubenswrapper[4731]: I1129 07:27:12.564423 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/67ebcb38-078b-4f76-b700-e77cb1525f7d-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"67ebcb38-078b-4f76-b700-e77cb1525f7d\") " pod="openstack/kube-state-metrics-0"
Nov 29 07:27:12 crc kubenswrapper[4731]: I1129 07:27:12.564469 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6l2c\" (UniqueName: \"kubernetes.io/projected/67ebcb38-078b-4f76-b700-e77cb1525f7d-kube-api-access-c6l2c\") pod \"kube-state-metrics-0\" (UID: \"67ebcb38-078b-4f76-b700-e77cb1525f7d\") " pod="openstack/kube-state-metrics-0"
Nov 29 07:27:12 crc kubenswrapper[4731]: I1129 07:27:12.564491 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67ebcb38-078b-4f76-b700-e77cb1525f7d-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"67ebcb38-078b-4f76-b700-e77cb1525f7d\") " pod="openstack/kube-state-metrics-0"
Nov 29 07:27:12 crc kubenswrapper[4731]: I1129 07:27:12.666789 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/67ebcb38-078b-4f76-b700-e77cb1525f7d-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"67ebcb38-078b-4f76-b700-e77cb1525f7d\") " pod="openstack/kube-state-metrics-0"
Nov 29 07:27:12 crc kubenswrapper[4731]: I1129 07:27:12.666868 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/67ebcb38-078b-4f76-b700-e77cb1525f7d-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"67ebcb38-078b-4f76-b700-e77cb1525f7d\") " pod="openstack/kube-state-metrics-0"
Nov 29 07:27:12 crc kubenswrapper[4731]: I1129 07:27:12.666971 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6l2c\" (UniqueName: \"kubernetes.io/projected/67ebcb38-078b-4f76-b700-e77cb1525f7d-kube-api-access-c6l2c\") pod \"kube-state-metrics-0\" (UID: \"67ebcb38-078b-4f76-b700-e77cb1525f7d\") " pod="openstack/kube-state-metrics-0"
Nov 29 07:27:12 crc kubenswrapper[4731]: I1129 07:27:12.667014 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67ebcb38-078b-4f76-b700-e77cb1525f7d-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"67ebcb38-078b-4f76-b700-e77cb1525f7d\") " pod="openstack/kube-state-metrics-0"
Nov 29 07:27:12 crc kubenswrapper[4731]: I1129 07:27:12.682282 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/67ebcb38-078b-4f76-b700-e77cb1525f7d-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"67ebcb38-078b-4f76-b700-e77cb1525f7d\") " pod="openstack/kube-state-metrics-0"
Nov 29 07:27:12 crc kubenswrapper[4731]: I1129 07:27:12.682693 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67ebcb38-078b-4f76-b700-e77cb1525f7d-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"67ebcb38-078b-4f76-b700-e77cb1525f7d\") " pod="openstack/kube-state-metrics-0"
Nov 29 07:27:12 crc kubenswrapper[4731]: I1129 07:27:12.688358 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/67ebcb38-078b-4f76-b700-e77cb1525f7d-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"67ebcb38-078b-4f76-b700-e77cb1525f7d\") " pod="openstack/kube-state-metrics-0"
Nov 29 07:27:12 crc kubenswrapper[4731]: I1129 07:27:12.690170 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6l2c\" (UniqueName: \"kubernetes.io/projected/67ebcb38-078b-4f76-b700-e77cb1525f7d-kube-api-access-c6l2c\") pod \"kube-state-metrics-0\" (UID: \"67ebcb38-078b-4f76-b700-e77cb1525f7d\") " pod="openstack/kube-state-metrics-0"
Nov 29 07:27:12 crc kubenswrapper[4731]: I1129 07:27:12.765005 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Nov 29 07:27:13 crc kubenswrapper[4731]: I1129 07:27:13.290329 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e37ff080-fbd2-454b-861d-3660c1c17130","Type":"ContainerStarted","Data":"ab33daf7c32c756649a32df571a2d289b6af0679393eccc813963a562f2b1af0"}
Nov 29 07:27:13 crc kubenswrapper[4731]: I1129 07:27:13.355938 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Nov 29 07:27:13 crc kubenswrapper[4731]: I1129 07:27:13.826739 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7241db7b-fd6e-431a-b38a-6d3f3404a630" path="/var/lib/kubelet/pods/7241db7b-fd6e-431a-b38a-6d3f3404a630/volumes"
Nov 29 07:27:13 crc kubenswrapper[4731]: E1129 07:27:13.986327 4731 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podad9b3a1d_2698_405e_b94a_45d96efd0400.slice/crio-2ce9effe3d3eb311109fc98cae51a9f7136c2928a5032c0de973c7a0b18d1511\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbf7f6cfb_9c72_4be9_9177_cd14712e1c1e.slice/crio-27fa026eb4be33e0970601908f2bd67b51eec9bb4bd79b5ad9e662b251422727.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbf7f6cfb_9c72_4be9_9177_cd14712e1c1e.slice/crio-conmon-27fa026eb4be33e0970601908f2bd67b51eec9bb4bd79b5ad9e662b251422727.scope\": RecentStats: unable to find data in memory cache]"
Nov 29 07:27:14 crc kubenswrapper[4731]: I1129 07:27:14.269318 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Nov 29 07:27:14 crc kubenswrapper[4731]: I1129 07:27:14.325479 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e37ff080-fbd2-454b-861d-3660c1c17130","Type":"ContainerStarted","Data":"f373f65f0c4b1c5e6c342bce1b3bf2d36ac2fdc543d54d8a56d63ea802f84019"}
Nov 29 07:27:14 crc kubenswrapper[4731]: I1129 07:27:14.333790 4731 generic.go:334] "Generic (PLEG): container finished" podID="bf7f6cfb-9c72-4be9-9177-cd14712e1c1e" containerID="27fa026eb4be33e0970601908f2bd67b51eec9bb4bd79b5ad9e662b251422727" exitCode=137
Nov 29 07:27:14 crc kubenswrapper[4731]: I1129 07:27:14.333897 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-84cd78f644-7wncn" event={"ID":"bf7f6cfb-9c72-4be9-9177-cd14712e1c1e","Type":"ContainerDied","Data":"27fa026eb4be33e0970601908f2bd67b51eec9bb4bd79b5ad9e662b251422727"}
Nov 29 07:27:14 crc kubenswrapper[4731]: I1129 07:27:14.336101 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"67ebcb38-078b-4f76-b700-e77cb1525f7d","Type":"ContainerStarted","Data":"d1118dfac72134bf23f3fa51d9adc1d2765221271a6ebf9875459605216e07f6"}
Nov 29 07:27:14 crc kubenswrapper[4731]: I1129 07:27:14.817887 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-84cd78f644-7wncn"
Nov 29 07:27:14 crc kubenswrapper[4731]: I1129 07:27:14.948001 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bf7f6cfb-9c72-4be9-9177-cd14712e1c1e-scripts\") pod \"bf7f6cfb-9c72-4be9-9177-cd14712e1c1e\" (UID: \"bf7f6cfb-9c72-4be9-9177-cd14712e1c1e\") "
Nov 29 07:27:14 crc kubenswrapper[4731]: I1129 07:27:14.948804 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5rkvw\" (UniqueName: \"kubernetes.io/projected/bf7f6cfb-9c72-4be9-9177-cd14712e1c1e-kube-api-access-5rkvw\") pod \"bf7f6cfb-9c72-4be9-9177-cd14712e1c1e\" (UID: \"bf7f6cfb-9c72-4be9-9177-cd14712e1c1e\") "
Nov 29 07:27:14 crc kubenswrapper[4731]: I1129 07:27:14.948854 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf7f6cfb-9c72-4be9-9177-cd14712e1c1e-logs\") pod \"bf7f6cfb-9c72-4be9-9177-cd14712e1c1e\" (UID: \"bf7f6cfb-9c72-4be9-9177-cd14712e1c1e\") "
Nov 29 07:27:14 crc kubenswrapper[4731]: I1129 07:27:14.949017 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf7f6cfb-9c72-4be9-9177-cd14712e1c1e-combined-ca-bundle\") pod \"bf7f6cfb-9c72-4be9-9177-cd14712e1c1e\" (UID: \"bf7f6cfb-9c72-4be9-9177-cd14712e1c1e\") "
Nov 29 07:27:14 crc kubenswrapper[4731]: I1129 07:27:14.949070 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/bf7f6cfb-9c72-4be9-9177-cd14712e1c1e-horizon-secret-key\") pod \"bf7f6cfb-9c72-4be9-9177-cd14712e1c1e\" (UID: \"bf7f6cfb-9c72-4be9-9177-cd14712e1c1e\") "
Nov 29 07:27:14 crc kubenswrapper[4731]: I1129 07:27:14.949259 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bf7f6cfb-9c72-4be9-9177-cd14712e1c1e-config-data\") pod \"bf7f6cfb-9c72-4be9-9177-cd14712e1c1e\" (UID: \"bf7f6cfb-9c72-4be9-9177-cd14712e1c1e\") "
Nov 29 07:27:14 crc kubenswrapper[4731]: I1129 07:27:14.949370 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf7f6cfb-9c72-4be9-9177-cd14712e1c1e-horizon-tls-certs\") pod \"bf7f6cfb-9c72-4be9-9177-cd14712e1c1e\" (UID: \"bf7f6cfb-9c72-4be9-9177-cd14712e1c1e\") "
Nov 29 07:27:14 crc kubenswrapper[4731]: I1129 07:27:14.951075 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf7f6cfb-9c72-4be9-9177-cd14712e1c1e-logs" (OuterVolumeSpecName: "logs") pod "bf7f6cfb-9c72-4be9-9177-cd14712e1c1e" (UID: "bf7f6cfb-9c72-4be9-9177-cd14712e1c1e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 29 07:27:14 crc kubenswrapper[4731]: I1129 07:27:14.955052 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf7f6cfb-9c72-4be9-9177-cd14712e1c1e-kube-api-access-5rkvw" (OuterVolumeSpecName: "kube-api-access-5rkvw") pod "bf7f6cfb-9c72-4be9-9177-cd14712e1c1e" (UID: "bf7f6cfb-9c72-4be9-9177-cd14712e1c1e"). InnerVolumeSpecName "kube-api-access-5rkvw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 29 07:27:14 crc kubenswrapper[4731]: I1129 07:27:14.968955 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf7f6cfb-9c72-4be9-9177-cd14712e1c1e-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "bf7f6cfb-9c72-4be9-9177-cd14712e1c1e" (UID: "bf7f6cfb-9c72-4be9-9177-cd14712e1c1e"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:27:14 crc kubenswrapper[4731]: I1129 07:27:14.982601 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf7f6cfb-9c72-4be9-9177-cd14712e1c1e-scripts" (OuterVolumeSpecName: "scripts") pod "bf7f6cfb-9c72-4be9-9177-cd14712e1c1e" (UID: "bf7f6cfb-9c72-4be9-9177-cd14712e1c1e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 29 07:27:15 crc kubenswrapper[4731]: I1129 07:27:15.003477 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf7f6cfb-9c72-4be9-9177-cd14712e1c1e-config-data" (OuterVolumeSpecName: "config-data") pod "bf7f6cfb-9c72-4be9-9177-cd14712e1c1e" (UID: "bf7f6cfb-9c72-4be9-9177-cd14712e1c1e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 29 07:27:15 crc kubenswrapper[4731]: I1129 07:27:15.025645 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf7f6cfb-9c72-4be9-9177-cd14712e1c1e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bf7f6cfb-9c72-4be9-9177-cd14712e1c1e" (UID: "bf7f6cfb-9c72-4be9-9177-cd14712e1c1e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:27:15 crc kubenswrapper[4731]: I1129 07:27:15.036308 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf7f6cfb-9c72-4be9-9177-cd14712e1c1e-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "bf7f6cfb-9c72-4be9-9177-cd14712e1c1e" (UID: "bf7f6cfb-9c72-4be9-9177-cd14712e1c1e"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:27:15 crc kubenswrapper[4731]: I1129 07:27:15.052249 4731 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bf7f6cfb-9c72-4be9-9177-cd14712e1c1e-scripts\") on node \"crc\" DevicePath \"\""
Nov 29 07:27:15 crc kubenswrapper[4731]: I1129 07:27:15.052308 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5rkvw\" (UniqueName: \"kubernetes.io/projected/bf7f6cfb-9c72-4be9-9177-cd14712e1c1e-kube-api-access-5rkvw\") on node \"crc\" DevicePath \"\""
Nov 29 07:27:15 crc kubenswrapper[4731]: I1129 07:27:15.052324 4731 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf7f6cfb-9c72-4be9-9177-cd14712e1c1e-logs\") on node \"crc\" DevicePath \"\""
Nov 29 07:27:15 crc kubenswrapper[4731]: I1129 07:27:15.052337 4731 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf7f6cfb-9c72-4be9-9177-cd14712e1c1e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 29 07:27:15 crc kubenswrapper[4731]: I1129 07:27:15.052348 4731 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/bf7f6cfb-9c72-4be9-9177-cd14712e1c1e-horizon-secret-key\") on node \"crc\" DevicePath \"\""
Nov 29 07:27:15 crc kubenswrapper[4731]: I1129 07:27:15.052358 4731 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bf7f6cfb-9c72-4be9-9177-cd14712e1c1e-config-data\") on node \"crc\" DevicePath \"\""
Nov 29 07:27:15 crc kubenswrapper[4731]: I1129 07:27:15.052418 4731 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf7f6cfb-9c72-4be9-9177-cd14712e1c1e-horizon-tls-certs\") on node \"crc\" DevicePath \"\""
Nov 29 07:27:15 crc kubenswrapper[4731]: I1129 07:27:15.354600 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-84cd78f644-7wncn" event={"ID":"bf7f6cfb-9c72-4be9-9177-cd14712e1c1e","Type":"ContainerDied","Data":"6d0154573933657cb4ec63ae6bf40de2bdbbf019f897dba5c1fbf5b23f956123"}
Nov 29 07:27:15 crc kubenswrapper[4731]: I1129 07:27:15.354699 4731 scope.go:117] "RemoveContainer" containerID="233af86133f61c225ab9848a8308c125fc186329b7b7974a653e06432e81629a"
Nov 29 07:27:15 crc kubenswrapper[4731]: I1129 07:27:15.354697 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-84cd78f644-7wncn"
Nov 29 07:27:15 crc kubenswrapper[4731]: I1129 07:27:15.397332 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"67ebcb38-078b-4f76-b700-e77cb1525f7d","Type":"ContainerStarted","Data":"f1937f8ed9afec72b5bee2510b355c77a7df76f429e96a28babc3039308bfedc"}
Nov 29 07:27:15 crc kubenswrapper[4731]: I1129 07:27:15.399437 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Nov 29 07:27:15 crc kubenswrapper[4731]: I1129 07:27:15.449441 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e37ff080-fbd2-454b-861d-3660c1c17130","Type":"ContainerStarted","Data":"3e7e0c38f41ae61e100109dfcf1a22d7083dc92906da1c48fdcb7b7b66f57f23"}
Nov 29 07:27:15 crc kubenswrapper[4731]: I1129 07:27:15.513769 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-84cd78f644-7wncn"]
Nov 29 07:27:15 crc kubenswrapper[4731]: I1129 07:27:15.545784 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-84cd78f644-7wncn"]
Nov 29 07:27:15 crc kubenswrapper[4731]: I1129 07:27:15.550162 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.4600897809999998 podStartE2EDuration="3.55012388s" podCreationTimestamp="2025-11-29 07:27:12 +0000 UTC" firstStartedPulling="2025-11-29 07:27:13.362059802 +0000 UTC m=+1272.252420905" lastFinishedPulling="2025-11-29 07:27:14.452093821 +0000 UTC m=+1273.342455004" observedRunningTime="2025-11-29 07:27:15.482440463 +0000 UTC m=+1274.372801566" watchObservedRunningTime="2025-11-29 07:27:15.55012388 +0000 UTC m=+1274.440484983"
Nov 29 07:27:15 crc kubenswrapper[4731]: I1129 07:27:15.638636 4731 scope.go:117] "RemoveContainer" containerID="27fa026eb4be33e0970601908f2bd67b51eec9bb4bd79b5ad9e662b251422727"
Nov 29 07:27:15 crc kubenswrapper[4731]: I1129 07:27:15.657076 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-749fbbbcf-hcvbs"
Nov 29 07:27:15 crc kubenswrapper[4731]: I1129 07:27:15.822055 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf7f6cfb-9c72-4be9-9177-cd14712e1c1e" path="/var/lib/kubelet/pods/bf7f6cfb-9c72-4be9-9177-cd14712e1c1e/volumes"
Nov 29 07:27:17 crc kubenswrapper[4731]: I1129 07:27:17.084661 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-66b9c88964-2rnsc"
Nov 29 07:27:17 crc kubenswrapper[4731]: I1129 07:27:17.148465 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/56d6dd27-1657-4460-8dc9-cb18176d395a-config\") pod \"56d6dd27-1657-4460-8dc9-cb18176d395a\" (UID: \"56d6dd27-1657-4460-8dc9-cb18176d395a\") "
Nov 29 07:27:17 crc kubenswrapper[4731]: I1129 07:27:17.152657 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56d6dd27-1657-4460-8dc9-cb18176d395a-combined-ca-bundle\") pod \"56d6dd27-1657-4460-8dc9-cb18176d395a\" (UID: \"56d6dd27-1657-4460-8dc9-cb18176d395a\") "
Nov 29 07:27:17 crc kubenswrapper[4731]: I1129 07:27:17.152844 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/56d6dd27-1657-4460-8dc9-cb18176d395a-httpd-config\") pod \"56d6dd27-1657-4460-8dc9-cb18176d395a\" (UID: \"56d6dd27-1657-4460-8dc9-cb18176d395a\") "
Nov 29 07:27:17 crc kubenswrapper[4731]: I1129 07:27:17.152953 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/56d6dd27-1657-4460-8dc9-cb18176d395a-ovndb-tls-certs\") pod \"56d6dd27-1657-4460-8dc9-cb18176d395a\" (UID: \"56d6dd27-1657-4460-8dc9-cb18176d395a\") "
Nov 29 07:27:17 crc kubenswrapper[4731]: I1129 07:27:17.153132 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-swbs6\" (UniqueName: \"kubernetes.io/projected/56d6dd27-1657-4460-8dc9-cb18176d395a-kube-api-access-swbs6\") pod \"56d6dd27-1657-4460-8dc9-cb18176d395a\" (UID: \"56d6dd27-1657-4460-8dc9-cb18176d395a\") "
Nov 29 07:27:17 crc kubenswrapper[4731]: I1129 07:27:17.162929 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56d6dd27-1657-4460-8dc9-cb18176d395a-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "56d6dd27-1657-4460-8dc9-cb18176d395a" (UID: "56d6dd27-1657-4460-8dc9-cb18176d395a"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:27:17 crc kubenswrapper[4731]: I1129 07:27:17.166140 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56d6dd27-1657-4460-8dc9-cb18176d395a-kube-api-access-swbs6" (OuterVolumeSpecName: "kube-api-access-swbs6") pod "56d6dd27-1657-4460-8dc9-cb18176d395a" (UID: "56d6dd27-1657-4460-8dc9-cb18176d395a"). InnerVolumeSpecName "kube-api-access-swbs6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 29 07:27:17 crc kubenswrapper[4731]: I1129 07:27:17.228980 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56d6dd27-1657-4460-8dc9-cb18176d395a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "56d6dd27-1657-4460-8dc9-cb18176d395a" (UID: "56d6dd27-1657-4460-8dc9-cb18176d395a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:27:17 crc kubenswrapper[4731]: I1129 07:27:17.241786 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56d6dd27-1657-4460-8dc9-cb18176d395a-config" (OuterVolumeSpecName: "config") pod "56d6dd27-1657-4460-8dc9-cb18176d395a" (UID: "56d6dd27-1657-4460-8dc9-cb18176d395a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:27:17 crc kubenswrapper[4731]: I1129 07:27:17.256556 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-swbs6\" (UniqueName: \"kubernetes.io/projected/56d6dd27-1657-4460-8dc9-cb18176d395a-kube-api-access-swbs6\") on node \"crc\" DevicePath \"\""
Nov 29 07:27:17 crc kubenswrapper[4731]: I1129 07:27:17.256631 4731 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/56d6dd27-1657-4460-8dc9-cb18176d395a-config\") on node \"crc\" DevicePath \"\""
Nov 29 07:27:17 crc kubenswrapper[4731]: I1129 07:27:17.256644 4731 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56d6dd27-1657-4460-8dc9-cb18176d395a-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 29 07:27:17 crc kubenswrapper[4731]: I1129 07:27:17.256654 4731 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/56d6dd27-1657-4460-8dc9-cb18176d395a-httpd-config\") on node \"crc\" DevicePath \"\""
Nov 29 07:27:17 crc kubenswrapper[4731]: I1129 07:27:17.289718 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56d6dd27-1657-4460-8dc9-cb18176d395a-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "56d6dd27-1657-4460-8dc9-cb18176d395a" (UID: "56d6dd27-1657-4460-8dc9-cb18176d395a"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:27:17 crc kubenswrapper[4731]: I1129 07:27:17.360596 4731 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/56d6dd27-1657-4460-8dc9-cb18176d395a-ovndb-tls-certs\") on node \"crc\" DevicePath \"\""
Nov 29 07:27:17 crc kubenswrapper[4731]: I1129 07:27:17.486476 4731 generic.go:334] "Generic (PLEG): container finished" podID="56d6dd27-1657-4460-8dc9-cb18176d395a" containerID="992f52c28773ba224397bb0cb0d37eebcfaafc80c0523017016b379618463758" exitCode=0
Nov 29 07:27:17 crc kubenswrapper[4731]: I1129 07:27:17.486600 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-66b9c88964-2rnsc" event={"ID":"56d6dd27-1657-4460-8dc9-cb18176d395a","Type":"ContainerDied","Data":"992f52c28773ba224397bb0cb0d37eebcfaafc80c0523017016b379618463758"}
Nov 29 07:27:17 crc kubenswrapper[4731]: I1129 07:27:17.486650 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-66b9c88964-2rnsc" event={"ID":"56d6dd27-1657-4460-8dc9-cb18176d395a","Type":"ContainerDied","Data":"e5c29a65c39deb7fe8edf06fa90a5502017c276a2423f914116a937ee5b89306"}
Nov 29 07:27:17 crc kubenswrapper[4731]: I1129 07:27:17.486675 4731 scope.go:117] "RemoveContainer" containerID="a1a21951772ff613daba14cce21966185304d82830e89f9c511d4b48f162f49e"
Nov 29 07:27:17 crc kubenswrapper[4731]: I1129 07:27:17.486887 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-66b9c88964-2rnsc"
Nov 29 07:27:17 crc kubenswrapper[4731]: I1129 07:27:17.502849 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e37ff080-fbd2-454b-861d-3660c1c17130","Type":"ContainerStarted","Data":"332d292e1420f2a40433920316bc72b3599543e5d0392028e6c7c83dc39431e9"}
Nov 29 07:27:17 crc kubenswrapper[4731]: I1129 07:27:17.503416 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e37ff080-fbd2-454b-861d-3660c1c17130" containerName="ceilometer-central-agent" containerID="cri-o://ab33daf7c32c756649a32df571a2d289b6af0679393eccc813963a562f2b1af0" gracePeriod=30
Nov 29 07:27:17 crc kubenswrapper[4731]: I1129 07:27:17.503532 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Nov 29 07:27:17 crc kubenswrapper[4731]: I1129 07:27:17.503611 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e37ff080-fbd2-454b-861d-3660c1c17130" containerName="proxy-httpd" containerID="cri-o://332d292e1420f2a40433920316bc72b3599543e5d0392028e6c7c83dc39431e9" gracePeriod=30
Nov 29 07:27:17 crc kubenswrapper[4731]: I1129 07:27:17.503666 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e37ff080-fbd2-454b-861d-3660c1c17130" containerName="sg-core" containerID="cri-o://3e7e0c38f41ae61e100109dfcf1a22d7083dc92906da1c48fdcb7b7b66f57f23" gracePeriod=30
Nov 29 07:27:17 crc kubenswrapper[4731]: I1129 07:27:17.503718 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e37ff080-fbd2-454b-861d-3660c1c17130" containerName="ceilometer-notification-agent" containerID="cri-o://f373f65f0c4b1c5e6c342bce1b3bf2d36ac2fdc543d54d8a56d63ea802f84019" gracePeriod=30
Nov 29 07:27:17 crc kubenswrapper[4731]: I1129 07:27:17.538312 4731
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-66b9c88964-2rnsc"] Nov 29 07:27:17 crc kubenswrapper[4731]: I1129 07:27:17.563927 4731 scope.go:117] "RemoveContainer" containerID="992f52c28773ba224397bb0cb0d37eebcfaafc80c0523017016b379618463758" Nov 29 07:27:17 crc kubenswrapper[4731]: I1129 07:27:17.585103 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-66b9c88964-2rnsc"] Nov 29 07:27:17 crc kubenswrapper[4731]: I1129 07:27:17.596168 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.389484702 podStartE2EDuration="7.596129551s" podCreationTimestamp="2025-11-29 07:27:10 +0000 UTC" firstStartedPulling="2025-11-29 07:27:11.439206902 +0000 UTC m=+1270.329568005" lastFinishedPulling="2025-11-29 07:27:16.645851741 +0000 UTC m=+1275.536212854" observedRunningTime="2025-11-29 07:27:17.550396906 +0000 UTC m=+1276.440758009" watchObservedRunningTime="2025-11-29 07:27:17.596129551 +0000 UTC m=+1276.486490654" Nov 29 07:27:17 crc kubenswrapper[4731]: I1129 07:27:17.634315 4731 scope.go:117] "RemoveContainer" containerID="a1a21951772ff613daba14cce21966185304d82830e89f9c511d4b48f162f49e" Nov 29 07:27:17 crc kubenswrapper[4731]: E1129 07:27:17.635416 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1a21951772ff613daba14cce21966185304d82830e89f9c511d4b48f162f49e\": container with ID starting with a1a21951772ff613daba14cce21966185304d82830e89f9c511d4b48f162f49e not found: ID does not exist" containerID="a1a21951772ff613daba14cce21966185304d82830e89f9c511d4b48f162f49e" Nov 29 07:27:17 crc kubenswrapper[4731]: I1129 07:27:17.635480 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1a21951772ff613daba14cce21966185304d82830e89f9c511d4b48f162f49e"} err="failed to get container status 
\"a1a21951772ff613daba14cce21966185304d82830e89f9c511d4b48f162f49e\": rpc error: code = NotFound desc = could not find container \"a1a21951772ff613daba14cce21966185304d82830e89f9c511d4b48f162f49e\": container with ID starting with a1a21951772ff613daba14cce21966185304d82830e89f9c511d4b48f162f49e not found: ID does not exist" Nov 29 07:27:17 crc kubenswrapper[4731]: I1129 07:27:17.635521 4731 scope.go:117] "RemoveContainer" containerID="992f52c28773ba224397bb0cb0d37eebcfaafc80c0523017016b379618463758" Nov 29 07:27:17 crc kubenswrapper[4731]: E1129 07:27:17.636178 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"992f52c28773ba224397bb0cb0d37eebcfaafc80c0523017016b379618463758\": container with ID starting with 992f52c28773ba224397bb0cb0d37eebcfaafc80c0523017016b379618463758 not found: ID does not exist" containerID="992f52c28773ba224397bb0cb0d37eebcfaafc80c0523017016b379618463758" Nov 29 07:27:17 crc kubenswrapper[4731]: I1129 07:27:17.636209 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"992f52c28773ba224397bb0cb0d37eebcfaafc80c0523017016b379618463758"} err="failed to get container status \"992f52c28773ba224397bb0cb0d37eebcfaafc80c0523017016b379618463758\": rpc error: code = NotFound desc = could not find container \"992f52c28773ba224397bb0cb0d37eebcfaafc80c0523017016b379618463758\": container with ID starting with 992f52c28773ba224397bb0cb0d37eebcfaafc80c0523017016b379618463758 not found: ID does not exist" Nov 29 07:27:17 crc kubenswrapper[4731]: I1129 07:27:17.821125 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56d6dd27-1657-4460-8dc9-cb18176d395a" path="/var/lib/kubelet/pods/56d6dd27-1657-4460-8dc9-cb18176d395a/volumes" Nov 29 07:27:18 crc kubenswrapper[4731]: I1129 07:27:18.516316 4731 generic.go:334] "Generic (PLEG): container finished" podID="e37ff080-fbd2-454b-861d-3660c1c17130" 
containerID="332d292e1420f2a40433920316bc72b3599543e5d0392028e6c7c83dc39431e9" exitCode=0 Nov 29 07:27:18 crc kubenswrapper[4731]: I1129 07:27:18.516387 4731 generic.go:334] "Generic (PLEG): container finished" podID="e37ff080-fbd2-454b-861d-3660c1c17130" containerID="3e7e0c38f41ae61e100109dfcf1a22d7083dc92906da1c48fdcb7b7b66f57f23" exitCode=2 Nov 29 07:27:18 crc kubenswrapper[4731]: I1129 07:27:18.516377 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e37ff080-fbd2-454b-861d-3660c1c17130","Type":"ContainerDied","Data":"332d292e1420f2a40433920316bc72b3599543e5d0392028e6c7c83dc39431e9"} Nov 29 07:27:18 crc kubenswrapper[4731]: I1129 07:27:18.516463 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e37ff080-fbd2-454b-861d-3660c1c17130","Type":"ContainerDied","Data":"3e7e0c38f41ae61e100109dfcf1a22d7083dc92906da1c48fdcb7b7b66f57f23"} Nov 29 07:27:18 crc kubenswrapper[4731]: I1129 07:27:18.516483 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e37ff080-fbd2-454b-861d-3660c1c17130","Type":"ContainerDied","Data":"f373f65f0c4b1c5e6c342bce1b3bf2d36ac2fdc543d54d8a56d63ea802f84019"} Nov 29 07:27:18 crc kubenswrapper[4731]: I1129 07:27:18.516398 4731 generic.go:334] "Generic (PLEG): container finished" podID="e37ff080-fbd2-454b-861d-3660c1c17130" containerID="f373f65f0c4b1c5e6c342bce1b3bf2d36ac2fdc543d54d8a56d63ea802f84019" exitCode=0 Nov 29 07:27:20 crc kubenswrapper[4731]: I1129 07:27:20.474412 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-749fbbbcf-hcvbs" Nov 29 07:27:22 crc kubenswrapper[4731]: I1129 07:27:22.780893 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Nov 29 07:27:24 crc kubenswrapper[4731]: E1129 07:27:24.251918 4731 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" 
err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podad9b3a1d_2698_405e_b94a_45d96efd0400.slice/crio-2ce9effe3d3eb311109fc98cae51a9f7136c2928a5032c0de973c7a0b18d1511\": RecentStats: unable to find data in memory cache]" Nov 29 07:27:25 crc kubenswrapper[4731]: I1129 07:27:25.506175 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 29 07:27:25 crc kubenswrapper[4731]: I1129 07:27:25.506845 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="f6b502e2-80f2-44f7-9665-3666c7a7c56b" containerName="glance-log" containerID="cri-o://0cd29542eb0cbf38d9f18ea343561f30a931247f90fafa7d3f804d5b6a348413" gracePeriod=30 Nov 29 07:27:25 crc kubenswrapper[4731]: I1129 07:27:25.506963 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="f6b502e2-80f2-44f7-9665-3666c7a7c56b" containerName="glance-httpd" containerID="cri-o://00540843b2a77632ad69629391425b932bbb976084e2c1e17bffe6067d5fff6b" gracePeriod=30 Nov 29 07:27:26 crc kubenswrapper[4731]: I1129 07:27:26.629460 4731 generic.go:334] "Generic (PLEG): container finished" podID="f6b502e2-80f2-44f7-9665-3666c7a7c56b" containerID="0cd29542eb0cbf38d9f18ea343561f30a931247f90fafa7d3f804d5b6a348413" exitCode=143 Nov 29 07:27:26 crc kubenswrapper[4731]: I1129 07:27:26.629528 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f6b502e2-80f2-44f7-9665-3666c7a7c56b","Type":"ContainerDied","Data":"0cd29542eb0cbf38d9f18ea343561f30a931247f90fafa7d3f804d5b6a348413"} Nov 29 07:27:27 crc kubenswrapper[4731]: I1129 07:27:27.274712 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 29 07:27:27 crc kubenswrapper[4731]: I1129 07:27:27.320069 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e37ff080-fbd2-454b-861d-3660c1c17130-sg-core-conf-yaml\") pod \"e37ff080-fbd2-454b-861d-3660c1c17130\" (UID: \"e37ff080-fbd2-454b-861d-3660c1c17130\") " Nov 29 07:27:27 crc kubenswrapper[4731]: I1129 07:27:27.320215 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g7f89\" (UniqueName: \"kubernetes.io/projected/e37ff080-fbd2-454b-861d-3660c1c17130-kube-api-access-g7f89\") pod \"e37ff080-fbd2-454b-861d-3660c1c17130\" (UID: \"e37ff080-fbd2-454b-861d-3660c1c17130\") " Nov 29 07:27:27 crc kubenswrapper[4731]: I1129 07:27:27.320287 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e37ff080-fbd2-454b-861d-3660c1c17130-combined-ca-bundle\") pod \"e37ff080-fbd2-454b-861d-3660c1c17130\" (UID: \"e37ff080-fbd2-454b-861d-3660c1c17130\") " Nov 29 07:27:27 crc kubenswrapper[4731]: I1129 07:27:27.320408 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e37ff080-fbd2-454b-861d-3660c1c17130-config-data\") pod \"e37ff080-fbd2-454b-861d-3660c1c17130\" (UID: \"e37ff080-fbd2-454b-861d-3660c1c17130\") " Nov 29 07:27:27 crc kubenswrapper[4731]: I1129 07:27:27.320481 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e37ff080-fbd2-454b-861d-3660c1c17130-scripts\") pod \"e37ff080-fbd2-454b-861d-3660c1c17130\" (UID: \"e37ff080-fbd2-454b-861d-3660c1c17130\") " Nov 29 07:27:27 crc kubenswrapper[4731]: I1129 07:27:27.320658 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/e37ff080-fbd2-454b-861d-3660c1c17130-run-httpd\") pod \"e37ff080-fbd2-454b-861d-3660c1c17130\" (UID: \"e37ff080-fbd2-454b-861d-3660c1c17130\") " Nov 29 07:27:27 crc kubenswrapper[4731]: I1129 07:27:27.320701 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e37ff080-fbd2-454b-861d-3660c1c17130-log-httpd\") pod \"e37ff080-fbd2-454b-861d-3660c1c17130\" (UID: \"e37ff080-fbd2-454b-861d-3660c1c17130\") " Nov 29 07:27:27 crc kubenswrapper[4731]: I1129 07:27:27.321933 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e37ff080-fbd2-454b-861d-3660c1c17130-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e37ff080-fbd2-454b-861d-3660c1c17130" (UID: "e37ff080-fbd2-454b-861d-3660c1c17130"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:27:27 crc kubenswrapper[4731]: I1129 07:27:27.327568 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e37ff080-fbd2-454b-861d-3660c1c17130-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e37ff080-fbd2-454b-861d-3660c1c17130" (UID: "e37ff080-fbd2-454b-861d-3660c1c17130"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:27:27 crc kubenswrapper[4731]: I1129 07:27:27.330863 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e37ff080-fbd2-454b-861d-3660c1c17130-scripts" (OuterVolumeSpecName: "scripts") pod "e37ff080-fbd2-454b-861d-3660c1c17130" (UID: "e37ff080-fbd2-454b-861d-3660c1c17130"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:27:27 crc kubenswrapper[4731]: I1129 07:27:27.333095 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e37ff080-fbd2-454b-861d-3660c1c17130-kube-api-access-g7f89" (OuterVolumeSpecName: "kube-api-access-g7f89") pod "e37ff080-fbd2-454b-861d-3660c1c17130" (UID: "e37ff080-fbd2-454b-861d-3660c1c17130"). InnerVolumeSpecName "kube-api-access-g7f89". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:27:27 crc kubenswrapper[4731]: I1129 07:27:27.364269 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e37ff080-fbd2-454b-861d-3660c1c17130-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e37ff080-fbd2-454b-861d-3660c1c17130" (UID: "e37ff080-fbd2-454b-861d-3660c1c17130"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:27:27 crc kubenswrapper[4731]: I1129 07:27:27.424204 4731 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e37ff080-fbd2-454b-861d-3660c1c17130-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:27:27 crc kubenswrapper[4731]: I1129 07:27:27.424248 4731 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e37ff080-fbd2-454b-861d-3660c1c17130-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 29 07:27:27 crc kubenswrapper[4731]: I1129 07:27:27.424264 4731 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e37ff080-fbd2-454b-861d-3660c1c17130-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 29 07:27:27 crc kubenswrapper[4731]: I1129 07:27:27.424277 4731 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e37ff080-fbd2-454b-861d-3660c1c17130-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" 
Nov 29 07:27:27 crc kubenswrapper[4731]: I1129 07:27:27.424293 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g7f89\" (UniqueName: \"kubernetes.io/projected/e37ff080-fbd2-454b-861d-3660c1c17130-kube-api-access-g7f89\") on node \"crc\" DevicePath \"\"" Nov 29 07:27:27 crc kubenswrapper[4731]: I1129 07:27:27.425269 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e37ff080-fbd2-454b-861d-3660c1c17130-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e37ff080-fbd2-454b-861d-3660c1c17130" (UID: "e37ff080-fbd2-454b-861d-3660c1c17130"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:27:27 crc kubenswrapper[4731]: I1129 07:27:27.445693 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e37ff080-fbd2-454b-861d-3660c1c17130-config-data" (OuterVolumeSpecName: "config-data") pod "e37ff080-fbd2-454b-861d-3660c1c17130" (UID: "e37ff080-fbd2-454b-861d-3660c1c17130"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:27:27 crc kubenswrapper[4731]: I1129 07:27:27.526668 4731 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e37ff080-fbd2-454b-861d-3660c1c17130-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:27:27 crc kubenswrapper[4731]: I1129 07:27:27.526734 4731 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e37ff080-fbd2-454b-861d-3660c1c17130-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:27:27 crc kubenswrapper[4731]: I1129 07:27:27.644632 4731 generic.go:334] "Generic (PLEG): container finished" podID="e37ff080-fbd2-454b-861d-3660c1c17130" containerID="ab33daf7c32c756649a32df571a2d289b6af0679393eccc813963a562f2b1af0" exitCode=0 Nov 29 07:27:27 crc kubenswrapper[4731]: I1129 07:27:27.644742 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e37ff080-fbd2-454b-861d-3660c1c17130","Type":"ContainerDied","Data":"ab33daf7c32c756649a32df571a2d289b6af0679393eccc813963a562f2b1af0"} Nov 29 07:27:27 crc kubenswrapper[4731]: I1129 07:27:27.644813 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e37ff080-fbd2-454b-861d-3660c1c17130","Type":"ContainerDied","Data":"4b116a464374350b636d74e2aeb044aad93a3a77c50ee957741155072e2c8b6e"} Nov 29 07:27:27 crc kubenswrapper[4731]: I1129 07:27:27.644845 4731 scope.go:117] "RemoveContainer" containerID="332d292e1420f2a40433920316bc72b3599543e5d0392028e6c7c83dc39431e9" Nov 29 07:27:27 crc kubenswrapper[4731]: I1129 07:27:27.646215 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 29 07:27:27 crc kubenswrapper[4731]: I1129 07:27:27.648293 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"f31d074a-cf1e-488e-9816-8cc25ab12d7f","Type":"ContainerStarted","Data":"e6b60b08e77ef89f7e652c7add28866018c0543d3d77ffcdcad45ff1a34325ab"} Nov 29 07:27:27 crc kubenswrapper[4731]: I1129 07:27:27.669615 4731 scope.go:117] "RemoveContainer" containerID="3e7e0c38f41ae61e100109dfcf1a22d7083dc92906da1c48fdcb7b7b66f57f23" Nov 29 07:27:27 crc kubenswrapper[4731]: I1129 07:27:27.688224 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.022362982 podStartE2EDuration="33.688193902s" podCreationTimestamp="2025-11-29 07:26:54 +0000 UTC" firstStartedPulling="2025-11-29 07:26:55.093908251 +0000 UTC m=+1253.984269354" lastFinishedPulling="2025-11-29 07:27:26.759739171 +0000 UTC m=+1285.650100274" observedRunningTime="2025-11-29 07:27:27.681801528 +0000 UTC m=+1286.572162631" watchObservedRunningTime="2025-11-29 07:27:27.688193902 +0000 UTC m=+1286.578555005" Nov 29 07:27:27 crc kubenswrapper[4731]: I1129 07:27:27.708386 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:27:27 crc kubenswrapper[4731]: I1129 07:27:27.714722 4731 scope.go:117] "RemoveContainer" containerID="f373f65f0c4b1c5e6c342bce1b3bf2d36ac2fdc543d54d8a56d63ea802f84019" Nov 29 07:27:27 crc kubenswrapper[4731]: I1129 07:27:27.733777 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:27:27 crc kubenswrapper[4731]: I1129 07:27:27.753469 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:27:27 crc kubenswrapper[4731]: E1129 07:27:27.754240 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf7f6cfb-9c72-4be9-9177-cd14712e1c1e" containerName="horizon-log" Nov 29 07:27:27 crc 
kubenswrapper[4731]: I1129 07:27:27.754270 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf7f6cfb-9c72-4be9-9177-cd14712e1c1e" containerName="horizon-log" Nov 29 07:27:27 crc kubenswrapper[4731]: E1129 07:27:27.754298 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e37ff080-fbd2-454b-861d-3660c1c17130" containerName="ceilometer-notification-agent" Nov 29 07:27:27 crc kubenswrapper[4731]: I1129 07:27:27.754307 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="e37ff080-fbd2-454b-861d-3660c1c17130" containerName="ceilometer-notification-agent" Nov 29 07:27:27 crc kubenswrapper[4731]: E1129 07:27:27.754322 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e37ff080-fbd2-454b-861d-3660c1c17130" containerName="sg-core" Nov 29 07:27:27 crc kubenswrapper[4731]: I1129 07:27:27.754331 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="e37ff080-fbd2-454b-861d-3660c1c17130" containerName="sg-core" Nov 29 07:27:27 crc kubenswrapper[4731]: E1129 07:27:27.754342 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56d6dd27-1657-4460-8dc9-cb18176d395a" containerName="neutron-api" Nov 29 07:27:27 crc kubenswrapper[4731]: I1129 07:27:27.754352 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="56d6dd27-1657-4460-8dc9-cb18176d395a" containerName="neutron-api" Nov 29 07:27:27 crc kubenswrapper[4731]: E1129 07:27:27.754371 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e37ff080-fbd2-454b-861d-3660c1c17130" containerName="proxy-httpd" Nov 29 07:27:27 crc kubenswrapper[4731]: I1129 07:27:27.754378 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="e37ff080-fbd2-454b-861d-3660c1c17130" containerName="proxy-httpd" Nov 29 07:27:27 crc kubenswrapper[4731]: E1129 07:27:27.754394 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e37ff080-fbd2-454b-861d-3660c1c17130" containerName="ceilometer-central-agent" Nov 29 07:27:27 crc 
kubenswrapper[4731]: I1129 07:27:27.754402 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="e37ff080-fbd2-454b-861d-3660c1c17130" containerName="ceilometer-central-agent" Nov 29 07:27:27 crc kubenswrapper[4731]: E1129 07:27:27.754426 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf7f6cfb-9c72-4be9-9177-cd14712e1c1e" containerName="horizon" Nov 29 07:27:27 crc kubenswrapper[4731]: I1129 07:27:27.754435 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf7f6cfb-9c72-4be9-9177-cd14712e1c1e" containerName="horizon" Nov 29 07:27:27 crc kubenswrapper[4731]: E1129 07:27:27.754451 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56d6dd27-1657-4460-8dc9-cb18176d395a" containerName="neutron-httpd" Nov 29 07:27:27 crc kubenswrapper[4731]: I1129 07:27:27.754459 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="56d6dd27-1657-4460-8dc9-cb18176d395a" containerName="neutron-httpd" Nov 29 07:27:27 crc kubenswrapper[4731]: I1129 07:27:27.754772 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="e37ff080-fbd2-454b-861d-3660c1c17130" containerName="ceilometer-central-agent" Nov 29 07:27:27 crc kubenswrapper[4731]: I1129 07:27:27.754786 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="e37ff080-fbd2-454b-861d-3660c1c17130" containerName="ceilometer-notification-agent" Nov 29 07:27:27 crc kubenswrapper[4731]: I1129 07:27:27.754806 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf7f6cfb-9c72-4be9-9177-cd14712e1c1e" containerName="horizon-log" Nov 29 07:27:27 crc kubenswrapper[4731]: I1129 07:27:27.754821 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="56d6dd27-1657-4460-8dc9-cb18176d395a" containerName="neutron-api" Nov 29 07:27:27 crc kubenswrapper[4731]: I1129 07:27:27.754835 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="e37ff080-fbd2-454b-861d-3660c1c17130" containerName="proxy-httpd" Nov 29 07:27:27 
crc kubenswrapper[4731]: I1129 07:27:27.754848 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="56d6dd27-1657-4460-8dc9-cb18176d395a" containerName="neutron-httpd" Nov 29 07:27:27 crc kubenswrapper[4731]: I1129 07:27:27.754866 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf7f6cfb-9c72-4be9-9177-cd14712e1c1e" containerName="horizon" Nov 29 07:27:27 crc kubenswrapper[4731]: I1129 07:27:27.754878 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="e37ff080-fbd2-454b-861d-3660c1c17130" containerName="sg-core" Nov 29 07:27:27 crc kubenswrapper[4731]: I1129 07:27:27.757548 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 29 07:27:27 crc kubenswrapper[4731]: I1129 07:27:27.762070 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 29 07:27:27 crc kubenswrapper[4731]: I1129 07:27:27.762235 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 29 07:27:27 crc kubenswrapper[4731]: I1129 07:27:27.762462 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 29 07:27:27 crc kubenswrapper[4731]: I1129 07:27:27.762673 4731 scope.go:117] "RemoveContainer" containerID="ab33daf7c32c756649a32df571a2d289b6af0679393eccc813963a562f2b1af0" Nov 29 07:27:27 crc kubenswrapper[4731]: I1129 07:27:27.767740 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:27:27 crc kubenswrapper[4731]: I1129 07:27:27.824107 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e37ff080-fbd2-454b-861d-3660c1c17130" path="/var/lib/kubelet/pods/e37ff080-fbd2-454b-861d-3660c1c17130/volumes" Nov 29 07:27:27 crc kubenswrapper[4731]: I1129 07:27:27.845552 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22\") " pod="openstack/ceilometer-0" Nov 29 07:27:27 crc kubenswrapper[4731]: I1129 07:27:27.845642 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22-log-httpd\") pod \"ceilometer-0\" (UID: \"7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22\") " pod="openstack/ceilometer-0" Nov 29 07:27:27 crc kubenswrapper[4731]: I1129 07:27:27.845735 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22-config-data\") pod \"ceilometer-0\" (UID: \"7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22\") " pod="openstack/ceilometer-0" Nov 29 07:27:27 crc kubenswrapper[4731]: I1129 07:27:27.845792 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22-scripts\") pod \"ceilometer-0\" (UID: \"7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22\") " pod="openstack/ceilometer-0" Nov 29 07:27:27 crc kubenswrapper[4731]: I1129 07:27:27.845854 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22\") " pod="openstack/ceilometer-0" Nov 29 07:27:27 crc kubenswrapper[4731]: I1129 07:27:27.846036 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdwfw\" (UniqueName: \"kubernetes.io/projected/7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22-kube-api-access-gdwfw\") pod 
\"ceilometer-0\" (UID: \"7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22\") " pod="openstack/ceilometer-0" Nov 29 07:27:27 crc kubenswrapper[4731]: I1129 07:27:27.846181 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22\") " pod="openstack/ceilometer-0" Nov 29 07:27:27 crc kubenswrapper[4731]: I1129 07:27:27.846359 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22-run-httpd\") pod \"ceilometer-0\" (UID: \"7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22\") " pod="openstack/ceilometer-0" Nov 29 07:27:27 crc kubenswrapper[4731]: I1129 07:27:27.857266 4731 scope.go:117] "RemoveContainer" containerID="332d292e1420f2a40433920316bc72b3599543e5d0392028e6c7c83dc39431e9" Nov 29 07:27:27 crc kubenswrapper[4731]: E1129 07:27:27.857945 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"332d292e1420f2a40433920316bc72b3599543e5d0392028e6c7c83dc39431e9\": container with ID starting with 332d292e1420f2a40433920316bc72b3599543e5d0392028e6c7c83dc39431e9 not found: ID does not exist" containerID="332d292e1420f2a40433920316bc72b3599543e5d0392028e6c7c83dc39431e9" Nov 29 07:27:27 crc kubenswrapper[4731]: I1129 07:27:27.857988 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"332d292e1420f2a40433920316bc72b3599543e5d0392028e6c7c83dc39431e9"} err="failed to get container status \"332d292e1420f2a40433920316bc72b3599543e5d0392028e6c7c83dc39431e9\": rpc error: code = NotFound desc = could not find container \"332d292e1420f2a40433920316bc72b3599543e5d0392028e6c7c83dc39431e9\": container with ID starting with 
332d292e1420f2a40433920316bc72b3599543e5d0392028e6c7c83dc39431e9 not found: ID does not exist" Nov 29 07:27:27 crc kubenswrapper[4731]: I1129 07:27:27.858022 4731 scope.go:117] "RemoveContainer" containerID="3e7e0c38f41ae61e100109dfcf1a22d7083dc92906da1c48fdcb7b7b66f57f23" Nov 29 07:27:27 crc kubenswrapper[4731]: E1129 07:27:27.858317 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e7e0c38f41ae61e100109dfcf1a22d7083dc92906da1c48fdcb7b7b66f57f23\": container with ID starting with 3e7e0c38f41ae61e100109dfcf1a22d7083dc92906da1c48fdcb7b7b66f57f23 not found: ID does not exist" containerID="3e7e0c38f41ae61e100109dfcf1a22d7083dc92906da1c48fdcb7b7b66f57f23" Nov 29 07:27:27 crc kubenswrapper[4731]: I1129 07:27:27.858351 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e7e0c38f41ae61e100109dfcf1a22d7083dc92906da1c48fdcb7b7b66f57f23"} err="failed to get container status \"3e7e0c38f41ae61e100109dfcf1a22d7083dc92906da1c48fdcb7b7b66f57f23\": rpc error: code = NotFound desc = could not find container \"3e7e0c38f41ae61e100109dfcf1a22d7083dc92906da1c48fdcb7b7b66f57f23\": container with ID starting with 3e7e0c38f41ae61e100109dfcf1a22d7083dc92906da1c48fdcb7b7b66f57f23 not found: ID does not exist" Nov 29 07:27:27 crc kubenswrapper[4731]: I1129 07:27:27.858374 4731 scope.go:117] "RemoveContainer" containerID="f373f65f0c4b1c5e6c342bce1b3bf2d36ac2fdc543d54d8a56d63ea802f84019" Nov 29 07:27:27 crc kubenswrapper[4731]: E1129 07:27:27.858637 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f373f65f0c4b1c5e6c342bce1b3bf2d36ac2fdc543d54d8a56d63ea802f84019\": container with ID starting with f373f65f0c4b1c5e6c342bce1b3bf2d36ac2fdc543d54d8a56d63ea802f84019 not found: ID does not exist" containerID="f373f65f0c4b1c5e6c342bce1b3bf2d36ac2fdc543d54d8a56d63ea802f84019" Nov 29 07:27:27 crc 
kubenswrapper[4731]: I1129 07:27:27.858665 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f373f65f0c4b1c5e6c342bce1b3bf2d36ac2fdc543d54d8a56d63ea802f84019"} err="failed to get container status \"f373f65f0c4b1c5e6c342bce1b3bf2d36ac2fdc543d54d8a56d63ea802f84019\": rpc error: code = NotFound desc = could not find container \"f373f65f0c4b1c5e6c342bce1b3bf2d36ac2fdc543d54d8a56d63ea802f84019\": container with ID starting with f373f65f0c4b1c5e6c342bce1b3bf2d36ac2fdc543d54d8a56d63ea802f84019 not found: ID does not exist" Nov 29 07:27:27 crc kubenswrapper[4731]: I1129 07:27:27.858692 4731 scope.go:117] "RemoveContainer" containerID="ab33daf7c32c756649a32df571a2d289b6af0679393eccc813963a562f2b1af0" Nov 29 07:27:27 crc kubenswrapper[4731]: E1129 07:27:27.858909 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab33daf7c32c756649a32df571a2d289b6af0679393eccc813963a562f2b1af0\": container with ID starting with ab33daf7c32c756649a32df571a2d289b6af0679393eccc813963a562f2b1af0 not found: ID does not exist" containerID="ab33daf7c32c756649a32df571a2d289b6af0679393eccc813963a562f2b1af0" Nov 29 07:27:27 crc kubenswrapper[4731]: I1129 07:27:27.858938 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab33daf7c32c756649a32df571a2d289b6af0679393eccc813963a562f2b1af0"} err="failed to get container status \"ab33daf7c32c756649a32df571a2d289b6af0679393eccc813963a562f2b1af0\": rpc error: code = NotFound desc = could not find container \"ab33daf7c32c756649a32df571a2d289b6af0679393eccc813963a562f2b1af0\": container with ID starting with ab33daf7c32c756649a32df571a2d289b6af0679393eccc813963a562f2b1af0 not found: ID does not exist" Nov 29 07:27:27 crc kubenswrapper[4731]: I1129 07:27:27.948502 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdwfw\" (UniqueName: 
\"kubernetes.io/projected/7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22-kube-api-access-gdwfw\") pod \"ceilometer-0\" (UID: \"7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22\") " pod="openstack/ceilometer-0" Nov 29 07:27:27 crc kubenswrapper[4731]: I1129 07:27:27.948557 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22\") " pod="openstack/ceilometer-0" Nov 29 07:27:27 crc kubenswrapper[4731]: I1129 07:27:27.948644 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22-run-httpd\") pod \"ceilometer-0\" (UID: \"7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22\") " pod="openstack/ceilometer-0" Nov 29 07:27:27 crc kubenswrapper[4731]: I1129 07:27:27.948722 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22\") " pod="openstack/ceilometer-0" Nov 29 07:27:27 crc kubenswrapper[4731]: I1129 07:27:27.948754 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22-log-httpd\") pod \"ceilometer-0\" (UID: \"7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22\") " pod="openstack/ceilometer-0" Nov 29 07:27:27 crc kubenswrapper[4731]: I1129 07:27:27.948791 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22-config-data\") pod \"ceilometer-0\" (UID: \"7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22\") " pod="openstack/ceilometer-0" Nov 29 07:27:27 crc kubenswrapper[4731]: I1129 
07:27:27.948833 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22-scripts\") pod \"ceilometer-0\" (UID: \"7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22\") " pod="openstack/ceilometer-0" Nov 29 07:27:27 crc kubenswrapper[4731]: I1129 07:27:27.948879 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22\") " pod="openstack/ceilometer-0" Nov 29 07:27:27 crc kubenswrapper[4731]: I1129 07:27:27.951050 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22-run-httpd\") pod \"ceilometer-0\" (UID: \"7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22\") " pod="openstack/ceilometer-0" Nov 29 07:27:27 crc kubenswrapper[4731]: I1129 07:27:27.951093 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22-log-httpd\") pod \"ceilometer-0\" (UID: \"7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22\") " pod="openstack/ceilometer-0" Nov 29 07:27:27 crc kubenswrapper[4731]: I1129 07:27:27.959072 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22\") " pod="openstack/ceilometer-0" Nov 29 07:27:27 crc kubenswrapper[4731]: I1129 07:27:27.960962 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22-scripts\") pod \"ceilometer-0\" (UID: \"7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22\") " 
pod="openstack/ceilometer-0" Nov 29 07:27:27 crc kubenswrapper[4731]: I1129 07:27:27.965811 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22\") " pod="openstack/ceilometer-0" Nov 29 07:27:27 crc kubenswrapper[4731]: I1129 07:27:27.968011 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22\") " pod="openstack/ceilometer-0" Nov 29 07:27:27 crc kubenswrapper[4731]: I1129 07:27:27.972302 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22-config-data\") pod \"ceilometer-0\" (UID: \"7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22\") " pod="openstack/ceilometer-0" Nov 29 07:27:27 crc kubenswrapper[4731]: I1129 07:27:27.974961 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdwfw\" (UniqueName: \"kubernetes.io/projected/7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22-kube-api-access-gdwfw\") pod \"ceilometer-0\" (UID: \"7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22\") " pod="openstack/ceilometer-0" Nov 29 07:27:28 crc kubenswrapper[4731]: I1129 07:27:28.124042 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 29 07:27:28 crc kubenswrapper[4731]: I1129 07:27:28.124450 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="d4a309b6-ff09-434a-8d65-9dd888a25dab" containerName="glance-log" containerID="cri-o://e18b392c7a703626b4da6b904a8ebfbf34639349dcf46fbb7f5eed20217b8fc0" gracePeriod=30 Nov 29 07:27:28 crc 
kubenswrapper[4731]: I1129 07:27:28.124719 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="d4a309b6-ff09-434a-8d65-9dd888a25dab" containerName="glance-httpd" containerID="cri-o://f28c3b1ecdc62f22eb2d42b0bcea85656e456557d607ef2880b499bae1d325ee" gracePeriod=30 Nov 29 07:27:28 crc kubenswrapper[4731]: I1129 07:27:28.149804 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 29 07:27:28 crc kubenswrapper[4731]: I1129 07:27:28.663627 4731 generic.go:334] "Generic (PLEG): container finished" podID="d4a309b6-ff09-434a-8d65-9dd888a25dab" containerID="e18b392c7a703626b4da6b904a8ebfbf34639349dcf46fbb7f5eed20217b8fc0" exitCode=143 Nov 29 07:27:28 crc kubenswrapper[4731]: I1129 07:27:28.663987 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d4a309b6-ff09-434a-8d65-9dd888a25dab","Type":"ContainerDied","Data":"e18b392c7a703626b4da6b904a8ebfbf34639349dcf46fbb7f5eed20217b8fc0"} Nov 29 07:27:28 crc kubenswrapper[4731]: I1129 07:27:28.716330 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:27:29 crc kubenswrapper[4731]: I1129 07:27:29.247896 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 29 07:27:29 crc kubenswrapper[4731]: I1129 07:27:29.282681 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vp82v\" (UniqueName: \"kubernetes.io/projected/f6b502e2-80f2-44f7-9665-3666c7a7c56b-kube-api-access-vp82v\") pod \"f6b502e2-80f2-44f7-9665-3666c7a7c56b\" (UID: \"f6b502e2-80f2-44f7-9665-3666c7a7c56b\") " Nov 29 07:27:29 crc kubenswrapper[4731]: I1129 07:27:29.282838 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"f6b502e2-80f2-44f7-9665-3666c7a7c56b\" (UID: \"f6b502e2-80f2-44f7-9665-3666c7a7c56b\") " Nov 29 07:27:29 crc kubenswrapper[4731]: I1129 07:27:29.282906 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6b502e2-80f2-44f7-9665-3666c7a7c56b-combined-ca-bundle\") pod \"f6b502e2-80f2-44f7-9665-3666c7a7c56b\" (UID: \"f6b502e2-80f2-44f7-9665-3666c7a7c56b\") " Nov 29 07:27:29 crc kubenswrapper[4731]: I1129 07:27:29.282947 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f6b502e2-80f2-44f7-9665-3666c7a7c56b-httpd-run\") pod \"f6b502e2-80f2-44f7-9665-3666c7a7c56b\" (UID: \"f6b502e2-80f2-44f7-9665-3666c7a7c56b\") " Nov 29 07:27:29 crc kubenswrapper[4731]: I1129 07:27:29.283026 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6b502e2-80f2-44f7-9665-3666c7a7c56b-config-data\") pod \"f6b502e2-80f2-44f7-9665-3666c7a7c56b\" (UID: \"f6b502e2-80f2-44f7-9665-3666c7a7c56b\") " Nov 29 07:27:29 crc kubenswrapper[4731]: I1129 07:27:29.283107 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/f6b502e2-80f2-44f7-9665-3666c7a7c56b-scripts\") pod \"f6b502e2-80f2-44f7-9665-3666c7a7c56b\" (UID: \"f6b502e2-80f2-44f7-9665-3666c7a7c56b\") " Nov 29 07:27:29 crc kubenswrapper[4731]: I1129 07:27:29.283140 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f6b502e2-80f2-44f7-9665-3666c7a7c56b-logs\") pod \"f6b502e2-80f2-44f7-9665-3666c7a7c56b\" (UID: \"f6b502e2-80f2-44f7-9665-3666c7a7c56b\") " Nov 29 07:27:29 crc kubenswrapper[4731]: I1129 07:27:29.283358 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6b502e2-80f2-44f7-9665-3666c7a7c56b-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "f6b502e2-80f2-44f7-9665-3666c7a7c56b" (UID: "f6b502e2-80f2-44f7-9665-3666c7a7c56b"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:27:29 crc kubenswrapper[4731]: I1129 07:27:29.283680 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6b502e2-80f2-44f7-9665-3666c7a7c56b-logs" (OuterVolumeSpecName: "logs") pod "f6b502e2-80f2-44f7-9665-3666c7a7c56b" (UID: "f6b502e2-80f2-44f7-9665-3666c7a7c56b"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:27:29 crc kubenswrapper[4731]: I1129 07:27:29.284129 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6b502e2-80f2-44f7-9665-3666c7a7c56b-public-tls-certs\") pod \"f6b502e2-80f2-44f7-9665-3666c7a7c56b\" (UID: \"f6b502e2-80f2-44f7-9665-3666c7a7c56b\") " Nov 29 07:27:29 crc kubenswrapper[4731]: I1129 07:27:29.284795 4731 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f6b502e2-80f2-44f7-9665-3666c7a7c56b-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 29 07:27:29 crc kubenswrapper[4731]: I1129 07:27:29.284819 4731 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f6b502e2-80f2-44f7-9665-3666c7a7c56b-logs\") on node \"crc\" DevicePath \"\"" Nov 29 07:27:29 crc kubenswrapper[4731]: I1129 07:27:29.314503 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "glance") pod "f6b502e2-80f2-44f7-9665-3666c7a7c56b" (UID: "f6b502e2-80f2-44f7-9665-3666c7a7c56b"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 29 07:27:29 crc kubenswrapper[4731]: I1129 07:27:29.327854 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6b502e2-80f2-44f7-9665-3666c7a7c56b-kube-api-access-vp82v" (OuterVolumeSpecName: "kube-api-access-vp82v") pod "f6b502e2-80f2-44f7-9665-3666c7a7c56b" (UID: "f6b502e2-80f2-44f7-9665-3666c7a7c56b"). InnerVolumeSpecName "kube-api-access-vp82v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:27:29 crc kubenswrapper[4731]: I1129 07:27:29.327994 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6b502e2-80f2-44f7-9665-3666c7a7c56b-scripts" (OuterVolumeSpecName: "scripts") pod "f6b502e2-80f2-44f7-9665-3666c7a7c56b" (UID: "f6b502e2-80f2-44f7-9665-3666c7a7c56b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:27:29 crc kubenswrapper[4731]: I1129 07:27:29.338942 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6b502e2-80f2-44f7-9665-3666c7a7c56b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f6b502e2-80f2-44f7-9665-3666c7a7c56b" (UID: "f6b502e2-80f2-44f7-9665-3666c7a7c56b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:27:29 crc kubenswrapper[4731]: I1129 07:27:29.374144 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6b502e2-80f2-44f7-9665-3666c7a7c56b-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "f6b502e2-80f2-44f7-9665-3666c7a7c56b" (UID: "f6b502e2-80f2-44f7-9665-3666c7a7c56b"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:27:29 crc kubenswrapper[4731]: I1129 07:27:29.378525 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6b502e2-80f2-44f7-9665-3666c7a7c56b-config-data" (OuterVolumeSpecName: "config-data") pod "f6b502e2-80f2-44f7-9665-3666c7a7c56b" (UID: "f6b502e2-80f2-44f7-9665-3666c7a7c56b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:27:29 crc kubenswrapper[4731]: I1129 07:27:29.395768 4731 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Nov 29 07:27:29 crc kubenswrapper[4731]: I1129 07:27:29.395825 4731 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6b502e2-80f2-44f7-9665-3666c7a7c56b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:27:29 crc kubenswrapper[4731]: I1129 07:27:29.395842 4731 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6b502e2-80f2-44f7-9665-3666c7a7c56b-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:27:29 crc kubenswrapper[4731]: I1129 07:27:29.395851 4731 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f6b502e2-80f2-44f7-9665-3666c7a7c56b-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:27:29 crc kubenswrapper[4731]: I1129 07:27:29.395860 4731 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6b502e2-80f2-44f7-9665-3666c7a7c56b-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 29 07:27:29 crc kubenswrapper[4731]: I1129 07:27:29.395868 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vp82v\" (UniqueName: \"kubernetes.io/projected/f6b502e2-80f2-44f7-9665-3666c7a7c56b-kube-api-access-vp82v\") on node \"crc\" DevicePath \"\"" Nov 29 07:27:29 crc kubenswrapper[4731]: I1129 07:27:29.429761 4731 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Nov 29 07:27:29 crc kubenswrapper[4731]: I1129 07:27:29.498703 4731 reconciler_common.go:293] "Volume detached for volume 
\"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Nov 29 07:27:29 crc kubenswrapper[4731]: I1129 07:27:29.681616 4731 generic.go:334] "Generic (PLEG): container finished" podID="f6b502e2-80f2-44f7-9665-3666c7a7c56b" containerID="00540843b2a77632ad69629391425b932bbb976084e2c1e17bffe6067d5fff6b" exitCode=0 Nov 29 07:27:29 crc kubenswrapper[4731]: I1129 07:27:29.681731 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 29 07:27:29 crc kubenswrapper[4731]: I1129 07:27:29.681823 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f6b502e2-80f2-44f7-9665-3666c7a7c56b","Type":"ContainerDied","Data":"00540843b2a77632ad69629391425b932bbb976084e2c1e17bffe6067d5fff6b"} Nov 29 07:27:29 crc kubenswrapper[4731]: I1129 07:27:29.681869 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f6b502e2-80f2-44f7-9665-3666c7a7c56b","Type":"ContainerDied","Data":"55deb3523b881efaed4f6f540ac00ab8d143caeec9bc2f252d0e9cfc668a0781"} Nov 29 07:27:29 crc kubenswrapper[4731]: I1129 07:27:29.681909 4731 scope.go:117] "RemoveContainer" containerID="00540843b2a77632ad69629391425b932bbb976084e2c1e17bffe6067d5fff6b" Nov 29 07:27:29 crc kubenswrapper[4731]: I1129 07:27:29.684209 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22","Type":"ContainerStarted","Data":"11940e6f80732a01290a816c7a92bbac94d142d61337ab774af6b9518f8884f2"} Nov 29 07:27:29 crc kubenswrapper[4731]: I1129 07:27:29.711729 4731 scope.go:117] "RemoveContainer" containerID="0cd29542eb0cbf38d9f18ea343561f30a931247f90fafa7d3f804d5b6a348413" Nov 29 07:27:29 crc kubenswrapper[4731]: I1129 07:27:29.755272 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/glance-default-external-api-0"] Nov 29 07:27:29 crc kubenswrapper[4731]: I1129 07:27:29.755583 4731 scope.go:117] "RemoveContainer" containerID="00540843b2a77632ad69629391425b932bbb976084e2c1e17bffe6067d5fff6b" Nov 29 07:27:29 crc kubenswrapper[4731]: E1129 07:27:29.758893 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00540843b2a77632ad69629391425b932bbb976084e2c1e17bffe6067d5fff6b\": container with ID starting with 00540843b2a77632ad69629391425b932bbb976084e2c1e17bffe6067d5fff6b not found: ID does not exist" containerID="00540843b2a77632ad69629391425b932bbb976084e2c1e17bffe6067d5fff6b" Nov 29 07:27:29 crc kubenswrapper[4731]: I1129 07:27:29.758955 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00540843b2a77632ad69629391425b932bbb976084e2c1e17bffe6067d5fff6b"} err="failed to get container status \"00540843b2a77632ad69629391425b932bbb976084e2c1e17bffe6067d5fff6b\": rpc error: code = NotFound desc = could not find container \"00540843b2a77632ad69629391425b932bbb976084e2c1e17bffe6067d5fff6b\": container with ID starting with 00540843b2a77632ad69629391425b932bbb976084e2c1e17bffe6067d5fff6b not found: ID does not exist" Nov 29 07:27:29 crc kubenswrapper[4731]: I1129 07:27:29.758997 4731 scope.go:117] "RemoveContainer" containerID="0cd29542eb0cbf38d9f18ea343561f30a931247f90fafa7d3f804d5b6a348413" Nov 29 07:27:29 crc kubenswrapper[4731]: E1129 07:27:29.759581 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0cd29542eb0cbf38d9f18ea343561f30a931247f90fafa7d3f804d5b6a348413\": container with ID starting with 0cd29542eb0cbf38d9f18ea343561f30a931247f90fafa7d3f804d5b6a348413 not found: ID does not exist" containerID="0cd29542eb0cbf38d9f18ea343561f30a931247f90fafa7d3f804d5b6a348413" Nov 29 07:27:29 crc kubenswrapper[4731]: I1129 07:27:29.759612 4731 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0cd29542eb0cbf38d9f18ea343561f30a931247f90fafa7d3f804d5b6a348413"} err="failed to get container status \"0cd29542eb0cbf38d9f18ea343561f30a931247f90fafa7d3f804d5b6a348413\": rpc error: code = NotFound desc = could not find container \"0cd29542eb0cbf38d9f18ea343561f30a931247f90fafa7d3f804d5b6a348413\": container with ID starting with 0cd29542eb0cbf38d9f18ea343561f30a931247f90fafa7d3f804d5b6a348413 not found: ID does not exist" Nov 29 07:27:29 crc kubenswrapper[4731]: I1129 07:27:29.783449 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 29 07:27:29 crc kubenswrapper[4731]: I1129 07:27:29.821546 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6b502e2-80f2-44f7-9665-3666c7a7c56b" path="/var/lib/kubelet/pods/f6b502e2-80f2-44f7-9665-3666c7a7c56b/volumes" Nov 29 07:27:29 crc kubenswrapper[4731]: I1129 07:27:29.851203 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 29 07:27:29 crc kubenswrapper[4731]: E1129 07:27:29.851822 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6b502e2-80f2-44f7-9665-3666c7a7c56b" containerName="glance-httpd" Nov 29 07:27:29 crc kubenswrapper[4731]: I1129 07:27:29.851843 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6b502e2-80f2-44f7-9665-3666c7a7c56b" containerName="glance-httpd" Nov 29 07:27:29 crc kubenswrapper[4731]: E1129 07:27:29.851860 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6b502e2-80f2-44f7-9665-3666c7a7c56b" containerName="glance-log" Nov 29 07:27:29 crc kubenswrapper[4731]: I1129 07:27:29.851867 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6b502e2-80f2-44f7-9665-3666c7a7c56b" containerName="glance-log" Nov 29 07:27:29 crc kubenswrapper[4731]: I1129 07:27:29.852053 4731 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="f6b502e2-80f2-44f7-9665-3666c7a7c56b" containerName="glance-httpd" Nov 29 07:27:29 crc kubenswrapper[4731]: I1129 07:27:29.852085 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6b502e2-80f2-44f7-9665-3666c7a7c56b" containerName="glance-log" Nov 29 07:27:29 crc kubenswrapper[4731]: I1129 07:27:29.853240 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 29 07:27:29 crc kubenswrapper[4731]: I1129 07:27:29.856270 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 29 07:27:29 crc kubenswrapper[4731]: I1129 07:27:29.856725 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 29 07:27:29 crc kubenswrapper[4731]: I1129 07:27:29.878345 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 29 07:27:29 crc kubenswrapper[4731]: I1129 07:27:29.910284 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"af63168b-e97f-4284-bdd4-d2547810144c\") " pod="openstack/glance-default-external-api-0" Nov 29 07:27:29 crc kubenswrapper[4731]: I1129 07:27:29.910471 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af63168b-e97f-4284-bdd4-d2547810144c-config-data\") pod \"glance-default-external-api-0\" (UID: \"af63168b-e97f-4284-bdd4-d2547810144c\") " pod="openstack/glance-default-external-api-0" Nov 29 07:27:29 crc kubenswrapper[4731]: I1129 07:27:29.910516 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/af63168b-e97f-4284-bdd4-d2547810144c-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"af63168b-e97f-4284-bdd4-d2547810144c\") " pod="openstack/glance-default-external-api-0" Nov 29 07:27:29 crc kubenswrapper[4731]: I1129 07:27:29.910565 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af63168b-e97f-4284-bdd4-d2547810144c-scripts\") pod \"glance-default-external-api-0\" (UID: \"af63168b-e97f-4284-bdd4-d2547810144c\") " pod="openstack/glance-default-external-api-0" Nov 29 07:27:29 crc kubenswrapper[4731]: I1129 07:27:29.910617 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af63168b-e97f-4284-bdd4-d2547810144c-logs\") pod \"glance-default-external-api-0\" (UID: \"af63168b-e97f-4284-bdd4-d2547810144c\") " pod="openstack/glance-default-external-api-0" Nov 29 07:27:29 crc kubenswrapper[4731]: I1129 07:27:29.910716 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7b246\" (UniqueName: \"kubernetes.io/projected/af63168b-e97f-4284-bdd4-d2547810144c-kube-api-access-7b246\") pod \"glance-default-external-api-0\" (UID: \"af63168b-e97f-4284-bdd4-d2547810144c\") " pod="openstack/glance-default-external-api-0" Nov 29 07:27:29 crc kubenswrapper[4731]: I1129 07:27:29.910744 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/af63168b-e97f-4284-bdd4-d2547810144c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"af63168b-e97f-4284-bdd4-d2547810144c\") " pod="openstack/glance-default-external-api-0" Nov 29 07:27:29 crc kubenswrapper[4731]: I1129 07:27:29.910791 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af63168b-e97f-4284-bdd4-d2547810144c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"af63168b-e97f-4284-bdd4-d2547810144c\") " pod="openstack/glance-default-external-api-0" Nov 29 07:27:30 crc kubenswrapper[4731]: I1129 07:27:30.013060 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/af63168b-e97f-4284-bdd4-d2547810144c-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"af63168b-e97f-4284-bdd4-d2547810144c\") " pod="openstack/glance-default-external-api-0" Nov 29 07:27:30 crc kubenswrapper[4731]: I1129 07:27:30.013354 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af63168b-e97f-4284-bdd4-d2547810144c-scripts\") pod \"glance-default-external-api-0\" (UID: \"af63168b-e97f-4284-bdd4-d2547810144c\") " pod="openstack/glance-default-external-api-0" Nov 29 07:27:30 crc kubenswrapper[4731]: I1129 07:27:30.013450 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af63168b-e97f-4284-bdd4-d2547810144c-logs\") pod \"glance-default-external-api-0\" (UID: \"af63168b-e97f-4284-bdd4-d2547810144c\") " pod="openstack/glance-default-external-api-0" Nov 29 07:27:30 crc kubenswrapper[4731]: I1129 07:27:30.013665 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/af63168b-e97f-4284-bdd4-d2547810144c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"af63168b-e97f-4284-bdd4-d2547810144c\") " pod="openstack/glance-default-external-api-0" Nov 29 07:27:30 crc kubenswrapper[4731]: I1129 07:27:30.013771 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7b246\" (UniqueName: 
\"kubernetes.io/projected/af63168b-e97f-4284-bdd4-d2547810144c-kube-api-access-7b246\") pod \"glance-default-external-api-0\" (UID: \"af63168b-e97f-4284-bdd4-d2547810144c\") " pod="openstack/glance-default-external-api-0" Nov 29 07:27:30 crc kubenswrapper[4731]: I1129 07:27:30.013851 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af63168b-e97f-4284-bdd4-d2547810144c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"af63168b-e97f-4284-bdd4-d2547810144c\") " pod="openstack/glance-default-external-api-0" Nov 29 07:27:30 crc kubenswrapper[4731]: I1129 07:27:30.013927 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"af63168b-e97f-4284-bdd4-d2547810144c\") " pod="openstack/glance-default-external-api-0" Nov 29 07:27:30 crc kubenswrapper[4731]: I1129 07:27:30.014078 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af63168b-e97f-4284-bdd4-d2547810144c-config-data\") pod \"glance-default-external-api-0\" (UID: \"af63168b-e97f-4284-bdd4-d2547810144c\") " pod="openstack/glance-default-external-api-0" Nov 29 07:27:30 crc kubenswrapper[4731]: I1129 07:27:30.017046 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/af63168b-e97f-4284-bdd4-d2547810144c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"af63168b-e97f-4284-bdd4-d2547810144c\") " pod="openstack/glance-default-external-api-0" Nov 29 07:27:30 crc kubenswrapper[4731]: I1129 07:27:30.018998 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af63168b-e97f-4284-bdd4-d2547810144c-logs\") pod 
\"glance-default-external-api-0\" (UID: \"af63168b-e97f-4284-bdd4-d2547810144c\") " pod="openstack/glance-default-external-api-0" Nov 29 07:27:30 crc kubenswrapper[4731]: I1129 07:27:30.020142 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af63168b-e97f-4284-bdd4-d2547810144c-config-data\") pod \"glance-default-external-api-0\" (UID: \"af63168b-e97f-4284-bdd4-d2547810144c\") " pod="openstack/glance-default-external-api-0" Nov 29 07:27:30 crc kubenswrapper[4731]: I1129 07:27:30.020702 4731 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"af63168b-e97f-4284-bdd4-d2547810144c\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-external-api-0" Nov 29 07:27:30 crc kubenswrapper[4731]: I1129 07:27:30.030483 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/af63168b-e97f-4284-bdd4-d2547810144c-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"af63168b-e97f-4284-bdd4-d2547810144c\") " pod="openstack/glance-default-external-api-0" Nov 29 07:27:30 crc kubenswrapper[4731]: I1129 07:27:30.042546 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af63168b-e97f-4284-bdd4-d2547810144c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"af63168b-e97f-4284-bdd4-d2547810144c\") " pod="openstack/glance-default-external-api-0" Nov 29 07:27:30 crc kubenswrapper[4731]: I1129 07:27:30.062371 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af63168b-e97f-4284-bdd4-d2547810144c-scripts\") pod \"glance-default-external-api-0\" (UID: \"af63168b-e97f-4284-bdd4-d2547810144c\") 
" pod="openstack/glance-default-external-api-0" Nov 29 07:27:30 crc kubenswrapper[4731]: I1129 07:27:30.066415 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7b246\" (UniqueName: \"kubernetes.io/projected/af63168b-e97f-4284-bdd4-d2547810144c-kube-api-access-7b246\") pod \"glance-default-external-api-0\" (UID: \"af63168b-e97f-4284-bdd4-d2547810144c\") " pod="openstack/glance-default-external-api-0" Nov 29 07:27:30 crc kubenswrapper[4731]: I1129 07:27:30.075596 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"af63168b-e97f-4284-bdd4-d2547810144c\") " pod="openstack/glance-default-external-api-0" Nov 29 07:27:30 crc kubenswrapper[4731]: I1129 07:27:30.173070 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 29 07:27:30 crc kubenswrapper[4731]: I1129 07:27:30.698669 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22","Type":"ContainerStarted","Data":"7b6e9aedb8b87111e0a2d026a3b512a7a25ad929e6655c3c5ef50c8956795a7a"} Nov 29 07:27:30 crc kubenswrapper[4731]: I1129 07:27:30.699317 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22","Type":"ContainerStarted","Data":"0d7b4d0fe407ac9b1a830786907ac3debfc24d42ea47c970f99fbc9b7a2a8451"} Nov 29 07:27:30 crc kubenswrapper[4731]: W1129 07:27:30.923373 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaf63168b_e97f_4284_bdd4_d2547810144c.slice/crio-ebeb51c844dd03d57bd93298db7788ea15873f431fd02739d5b05dd8445e6bc6 WatchSource:0}: Error finding container 
ebeb51c844dd03d57bd93298db7788ea15873f431fd02739d5b05dd8445e6bc6: Status 404 returned error can't find the container with id ebeb51c844dd03d57bd93298db7788ea15873f431fd02739d5b05dd8445e6bc6 Nov 29 07:27:30 crc kubenswrapper[4731]: I1129 07:27:30.947179 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 29 07:27:31 crc kubenswrapper[4731]: I1129 07:27:31.114101 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:27:31 crc kubenswrapper[4731]: I1129 07:27:31.716415 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"af63168b-e97f-4284-bdd4-d2547810144c","Type":"ContainerStarted","Data":"ebeb51c844dd03d57bd93298db7788ea15873f431fd02739d5b05dd8445e6bc6"} Nov 29 07:27:31 crc kubenswrapper[4731]: I1129 07:27:31.726089 4731 generic.go:334] "Generic (PLEG): container finished" podID="d4a309b6-ff09-434a-8d65-9dd888a25dab" containerID="f28c3b1ecdc62f22eb2d42b0bcea85656e456557d607ef2880b499bae1d325ee" exitCode=0 Nov 29 07:27:31 crc kubenswrapper[4731]: I1129 07:27:31.726155 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d4a309b6-ff09-434a-8d65-9dd888a25dab","Type":"ContainerDied","Data":"f28c3b1ecdc62f22eb2d42b0bcea85656e456557d607ef2880b499bae1d325ee"} Nov 29 07:27:32 crc kubenswrapper[4731]: I1129 07:27:32.700154 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 29 07:27:32 crc kubenswrapper[4731]: I1129 07:27:32.748390 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"af63168b-e97f-4284-bdd4-d2547810144c","Type":"ContainerStarted","Data":"08419d6093701f2a9de798bfe301713fdfa7d5d7194e60e63341e6ab45fea068"} Nov 29 07:27:32 crc kubenswrapper[4731]: I1129 07:27:32.754354 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d4a309b6-ff09-434a-8d65-9dd888a25dab","Type":"ContainerDied","Data":"d4ccf3ba6d17400c32f7b35274f26432667afe2f072c8dfedaf97cae696a249d"} Nov 29 07:27:32 crc kubenswrapper[4731]: I1129 07:27:32.754414 4731 scope.go:117] "RemoveContainer" containerID="f28c3b1ecdc62f22eb2d42b0bcea85656e456557d607ef2880b499bae1d325ee" Nov 29 07:27:32 crc kubenswrapper[4731]: I1129 07:27:32.754618 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 29 07:27:32 crc kubenswrapper[4731]: I1129 07:27:32.838083 4731 scope.go:117] "RemoveContainer" containerID="e18b392c7a703626b4da6b904a8ebfbf34639349dcf46fbb7f5eed20217b8fc0" Nov 29 07:27:32 crc kubenswrapper[4731]: I1129 07:27:32.882040 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d4a309b6-ff09-434a-8d65-9dd888a25dab-httpd-run\") pod \"d4a309b6-ff09-434a-8d65-9dd888a25dab\" (UID: \"d4a309b6-ff09-434a-8d65-9dd888a25dab\") " Nov 29 07:27:32 crc kubenswrapper[4731]: I1129 07:27:32.882292 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4a309b6-ff09-434a-8d65-9dd888a25dab-combined-ca-bundle\") pod \"d4a309b6-ff09-434a-8d65-9dd888a25dab\" (UID: \"d4a309b6-ff09-434a-8d65-9dd888a25dab\") " Nov 29 07:27:32 crc kubenswrapper[4731]: I1129 
07:27:32.882344 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v8mhm\" (UniqueName: \"kubernetes.io/projected/d4a309b6-ff09-434a-8d65-9dd888a25dab-kube-api-access-v8mhm\") pod \"d4a309b6-ff09-434a-8d65-9dd888a25dab\" (UID: \"d4a309b6-ff09-434a-8d65-9dd888a25dab\") " Nov 29 07:27:32 crc kubenswrapper[4731]: I1129 07:27:32.882382 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4a309b6-ff09-434a-8d65-9dd888a25dab-scripts\") pod \"d4a309b6-ff09-434a-8d65-9dd888a25dab\" (UID: \"d4a309b6-ff09-434a-8d65-9dd888a25dab\") " Nov 29 07:27:32 crc kubenswrapper[4731]: I1129 07:27:32.882481 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4a309b6-ff09-434a-8d65-9dd888a25dab-config-data\") pod \"d4a309b6-ff09-434a-8d65-9dd888a25dab\" (UID: \"d4a309b6-ff09-434a-8d65-9dd888a25dab\") " Nov 29 07:27:32 crc kubenswrapper[4731]: I1129 07:27:32.882525 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4a309b6-ff09-434a-8d65-9dd888a25dab-internal-tls-certs\") pod \"d4a309b6-ff09-434a-8d65-9dd888a25dab\" (UID: \"d4a309b6-ff09-434a-8d65-9dd888a25dab\") " Nov 29 07:27:32 crc kubenswrapper[4731]: I1129 07:27:32.882645 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4a309b6-ff09-434a-8d65-9dd888a25dab-logs\") pod \"d4a309b6-ff09-434a-8d65-9dd888a25dab\" (UID: \"d4a309b6-ff09-434a-8d65-9dd888a25dab\") " Nov 29 07:27:32 crc kubenswrapper[4731]: I1129 07:27:32.882748 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"d4a309b6-ff09-434a-8d65-9dd888a25dab\" (UID: 
\"d4a309b6-ff09-434a-8d65-9dd888a25dab\") " Nov 29 07:27:32 crc kubenswrapper[4731]: I1129 07:27:32.883934 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4a309b6-ff09-434a-8d65-9dd888a25dab-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "d4a309b6-ff09-434a-8d65-9dd888a25dab" (UID: "d4a309b6-ff09-434a-8d65-9dd888a25dab"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:27:32 crc kubenswrapper[4731]: I1129 07:27:32.884033 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4a309b6-ff09-434a-8d65-9dd888a25dab-logs" (OuterVolumeSpecName: "logs") pod "d4a309b6-ff09-434a-8d65-9dd888a25dab" (UID: "d4a309b6-ff09-434a-8d65-9dd888a25dab"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:27:32 crc kubenswrapper[4731]: I1129 07:27:32.898170 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "d4a309b6-ff09-434a-8d65-9dd888a25dab" (UID: "d4a309b6-ff09-434a-8d65-9dd888a25dab"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 29 07:27:32 crc kubenswrapper[4731]: I1129 07:27:32.898400 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4a309b6-ff09-434a-8d65-9dd888a25dab-kube-api-access-v8mhm" (OuterVolumeSpecName: "kube-api-access-v8mhm") pod "d4a309b6-ff09-434a-8d65-9dd888a25dab" (UID: "d4a309b6-ff09-434a-8d65-9dd888a25dab"). InnerVolumeSpecName "kube-api-access-v8mhm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:27:32 crc kubenswrapper[4731]: I1129 07:27:32.902372 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4a309b6-ff09-434a-8d65-9dd888a25dab-scripts" (OuterVolumeSpecName: "scripts") pod "d4a309b6-ff09-434a-8d65-9dd888a25dab" (UID: "d4a309b6-ff09-434a-8d65-9dd888a25dab"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:27:32 crc kubenswrapper[4731]: I1129 07:27:32.978191 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4a309b6-ff09-434a-8d65-9dd888a25dab-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "d4a309b6-ff09-434a-8d65-9dd888a25dab" (UID: "d4a309b6-ff09-434a-8d65-9dd888a25dab"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:27:32 crc kubenswrapper[4731]: I1129 07:27:32.995475 4731 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4a309b6-ff09-434a-8d65-9dd888a25dab-logs\") on node \"crc\" DevicePath \"\"" Nov 29 07:27:32 crc kubenswrapper[4731]: I1129 07:27:32.995542 4731 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Nov 29 07:27:32 crc kubenswrapper[4731]: I1129 07:27:32.995562 4731 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d4a309b6-ff09-434a-8d65-9dd888a25dab-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 29 07:27:32 crc kubenswrapper[4731]: I1129 07:27:32.995593 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v8mhm\" (UniqueName: \"kubernetes.io/projected/d4a309b6-ff09-434a-8d65-9dd888a25dab-kube-api-access-v8mhm\") on node \"crc\" DevicePath \"\"" Nov 29 07:27:32 crc 
kubenswrapper[4731]: I1129 07:27:32.995609 4731 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4a309b6-ff09-434a-8d65-9dd888a25dab-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:27:32 crc kubenswrapper[4731]: I1129 07:27:32.995620 4731 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4a309b6-ff09-434a-8d65-9dd888a25dab-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 29 07:27:33 crc kubenswrapper[4731]: I1129 07:27:33.003235 4731 patch_prober.go:28] interesting pod/machine-config-daemon-rscr8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:27:33 crc kubenswrapper[4731]: I1129 07:27:33.003330 4731 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:27:33 crc kubenswrapper[4731]: I1129 07:27:33.015677 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4a309b6-ff09-434a-8d65-9dd888a25dab-config-data" (OuterVolumeSpecName: "config-data") pod "d4a309b6-ff09-434a-8d65-9dd888a25dab" (UID: "d4a309b6-ff09-434a-8d65-9dd888a25dab"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:27:33 crc kubenswrapper[4731]: I1129 07:27:33.017802 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4a309b6-ff09-434a-8d65-9dd888a25dab-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d4a309b6-ff09-434a-8d65-9dd888a25dab" (UID: "d4a309b6-ff09-434a-8d65-9dd888a25dab"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:27:33 crc kubenswrapper[4731]: I1129 07:27:33.030601 4731 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Nov 29 07:27:33 crc kubenswrapper[4731]: I1129 07:27:33.097947 4731 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4a309b6-ff09-434a-8d65-9dd888a25dab-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:27:33 crc kubenswrapper[4731]: I1129 07:27:33.098531 4731 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Nov 29 07:27:33 crc kubenswrapper[4731]: I1129 07:27:33.098546 4731 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4a309b6-ff09-434a-8d65-9dd888a25dab-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:27:33 crc kubenswrapper[4731]: I1129 07:27:33.116656 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 29 07:27:33 crc kubenswrapper[4731]: I1129 07:27:33.152132 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 29 07:27:33 crc kubenswrapper[4731]: I1129 07:27:33.178547 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] 
Nov 29 07:27:33 crc kubenswrapper[4731]: E1129 07:27:33.179313 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4a309b6-ff09-434a-8d65-9dd888a25dab" containerName="glance-log" Nov 29 07:27:33 crc kubenswrapper[4731]: I1129 07:27:33.179337 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4a309b6-ff09-434a-8d65-9dd888a25dab" containerName="glance-log" Nov 29 07:27:33 crc kubenswrapper[4731]: E1129 07:27:33.179412 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4a309b6-ff09-434a-8d65-9dd888a25dab" containerName="glance-httpd" Nov 29 07:27:33 crc kubenswrapper[4731]: I1129 07:27:33.179427 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4a309b6-ff09-434a-8d65-9dd888a25dab" containerName="glance-httpd" Nov 29 07:27:33 crc kubenswrapper[4731]: I1129 07:27:33.179714 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4a309b6-ff09-434a-8d65-9dd888a25dab" containerName="glance-log" Nov 29 07:27:33 crc kubenswrapper[4731]: I1129 07:27:33.179740 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4a309b6-ff09-434a-8d65-9dd888a25dab" containerName="glance-httpd" Nov 29 07:27:33 crc kubenswrapper[4731]: I1129 07:27:33.181219 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 29 07:27:33 crc kubenswrapper[4731]: I1129 07:27:33.185384 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 29 07:27:33 crc kubenswrapper[4731]: I1129 07:27:33.186612 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 29 07:27:33 crc kubenswrapper[4731]: I1129 07:27:33.198305 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 29 07:27:33 crc kubenswrapper[4731]: I1129 07:27:33.303898 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18bdebee-183b-4f16-a806-f6f6437424c4-scripts\") pod \"glance-default-internal-api-0\" (UID: \"18bdebee-183b-4f16-a806-f6f6437424c4\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:27:33 crc kubenswrapper[4731]: I1129 07:27:33.303951 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18bdebee-183b-4f16-a806-f6f6437424c4-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"18bdebee-183b-4f16-a806-f6f6437424c4\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:27:33 crc kubenswrapper[4731]: I1129 07:27:33.303984 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/18bdebee-183b-4f16-a806-f6f6437424c4-logs\") pod \"glance-default-internal-api-0\" (UID: \"18bdebee-183b-4f16-a806-f6f6437424c4\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:27:33 crc kubenswrapper[4731]: I1129 07:27:33.304004 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxw2n\" (UniqueName: 
\"kubernetes.io/projected/18bdebee-183b-4f16-a806-f6f6437424c4-kube-api-access-hxw2n\") pod \"glance-default-internal-api-0\" (UID: \"18bdebee-183b-4f16-a806-f6f6437424c4\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:27:33 crc kubenswrapper[4731]: I1129 07:27:33.304087 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"18bdebee-183b-4f16-a806-f6f6437424c4\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:27:33 crc kubenswrapper[4731]: I1129 07:27:33.304124 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18bdebee-183b-4f16-a806-f6f6437424c4-config-data\") pod \"glance-default-internal-api-0\" (UID: \"18bdebee-183b-4f16-a806-f6f6437424c4\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:27:33 crc kubenswrapper[4731]: I1129 07:27:33.304163 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/18bdebee-183b-4f16-a806-f6f6437424c4-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"18bdebee-183b-4f16-a806-f6f6437424c4\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:27:33 crc kubenswrapper[4731]: I1129 07:27:33.304223 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/18bdebee-183b-4f16-a806-f6f6437424c4-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"18bdebee-183b-4f16-a806-f6f6437424c4\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:27:33 crc kubenswrapper[4731]: I1129 07:27:33.405983 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/18bdebee-183b-4f16-a806-f6f6437424c4-scripts\") pod \"glance-default-internal-api-0\" (UID: \"18bdebee-183b-4f16-a806-f6f6437424c4\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:27:33 crc kubenswrapper[4731]: I1129 07:27:33.406052 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18bdebee-183b-4f16-a806-f6f6437424c4-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"18bdebee-183b-4f16-a806-f6f6437424c4\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:27:33 crc kubenswrapper[4731]: I1129 07:27:33.406097 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/18bdebee-183b-4f16-a806-f6f6437424c4-logs\") pod \"glance-default-internal-api-0\" (UID: \"18bdebee-183b-4f16-a806-f6f6437424c4\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:27:33 crc kubenswrapper[4731]: I1129 07:27:33.406132 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxw2n\" (UniqueName: \"kubernetes.io/projected/18bdebee-183b-4f16-a806-f6f6437424c4-kube-api-access-hxw2n\") pod \"glance-default-internal-api-0\" (UID: \"18bdebee-183b-4f16-a806-f6f6437424c4\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:27:33 crc kubenswrapper[4731]: I1129 07:27:33.407197 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/18bdebee-183b-4f16-a806-f6f6437424c4-logs\") pod \"glance-default-internal-api-0\" (UID: \"18bdebee-183b-4f16-a806-f6f6437424c4\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:27:33 crc kubenswrapper[4731]: I1129 07:27:33.407559 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod 
\"glance-default-internal-api-0\" (UID: \"18bdebee-183b-4f16-a806-f6f6437424c4\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:27:33 crc kubenswrapper[4731]: I1129 07:27:33.407767 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18bdebee-183b-4f16-a806-f6f6437424c4-config-data\") pod \"glance-default-internal-api-0\" (UID: \"18bdebee-183b-4f16-a806-f6f6437424c4\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:27:33 crc kubenswrapper[4731]: I1129 07:27:33.407892 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/18bdebee-183b-4f16-a806-f6f6437424c4-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"18bdebee-183b-4f16-a806-f6f6437424c4\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:27:33 crc kubenswrapper[4731]: I1129 07:27:33.408155 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/18bdebee-183b-4f16-a806-f6f6437424c4-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"18bdebee-183b-4f16-a806-f6f6437424c4\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:27:33 crc kubenswrapper[4731]: I1129 07:27:33.410189 4731 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"18bdebee-183b-4f16-a806-f6f6437424c4\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-internal-api-0" Nov 29 07:27:33 crc kubenswrapper[4731]: I1129 07:27:33.411050 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/18bdebee-183b-4f16-a806-f6f6437424c4-httpd-run\") pod \"glance-default-internal-api-0\" (UID: 
\"18bdebee-183b-4f16-a806-f6f6437424c4\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:27:33 crc kubenswrapper[4731]: I1129 07:27:33.418460 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/18bdebee-183b-4f16-a806-f6f6437424c4-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"18bdebee-183b-4f16-a806-f6f6437424c4\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:27:33 crc kubenswrapper[4731]: I1129 07:27:33.425478 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18bdebee-183b-4f16-a806-f6f6437424c4-scripts\") pod \"glance-default-internal-api-0\" (UID: \"18bdebee-183b-4f16-a806-f6f6437424c4\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:27:33 crc kubenswrapper[4731]: I1129 07:27:33.438085 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18bdebee-183b-4f16-a806-f6f6437424c4-config-data\") pod \"glance-default-internal-api-0\" (UID: \"18bdebee-183b-4f16-a806-f6f6437424c4\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:27:33 crc kubenswrapper[4731]: I1129 07:27:33.438870 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18bdebee-183b-4f16-a806-f6f6437424c4-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"18bdebee-183b-4f16-a806-f6f6437424c4\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:27:33 crc kubenswrapper[4731]: I1129 07:27:33.467637 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"18bdebee-183b-4f16-a806-f6f6437424c4\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:27:33 crc 
kubenswrapper[4731]: I1129 07:27:33.476382 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxw2n\" (UniqueName: \"kubernetes.io/projected/18bdebee-183b-4f16-a806-f6f6437424c4-kube-api-access-hxw2n\") pod \"glance-default-internal-api-0\" (UID: \"18bdebee-183b-4f16-a806-f6f6437424c4\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:27:33 crc kubenswrapper[4731]: I1129 07:27:33.537758 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 29 07:27:33 crc kubenswrapper[4731]: I1129 07:27:33.856653 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4a309b6-ff09-434a-8d65-9dd888a25dab" path="/var/lib/kubelet/pods/d4a309b6-ff09-434a-8d65-9dd888a25dab/volumes" Nov 29 07:27:33 crc kubenswrapper[4731]: I1129 07:27:33.857858 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"af63168b-e97f-4284-bdd4-d2547810144c","Type":"ContainerStarted","Data":"a1bce8f8df6a3d6f5aad905b04dc7d16bf2bd59f04c5aba9f053e74b0876fe20"} Nov 29 07:27:33 crc kubenswrapper[4731]: I1129 07:27:33.866639 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22","Type":"ContainerStarted","Data":"0f5c3b628685fc8e9cf87e4e1a54c03e774cff8ca937311142cbcd92661921c9"} Nov 29 07:27:33 crc kubenswrapper[4731]: I1129 07:27:33.930397 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.930338052 podStartE2EDuration="4.930338052s" podCreationTimestamp="2025-11-29 07:27:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:27:33.878761289 +0000 UTC m=+1292.769122412" watchObservedRunningTime="2025-11-29 07:27:33.930338052 +0000 UTC m=+1292.820699155" 
Nov 29 07:27:34 crc kubenswrapper[4731]: W1129 07:27:34.248146 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod18bdebee_183b_4f16_a806_f6f6437424c4.slice/crio-137a8bfadd8a275dbe3b1eb03254bc937e8c44c7bff3ce4d710820ec12dae25c WatchSource:0}: Error finding container 137a8bfadd8a275dbe3b1eb03254bc937e8c44c7bff3ce4d710820ec12dae25c: Status 404 returned error can't find the container with id 137a8bfadd8a275dbe3b1eb03254bc937e8c44c7bff3ce4d710820ec12dae25c Nov 29 07:27:34 crc kubenswrapper[4731]: I1129 07:27:34.248463 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 29 07:27:34 crc kubenswrapper[4731]: E1129 07:27:34.550382 4731 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podad9b3a1d_2698_405e_b94a_45d96efd0400.slice/crio-2ce9effe3d3eb311109fc98cae51a9f7136c2928a5032c0de973c7a0b18d1511\": RecentStats: unable to find data in memory cache]" Nov 29 07:27:34 crc kubenswrapper[4731]: I1129 07:27:34.892874 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"18bdebee-183b-4f16-a806-f6f6437424c4","Type":"ContainerStarted","Data":"b080d99a47e249a1f6e47ad88fb12802a1ca6297e2e5e969d46d9cc2ff824d1b"} Nov 29 07:27:34 crc kubenswrapper[4731]: I1129 07:27:34.892954 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"18bdebee-183b-4f16-a806-f6f6437424c4","Type":"ContainerStarted","Data":"137a8bfadd8a275dbe3b1eb03254bc937e8c44c7bff3ce4d710820ec12dae25c"} Nov 29 07:27:35 crc kubenswrapper[4731]: I1129 07:27:35.905807 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"18bdebee-183b-4f16-a806-f6f6437424c4","Type":"ContainerStarted","Data":"0d65df5270fdd0107456049c1cf8743cc9c0dd2344befc19110ca5e6c9afcf8f"} Nov 29 07:27:35 crc kubenswrapper[4731]: I1129 07:27:35.910309 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22","Type":"ContainerStarted","Data":"c1d451b833cf4221fc83c1b215a248edd303981ea8fa4858d669450d2967832b"} Nov 29 07:27:35 crc kubenswrapper[4731]: I1129 07:27:35.910640 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22" containerName="ceilometer-central-agent" containerID="cri-o://0d7b4d0fe407ac9b1a830786907ac3debfc24d42ea47c970f99fbc9b7a2a8451" gracePeriod=30 Nov 29 07:27:35 crc kubenswrapper[4731]: I1129 07:27:35.911032 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 29 07:27:35 crc kubenswrapper[4731]: I1129 07:27:35.911145 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22" containerName="proxy-httpd" containerID="cri-o://c1d451b833cf4221fc83c1b215a248edd303981ea8fa4858d669450d2967832b" gracePeriod=30 Nov 29 07:27:35 crc kubenswrapper[4731]: I1129 07:27:35.911259 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22" containerName="sg-core" containerID="cri-o://0f5c3b628685fc8e9cf87e4e1a54c03e774cff8ca937311142cbcd92661921c9" gracePeriod=30 Nov 29 07:27:35 crc kubenswrapper[4731]: I1129 07:27:35.911331 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22" containerName="ceilometer-notification-agent" containerID="cri-o://7b6e9aedb8b87111e0a2d026a3b512a7a25ad929e6655c3c5ef50c8956795a7a" 
gracePeriod=30 Nov 29 07:27:35 crc kubenswrapper[4731]: I1129 07:27:35.953483 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=2.953449575 podStartE2EDuration="2.953449575s" podCreationTimestamp="2025-11-29 07:27:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:27:35.937856727 +0000 UTC m=+1294.828217830" watchObservedRunningTime="2025-11-29 07:27:35.953449575 +0000 UTC m=+1294.843810668" Nov 29 07:27:35 crc kubenswrapper[4731]: I1129 07:27:35.976045 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.460342888 podStartE2EDuration="8.976021014s" podCreationTimestamp="2025-11-29 07:27:27 +0000 UTC" firstStartedPulling="2025-11-29 07:27:28.769190461 +0000 UTC m=+1287.659551564" lastFinishedPulling="2025-11-29 07:27:35.284868587 +0000 UTC m=+1294.175229690" observedRunningTime="2025-11-29 07:27:35.972024469 +0000 UTC m=+1294.862385582" watchObservedRunningTime="2025-11-29 07:27:35.976021014 +0000 UTC m=+1294.866382117" Nov 29 07:27:36 crc kubenswrapper[4731]: I1129 07:27:36.932015 4731 generic.go:334] "Generic (PLEG): container finished" podID="7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22" containerID="c1d451b833cf4221fc83c1b215a248edd303981ea8fa4858d669450d2967832b" exitCode=0 Nov 29 07:27:36 crc kubenswrapper[4731]: I1129 07:27:36.932415 4731 generic.go:334] "Generic (PLEG): container finished" podID="7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22" containerID="0f5c3b628685fc8e9cf87e4e1a54c03e774cff8ca937311142cbcd92661921c9" exitCode=2 Nov 29 07:27:36 crc kubenswrapper[4731]: I1129 07:27:36.932424 4731 generic.go:334] "Generic (PLEG): container finished" podID="7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22" containerID="7b6e9aedb8b87111e0a2d026a3b512a7a25ad929e6655c3c5ef50c8956795a7a" exitCode=0 Nov 29 07:27:36 crc 
kubenswrapper[4731]: I1129 07:27:36.932771 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22","Type":"ContainerDied","Data":"c1d451b833cf4221fc83c1b215a248edd303981ea8fa4858d669450d2967832b"} Nov 29 07:27:36 crc kubenswrapper[4731]: I1129 07:27:36.932865 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22","Type":"ContainerDied","Data":"0f5c3b628685fc8e9cf87e4e1a54c03e774cff8ca937311142cbcd92661921c9"} Nov 29 07:27:36 crc kubenswrapper[4731]: I1129 07:27:36.932881 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22","Type":"ContainerDied","Data":"7b6e9aedb8b87111e0a2d026a3b512a7a25ad929e6655c3c5ef50c8956795a7a"} Nov 29 07:27:40 crc kubenswrapper[4731]: I1129 07:27:40.173634 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 29 07:27:40 crc kubenswrapper[4731]: I1129 07:27:40.174516 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 29 07:27:40 crc kubenswrapper[4731]: I1129 07:27:40.294495 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 29 07:27:40 crc kubenswrapper[4731]: I1129 07:27:40.417895 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 29 07:27:40 crc kubenswrapper[4731]: I1129 07:27:40.977196 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 29 07:27:40 crc kubenswrapper[4731]: I1129 07:27:40.977591 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 29 07:27:41 crc 
kubenswrapper[4731]: I1129 07:27:41.917047 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-6mk6z"] Nov 29 07:27:41 crc kubenswrapper[4731]: I1129 07:27:41.923871 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-6mk6z" Nov 29 07:27:41 crc kubenswrapper[4731]: I1129 07:27:41.938637 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-6mk6z"] Nov 29 07:27:42 crc kubenswrapper[4731]: I1129 07:27:42.042645 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-76xc2"] Nov 29 07:27:42 crc kubenswrapper[4731]: I1129 07:27:42.043547 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slggh\" (UniqueName: \"kubernetes.io/projected/c7590e82-ed2d-42d4-ae30-b581dc4517b9-kube-api-access-slggh\") pod \"nova-api-db-create-6mk6z\" (UID: \"c7590e82-ed2d-42d4-ae30-b581dc4517b9\") " pod="openstack/nova-api-db-create-6mk6z" Nov 29 07:27:42 crc kubenswrapper[4731]: I1129 07:27:42.043717 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c7590e82-ed2d-42d4-ae30-b581dc4517b9-operator-scripts\") pod \"nova-api-db-create-6mk6z\" (UID: \"c7590e82-ed2d-42d4-ae30-b581dc4517b9\") " pod="openstack/nova-api-db-create-6mk6z" Nov 29 07:27:42 crc kubenswrapper[4731]: I1129 07:27:42.044719 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-76xc2" Nov 29 07:27:42 crc kubenswrapper[4731]: I1129 07:27:42.061670 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-76xc2"] Nov 29 07:27:42 crc kubenswrapper[4731]: I1129 07:27:42.125372 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-e0d1-account-create-update-7ppfv"] Nov 29 07:27:42 crc kubenswrapper[4731]: I1129 07:27:42.127433 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-e0d1-account-create-update-7ppfv" Nov 29 07:27:42 crc kubenswrapper[4731]: I1129 07:27:42.139119 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Nov 29 07:27:42 crc kubenswrapper[4731]: I1129 07:27:42.146797 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-e0d1-account-create-update-7ppfv"] Nov 29 07:27:42 crc kubenswrapper[4731]: I1129 07:27:42.148294 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c7590e82-ed2d-42d4-ae30-b581dc4517b9-operator-scripts\") pod \"nova-api-db-create-6mk6z\" (UID: \"c7590e82-ed2d-42d4-ae30-b581dc4517b9\") " pod="openstack/nova-api-db-create-6mk6z" Nov 29 07:27:42 crc kubenswrapper[4731]: I1129 07:27:42.148427 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aea23e96-8b0c-413c-9240-80f8ecd2af01-operator-scripts\") pod \"nova-cell0-db-create-76xc2\" (UID: \"aea23e96-8b0c-413c-9240-80f8ecd2af01\") " pod="openstack/nova-cell0-db-create-76xc2" Nov 29 07:27:42 crc kubenswrapper[4731]: I1129 07:27:42.148502 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87tpk\" (UniqueName: 
\"kubernetes.io/projected/aea23e96-8b0c-413c-9240-80f8ecd2af01-kube-api-access-87tpk\") pod \"nova-cell0-db-create-76xc2\" (UID: \"aea23e96-8b0c-413c-9240-80f8ecd2af01\") " pod="openstack/nova-cell0-db-create-76xc2" Nov 29 07:27:42 crc kubenswrapper[4731]: I1129 07:27:42.148673 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slggh\" (UniqueName: \"kubernetes.io/projected/c7590e82-ed2d-42d4-ae30-b581dc4517b9-kube-api-access-slggh\") pod \"nova-api-db-create-6mk6z\" (UID: \"c7590e82-ed2d-42d4-ae30-b581dc4517b9\") " pod="openstack/nova-api-db-create-6mk6z" Nov 29 07:27:42 crc kubenswrapper[4731]: I1129 07:27:42.149764 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c7590e82-ed2d-42d4-ae30-b581dc4517b9-operator-scripts\") pod \"nova-api-db-create-6mk6z\" (UID: \"c7590e82-ed2d-42d4-ae30-b581dc4517b9\") " pod="openstack/nova-api-db-create-6mk6z" Nov 29 07:27:42 crc kubenswrapper[4731]: I1129 07:27:42.189707 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slggh\" (UniqueName: \"kubernetes.io/projected/c7590e82-ed2d-42d4-ae30-b581dc4517b9-kube-api-access-slggh\") pod \"nova-api-db-create-6mk6z\" (UID: \"c7590e82-ed2d-42d4-ae30-b581dc4517b9\") " pod="openstack/nova-api-db-create-6mk6z" Nov 29 07:27:42 crc kubenswrapper[4731]: I1129 07:27:42.227648 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-jtd4s"] Nov 29 07:27:42 crc kubenswrapper[4731]: I1129 07:27:42.229535 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-jtd4s" Nov 29 07:27:42 crc kubenswrapper[4731]: I1129 07:27:42.241780 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-jtd4s"] Nov 29 07:27:42 crc kubenswrapper[4731]: I1129 07:27:42.258630 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94hjl\" (UniqueName: \"kubernetes.io/projected/ec906d2b-9805-4e9b-8273-80a3488c76e5-kube-api-access-94hjl\") pod \"nova-api-e0d1-account-create-update-7ppfv\" (UID: \"ec906d2b-9805-4e9b-8273-80a3488c76e5\") " pod="openstack/nova-api-e0d1-account-create-update-7ppfv" Nov 29 07:27:42 crc kubenswrapper[4731]: I1129 07:27:42.259106 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec906d2b-9805-4e9b-8273-80a3488c76e5-operator-scripts\") pod \"nova-api-e0d1-account-create-update-7ppfv\" (UID: \"ec906d2b-9805-4e9b-8273-80a3488c76e5\") " pod="openstack/nova-api-e0d1-account-create-update-7ppfv" Nov 29 07:27:42 crc kubenswrapper[4731]: I1129 07:27:42.259596 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aea23e96-8b0c-413c-9240-80f8ecd2af01-operator-scripts\") pod \"nova-cell0-db-create-76xc2\" (UID: \"aea23e96-8b0c-413c-9240-80f8ecd2af01\") " pod="openstack/nova-cell0-db-create-76xc2" Nov 29 07:27:42 crc kubenswrapper[4731]: I1129 07:27:42.259809 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87tpk\" (UniqueName: \"kubernetes.io/projected/aea23e96-8b0c-413c-9240-80f8ecd2af01-kube-api-access-87tpk\") pod \"nova-cell0-db-create-76xc2\" (UID: \"aea23e96-8b0c-413c-9240-80f8ecd2af01\") " pod="openstack/nova-cell0-db-create-76xc2" Nov 29 07:27:42 crc kubenswrapper[4731]: I1129 07:27:42.261171 4731 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aea23e96-8b0c-413c-9240-80f8ecd2af01-operator-scripts\") pod \"nova-cell0-db-create-76xc2\" (UID: \"aea23e96-8b0c-413c-9240-80f8ecd2af01\") " pod="openstack/nova-cell0-db-create-76xc2" Nov 29 07:27:42 crc kubenswrapper[4731]: I1129 07:27:42.262219 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-6mk6z" Nov 29 07:27:42 crc kubenswrapper[4731]: I1129 07:27:42.288600 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87tpk\" (UniqueName: \"kubernetes.io/projected/aea23e96-8b0c-413c-9240-80f8ecd2af01-kube-api-access-87tpk\") pod \"nova-cell0-db-create-76xc2\" (UID: \"aea23e96-8b0c-413c-9240-80f8ecd2af01\") " pod="openstack/nova-cell0-db-create-76xc2" Nov 29 07:27:42 crc kubenswrapper[4731]: I1129 07:27:42.333915 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-c9a4-account-create-update-dwcqz"] Nov 29 07:27:42 crc kubenswrapper[4731]: I1129 07:27:42.338216 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-c9a4-account-create-update-dwcqz" Nov 29 07:27:42 crc kubenswrapper[4731]: I1129 07:27:42.347726 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Nov 29 07:27:42 crc kubenswrapper[4731]: I1129 07:27:42.372524 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d9d5f400-fe07-4f0f-ae45-b6055e4908fc-operator-scripts\") pod \"nova-cell1-db-create-jtd4s\" (UID: \"d9d5f400-fe07-4f0f-ae45-b6055e4908fc\") " pod="openstack/nova-cell1-db-create-jtd4s" Nov 29 07:27:42 crc kubenswrapper[4731]: I1129 07:27:42.372614 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dww5z\" (UniqueName: \"kubernetes.io/projected/d9d5f400-fe07-4f0f-ae45-b6055e4908fc-kube-api-access-dww5z\") pod \"nova-cell1-db-create-jtd4s\" (UID: \"d9d5f400-fe07-4f0f-ae45-b6055e4908fc\") " pod="openstack/nova-cell1-db-create-jtd4s" Nov 29 07:27:42 crc kubenswrapper[4731]: I1129 07:27:42.372718 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94hjl\" (UniqueName: \"kubernetes.io/projected/ec906d2b-9805-4e9b-8273-80a3488c76e5-kube-api-access-94hjl\") pod \"nova-api-e0d1-account-create-update-7ppfv\" (UID: \"ec906d2b-9805-4e9b-8273-80a3488c76e5\") " pod="openstack/nova-api-e0d1-account-create-update-7ppfv" Nov 29 07:27:42 crc kubenswrapper[4731]: I1129 07:27:42.372790 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec906d2b-9805-4e9b-8273-80a3488c76e5-operator-scripts\") pod \"nova-api-e0d1-account-create-update-7ppfv\" (UID: \"ec906d2b-9805-4e9b-8273-80a3488c76e5\") " pod="openstack/nova-api-e0d1-account-create-update-7ppfv" Nov 29 07:27:42 crc kubenswrapper[4731]: I1129 07:27:42.375210 4731 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec906d2b-9805-4e9b-8273-80a3488c76e5-operator-scripts\") pod \"nova-api-e0d1-account-create-update-7ppfv\" (UID: \"ec906d2b-9805-4e9b-8273-80a3488c76e5\") " pod="openstack/nova-api-e0d1-account-create-update-7ppfv" Nov 29 07:27:42 crc kubenswrapper[4731]: I1129 07:27:42.377585 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-76xc2" Nov 29 07:27:42 crc kubenswrapper[4731]: I1129 07:27:42.380456 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-c9a4-account-create-update-dwcqz"] Nov 29 07:27:42 crc kubenswrapper[4731]: I1129 07:27:42.402782 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94hjl\" (UniqueName: \"kubernetes.io/projected/ec906d2b-9805-4e9b-8273-80a3488c76e5-kube-api-access-94hjl\") pod \"nova-api-e0d1-account-create-update-7ppfv\" (UID: \"ec906d2b-9805-4e9b-8273-80a3488c76e5\") " pod="openstack/nova-api-e0d1-account-create-update-7ppfv" Nov 29 07:27:42 crc kubenswrapper[4731]: I1129 07:27:42.462669 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-e0d1-account-create-update-7ppfv" Nov 29 07:27:42 crc kubenswrapper[4731]: I1129 07:27:42.475460 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6p9n\" (UniqueName: \"kubernetes.io/projected/37f74f3d-e81b-445f-b4df-09f17e389b52-kube-api-access-r6p9n\") pod \"nova-cell0-c9a4-account-create-update-dwcqz\" (UID: \"37f74f3d-e81b-445f-b4df-09f17e389b52\") " pod="openstack/nova-cell0-c9a4-account-create-update-dwcqz" Nov 29 07:27:42 crc kubenswrapper[4731]: I1129 07:27:42.475907 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d9d5f400-fe07-4f0f-ae45-b6055e4908fc-operator-scripts\") pod \"nova-cell1-db-create-jtd4s\" (UID: \"d9d5f400-fe07-4f0f-ae45-b6055e4908fc\") " pod="openstack/nova-cell1-db-create-jtd4s" Nov 29 07:27:42 crc kubenswrapper[4731]: I1129 07:27:42.475957 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dww5z\" (UniqueName: \"kubernetes.io/projected/d9d5f400-fe07-4f0f-ae45-b6055e4908fc-kube-api-access-dww5z\") pod \"nova-cell1-db-create-jtd4s\" (UID: \"d9d5f400-fe07-4f0f-ae45-b6055e4908fc\") " pod="openstack/nova-cell1-db-create-jtd4s" Nov 29 07:27:42 crc kubenswrapper[4731]: I1129 07:27:42.475992 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/37f74f3d-e81b-445f-b4df-09f17e389b52-operator-scripts\") pod \"nova-cell0-c9a4-account-create-update-dwcqz\" (UID: \"37f74f3d-e81b-445f-b4df-09f17e389b52\") " pod="openstack/nova-cell0-c9a4-account-create-update-dwcqz" Nov 29 07:27:42 crc kubenswrapper[4731]: I1129 07:27:42.478116 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/d9d5f400-fe07-4f0f-ae45-b6055e4908fc-operator-scripts\") pod \"nova-cell1-db-create-jtd4s\" (UID: \"d9d5f400-fe07-4f0f-ae45-b6055e4908fc\") " pod="openstack/nova-cell1-db-create-jtd4s" Nov 29 07:27:42 crc kubenswrapper[4731]: I1129 07:27:42.534546 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-2711-account-create-update-b2w2h"] Nov 29 07:27:42 crc kubenswrapper[4731]: I1129 07:27:42.535623 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dww5z\" (UniqueName: \"kubernetes.io/projected/d9d5f400-fe07-4f0f-ae45-b6055e4908fc-kube-api-access-dww5z\") pod \"nova-cell1-db-create-jtd4s\" (UID: \"d9d5f400-fe07-4f0f-ae45-b6055e4908fc\") " pod="openstack/nova-cell1-db-create-jtd4s" Nov 29 07:27:42 crc kubenswrapper[4731]: I1129 07:27:42.536761 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-2711-account-create-update-b2w2h" Nov 29 07:27:42 crc kubenswrapper[4731]: I1129 07:27:42.542540 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Nov 29 07:27:42 crc kubenswrapper[4731]: I1129 07:27:42.572079 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-2711-account-create-update-b2w2h"] Nov 29 07:27:42 crc kubenswrapper[4731]: I1129 07:27:42.575594 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-jtd4s" Nov 29 07:27:42 crc kubenswrapper[4731]: I1129 07:27:42.578747 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/37f74f3d-e81b-445f-b4df-09f17e389b52-operator-scripts\") pod \"nova-cell0-c9a4-account-create-update-dwcqz\" (UID: \"37f74f3d-e81b-445f-b4df-09f17e389b52\") " pod="openstack/nova-cell0-c9a4-account-create-update-dwcqz" Nov 29 07:27:42 crc kubenswrapper[4731]: I1129 07:27:42.578844 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6p9n\" (UniqueName: \"kubernetes.io/projected/37f74f3d-e81b-445f-b4df-09f17e389b52-kube-api-access-r6p9n\") pod \"nova-cell0-c9a4-account-create-update-dwcqz\" (UID: \"37f74f3d-e81b-445f-b4df-09f17e389b52\") " pod="openstack/nova-cell0-c9a4-account-create-update-dwcqz" Nov 29 07:27:42 crc kubenswrapper[4731]: I1129 07:27:42.580727 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/37f74f3d-e81b-445f-b4df-09f17e389b52-operator-scripts\") pod \"nova-cell0-c9a4-account-create-update-dwcqz\" (UID: \"37f74f3d-e81b-445f-b4df-09f17e389b52\") " pod="openstack/nova-cell0-c9a4-account-create-update-dwcqz" Nov 29 07:27:42 crc kubenswrapper[4731]: I1129 07:27:42.610875 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6p9n\" (UniqueName: \"kubernetes.io/projected/37f74f3d-e81b-445f-b4df-09f17e389b52-kube-api-access-r6p9n\") pod \"nova-cell0-c9a4-account-create-update-dwcqz\" (UID: \"37f74f3d-e81b-445f-b4df-09f17e389b52\") " pod="openstack/nova-cell0-c9a4-account-create-update-dwcqz" Nov 29 07:27:42 crc kubenswrapper[4731]: I1129 07:27:42.652169 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-c9a4-account-create-update-dwcqz" Nov 29 07:27:42 crc kubenswrapper[4731]: I1129 07:27:42.681886 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5720290-7d84-4d00-bf6a-8665ccc9cd09-operator-scripts\") pod \"nova-cell1-2711-account-create-update-b2w2h\" (UID: \"e5720290-7d84-4d00-bf6a-8665ccc9cd09\") " pod="openstack/nova-cell1-2711-account-create-update-b2w2h" Nov 29 07:27:42 crc kubenswrapper[4731]: I1129 07:27:42.682176 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-222cv\" (UniqueName: \"kubernetes.io/projected/e5720290-7d84-4d00-bf6a-8665ccc9cd09-kube-api-access-222cv\") pod \"nova-cell1-2711-account-create-update-b2w2h\" (UID: \"e5720290-7d84-4d00-bf6a-8665ccc9cd09\") " pod="openstack/nova-cell1-2711-account-create-update-b2w2h" Nov 29 07:27:42 crc kubenswrapper[4731]: I1129 07:27:42.784680 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5720290-7d84-4d00-bf6a-8665ccc9cd09-operator-scripts\") pod \"nova-cell1-2711-account-create-update-b2w2h\" (UID: \"e5720290-7d84-4d00-bf6a-8665ccc9cd09\") " pod="openstack/nova-cell1-2711-account-create-update-b2w2h" Nov 29 07:27:42 crc kubenswrapper[4731]: I1129 07:27:42.785218 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-222cv\" (UniqueName: \"kubernetes.io/projected/e5720290-7d84-4d00-bf6a-8665ccc9cd09-kube-api-access-222cv\") pod \"nova-cell1-2711-account-create-update-b2w2h\" (UID: \"e5720290-7d84-4d00-bf6a-8665ccc9cd09\") " pod="openstack/nova-cell1-2711-account-create-update-b2w2h" Nov 29 07:27:42 crc kubenswrapper[4731]: I1129 07:27:42.786785 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/e5720290-7d84-4d00-bf6a-8665ccc9cd09-operator-scripts\") pod \"nova-cell1-2711-account-create-update-b2w2h\" (UID: \"e5720290-7d84-4d00-bf6a-8665ccc9cd09\") " pod="openstack/nova-cell1-2711-account-create-update-b2w2h" Nov 29 07:27:42 crc kubenswrapper[4731]: I1129 07:27:42.826553 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-222cv\" (UniqueName: \"kubernetes.io/projected/e5720290-7d84-4d00-bf6a-8665ccc9cd09-kube-api-access-222cv\") pod \"nova-cell1-2711-account-create-update-b2w2h\" (UID: \"e5720290-7d84-4d00-bf6a-8665ccc9cd09\") " pod="openstack/nova-cell1-2711-account-create-update-b2w2h" Nov 29 07:27:43 crc kubenswrapper[4731]: I1129 07:27:43.005918 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-2711-account-create-update-b2w2h" Nov 29 07:27:43 crc kubenswrapper[4731]: I1129 07:27:43.056221 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-e0d1-account-create-update-7ppfv"] Nov 29 07:27:43 crc kubenswrapper[4731]: I1129 07:27:43.089071 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-6mk6z"] Nov 29 07:27:43 crc kubenswrapper[4731]: W1129 07:27:43.126402 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc7590e82_ed2d_42d4_ae30_b581dc4517b9.slice/crio-1684faab425888d07ca8dbb9df3ebf53d3750d491519987281d6535e4c1270a3 WatchSource:0}: Error finding container 1684faab425888d07ca8dbb9df3ebf53d3750d491519987281d6535e4c1270a3: Status 404 returned error can't find the container with id 1684faab425888d07ca8dbb9df3ebf53d3750d491519987281d6535e4c1270a3 Nov 29 07:27:43 crc kubenswrapper[4731]: I1129 07:27:43.148480 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-76xc2"] Nov 29 07:27:43 crc kubenswrapper[4731]: I1129 07:27:43.273884 4731 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-jtd4s"] Nov 29 07:27:43 crc kubenswrapper[4731]: I1129 07:27:43.538661 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 29 07:27:43 crc kubenswrapper[4731]: I1129 07:27:43.538708 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 29 07:27:43 crc kubenswrapper[4731]: I1129 07:27:43.589824 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-c9a4-account-create-update-dwcqz"] Nov 29 07:27:43 crc kubenswrapper[4731]: I1129 07:27:43.659480 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 29 07:27:43 crc kubenswrapper[4731]: I1129 07:27:43.703599 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 29 07:27:44 crc kubenswrapper[4731]: I1129 07:27:44.036523 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-2711-account-create-update-b2w2h"] Nov 29 07:27:44 crc kubenswrapper[4731]: W1129 07:27:44.037097 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode5720290_7d84_4d00_bf6a_8665ccc9cd09.slice/crio-dd4900b18cda2999a01219bee28f383681a0ddcb98117cb69840639f12e83b13 WatchSource:0}: Error finding container dd4900b18cda2999a01219bee28f383681a0ddcb98117cb69840639f12e83b13: Status 404 returned error can't find the container with id dd4900b18cda2999a01219bee28f383681a0ddcb98117cb69840639f12e83b13 Nov 29 07:27:44 crc kubenswrapper[4731]: I1129 07:27:44.102833 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-jtd4s" 
event={"ID":"d9d5f400-fe07-4f0f-ae45-b6055e4908fc","Type":"ContainerStarted","Data":"e58d7c9fb24bf46e14a38d993c8837fa703d0460967ab94ec3e49b5dc1bd550a"} Nov 29 07:27:44 crc kubenswrapper[4731]: I1129 07:27:44.104470 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-c9a4-account-create-update-dwcqz" event={"ID":"37f74f3d-e81b-445f-b4df-09f17e389b52","Type":"ContainerStarted","Data":"58c0b766c1a559f99133c9a92a4936a42cd0c9af8dd4c53440138b90d520141a"} Nov 29 07:27:44 crc kubenswrapper[4731]: I1129 07:27:44.105780 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-2711-account-create-update-b2w2h" event={"ID":"e5720290-7d84-4d00-bf6a-8665ccc9cd09","Type":"ContainerStarted","Data":"dd4900b18cda2999a01219bee28f383681a0ddcb98117cb69840639f12e83b13"} Nov 29 07:27:44 crc kubenswrapper[4731]: I1129 07:27:44.113607 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-76xc2" event={"ID":"aea23e96-8b0c-413c-9240-80f8ecd2af01","Type":"ContainerStarted","Data":"ef0ca46537a145f02f24cd242cabc45acaf029a80f5b8961b3fa4a112fe23a9d"} Nov 29 07:27:44 crc kubenswrapper[4731]: I1129 07:27:44.113675 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-76xc2" event={"ID":"aea23e96-8b0c-413c-9240-80f8ecd2af01","Type":"ContainerStarted","Data":"c4e31e0d52188af6338662841c9fdae79c68517603ac1c722e18d74b199bbf5a"} Nov 29 07:27:44 crc kubenswrapper[4731]: I1129 07:27:44.138989 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-e0d1-account-create-update-7ppfv" event={"ID":"ec906d2b-9805-4e9b-8273-80a3488c76e5","Type":"ContainerStarted","Data":"cbc59b9d53c41b00bb5058b86e04f88cdae10a258658edd5f403067b3beff8c6"} Nov 29 07:27:44 crc kubenswrapper[4731]: I1129 07:27:44.139066 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-e0d1-account-create-update-7ppfv" 
event={"ID":"ec906d2b-9805-4e9b-8273-80a3488c76e5","Type":"ContainerStarted","Data":"e54364f5c9cc949d925bde2121b802654e3cff6550525f0e22feff50a494da90"} Nov 29 07:27:44 crc kubenswrapper[4731]: I1129 07:27:44.140048 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-76xc2" podStartSLOduration=3.140018227 podStartE2EDuration="3.140018227s" podCreationTimestamp="2025-11-29 07:27:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:27:44.133659464 +0000 UTC m=+1303.024020607" watchObservedRunningTime="2025-11-29 07:27:44.140018227 +0000 UTC m=+1303.030379330" Nov 29 07:27:44 crc kubenswrapper[4731]: I1129 07:27:44.151054 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-6mk6z" event={"ID":"c7590e82-ed2d-42d4-ae30-b581dc4517b9","Type":"ContainerStarted","Data":"1d7a432469f2d12aa10d06a0b82a91292de909947f98fcb3665a73bfbea52bf5"} Nov 29 07:27:44 crc kubenswrapper[4731]: I1129 07:27:44.151381 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-6mk6z" event={"ID":"c7590e82-ed2d-42d4-ae30-b581dc4517b9","Type":"ContainerStarted","Data":"1684faab425888d07ca8dbb9df3ebf53d3750d491519987281d6535e4c1270a3"} Nov 29 07:27:44 crc kubenswrapper[4731]: I1129 07:27:44.151486 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 29 07:27:44 crc kubenswrapper[4731]: I1129 07:27:44.152073 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 29 07:27:44 crc kubenswrapper[4731]: I1129 07:27:44.152753 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 29 07:27:44 crc kubenswrapper[4731]: I1129 07:27:44.152837 4731 prober_manager.go:312] "Failed to trigger a manual 
run" probe="Readiness" Nov 29 07:27:44 crc kubenswrapper[4731]: I1129 07:27:44.167723 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-e0d1-account-create-update-7ppfv" podStartSLOduration=2.167697283 podStartE2EDuration="2.167697283s" podCreationTimestamp="2025-11-29 07:27:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:27:44.159105136 +0000 UTC m=+1303.049466239" watchObservedRunningTime="2025-11-29 07:27:44.167697283 +0000 UTC m=+1303.058058386" Nov 29 07:27:44 crc kubenswrapper[4731]: I1129 07:27:44.200111 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-6mk6z" podStartSLOduration=3.200086334 podStartE2EDuration="3.200086334s" podCreationTimestamp="2025-11-29 07:27:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:27:44.195652817 +0000 UTC m=+1303.086013920" watchObservedRunningTime="2025-11-29 07:27:44.200086334 +0000 UTC m=+1303.090447437" Nov 29 07:27:44 crc kubenswrapper[4731]: I1129 07:27:44.491594 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 29 07:27:44 crc kubenswrapper[4731]: E1129 07:27:44.850857 4731 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podad9b3a1d_2698_405e_b94a_45d96efd0400.slice/crio-2ce9effe3d3eb311109fc98cae51a9f7136c2928a5032c0de973c7a0b18d1511\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc7590e82_ed2d_42d4_ae30_b581dc4517b9.slice/crio-conmon-1d7a432469f2d12aa10d06a0b82a91292de909947f98fcb3665a73bfbea52bf5.scope\": RecentStats: unable to find data in 
memory cache]" Nov 29 07:27:45 crc kubenswrapper[4731]: I1129 07:27:45.175722 4731 generic.go:334] "Generic (PLEG): container finished" podID="d9d5f400-fe07-4f0f-ae45-b6055e4908fc" containerID="674f7f4b2e738914d0a6f19b7026f8bfdf2616bd8b47ab5718a9e55b0f65f98d" exitCode=0 Nov 29 07:27:45 crc kubenswrapper[4731]: I1129 07:27:45.176126 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-jtd4s" event={"ID":"d9d5f400-fe07-4f0f-ae45-b6055e4908fc","Type":"ContainerDied","Data":"674f7f4b2e738914d0a6f19b7026f8bfdf2616bd8b47ab5718a9e55b0f65f98d"} Nov 29 07:27:45 crc kubenswrapper[4731]: I1129 07:27:45.180876 4731 generic.go:334] "Generic (PLEG): container finished" podID="37f74f3d-e81b-445f-b4df-09f17e389b52" containerID="c0ce0ef79c86515907bc3adb596fadd91c5c5e5faa7e2b35ef9594cebf198ff0" exitCode=0 Nov 29 07:27:45 crc kubenswrapper[4731]: I1129 07:27:45.181262 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-c9a4-account-create-update-dwcqz" event={"ID":"37f74f3d-e81b-445f-b4df-09f17e389b52","Type":"ContainerDied","Data":"c0ce0ef79c86515907bc3adb596fadd91c5c5e5faa7e2b35ef9594cebf198ff0"} Nov 29 07:27:45 crc kubenswrapper[4731]: I1129 07:27:45.184759 4731 generic.go:334] "Generic (PLEG): container finished" podID="e5720290-7d84-4d00-bf6a-8665ccc9cd09" containerID="26fe4eb63f9dca14c377d6ff5b4f8ccebd002db978d870dbce923a44b8d8f98e" exitCode=0 Nov 29 07:27:45 crc kubenswrapper[4731]: I1129 07:27:45.184855 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-2711-account-create-update-b2w2h" event={"ID":"e5720290-7d84-4d00-bf6a-8665ccc9cd09","Type":"ContainerDied","Data":"26fe4eb63f9dca14c377d6ff5b4f8ccebd002db978d870dbce923a44b8d8f98e"} Nov 29 07:27:45 crc kubenswrapper[4731]: I1129 07:27:45.188681 4731 generic.go:334] "Generic (PLEG): container finished" podID="aea23e96-8b0c-413c-9240-80f8ecd2af01" containerID="ef0ca46537a145f02f24cd242cabc45acaf029a80f5b8961b3fa4a112fe23a9d" 
exitCode=0 Nov 29 07:27:45 crc kubenswrapper[4731]: I1129 07:27:45.188790 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-76xc2" event={"ID":"aea23e96-8b0c-413c-9240-80f8ecd2af01","Type":"ContainerDied","Data":"ef0ca46537a145f02f24cd242cabc45acaf029a80f5b8961b3fa4a112fe23a9d"} Nov 29 07:27:45 crc kubenswrapper[4731]: I1129 07:27:45.197833 4731 generic.go:334] "Generic (PLEG): container finished" podID="7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22" containerID="0d7b4d0fe407ac9b1a830786907ac3debfc24d42ea47c970f99fbc9b7a2a8451" exitCode=0 Nov 29 07:27:45 crc kubenswrapper[4731]: I1129 07:27:45.197946 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22","Type":"ContainerDied","Data":"0d7b4d0fe407ac9b1a830786907ac3debfc24d42ea47c970f99fbc9b7a2a8451"} Nov 29 07:27:45 crc kubenswrapper[4731]: I1129 07:27:45.202671 4731 generic.go:334] "Generic (PLEG): container finished" podID="ec906d2b-9805-4e9b-8273-80a3488c76e5" containerID="cbc59b9d53c41b00bb5058b86e04f88cdae10a258658edd5f403067b3beff8c6" exitCode=0 Nov 29 07:27:45 crc kubenswrapper[4731]: I1129 07:27:45.202780 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-e0d1-account-create-update-7ppfv" event={"ID":"ec906d2b-9805-4e9b-8273-80a3488c76e5","Type":"ContainerDied","Data":"cbc59b9d53c41b00bb5058b86e04f88cdae10a258658edd5f403067b3beff8c6"} Nov 29 07:27:45 crc kubenswrapper[4731]: I1129 07:27:45.205624 4731 generic.go:334] "Generic (PLEG): container finished" podID="c7590e82-ed2d-42d4-ae30-b581dc4517b9" containerID="1d7a432469f2d12aa10d06a0b82a91292de909947f98fcb3665a73bfbea52bf5" exitCode=0 Nov 29 07:27:45 crc kubenswrapper[4731]: I1129 07:27:45.207023 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-6mk6z" 
event={"ID":"c7590e82-ed2d-42d4-ae30-b581dc4517b9","Type":"ContainerDied","Data":"1d7a432469f2d12aa10d06a0b82a91292de909947f98fcb3665a73bfbea52bf5"} Nov 29 07:27:45 crc kubenswrapper[4731]: I1129 07:27:45.579841 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 29 07:27:45 crc kubenswrapper[4731]: I1129 07:27:45.676463 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22-log-httpd\") pod \"7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22\" (UID: \"7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22\") " Nov 29 07:27:45 crc kubenswrapper[4731]: I1129 07:27:45.676890 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22-ceilometer-tls-certs\") pod \"7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22\" (UID: \"7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22\") " Nov 29 07:27:45 crc kubenswrapper[4731]: I1129 07:27:45.677044 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22-sg-core-conf-yaml\") pod \"7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22\" (UID: \"7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22\") " Nov 29 07:27:45 crc kubenswrapper[4731]: I1129 07:27:45.677161 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22-config-data\") pod \"7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22\" (UID: \"7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22\") " Nov 29 07:27:45 crc kubenswrapper[4731]: I1129 07:27:45.677422 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22-combined-ca-bundle\") pod 
\"7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22\" (UID: \"7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22\") " Nov 29 07:27:45 crc kubenswrapper[4731]: I1129 07:27:45.677544 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gdwfw\" (UniqueName: \"kubernetes.io/projected/7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22-kube-api-access-gdwfw\") pod \"7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22\" (UID: \"7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22\") " Nov 29 07:27:45 crc kubenswrapper[4731]: I1129 07:27:45.678167 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22-run-httpd\") pod \"7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22\" (UID: \"7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22\") " Nov 29 07:27:45 crc kubenswrapper[4731]: I1129 07:27:45.678310 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22-scripts\") pod \"7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22\" (UID: \"7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22\") " Nov 29 07:27:45 crc kubenswrapper[4731]: I1129 07:27:45.677444 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22" (UID: "7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:27:45 crc kubenswrapper[4731]: I1129 07:27:45.678894 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22" (UID: "7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:27:45 crc kubenswrapper[4731]: I1129 07:27:45.679541 4731 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 29 07:27:45 crc kubenswrapper[4731]: I1129 07:27:45.680194 4731 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 29 07:27:45 crc kubenswrapper[4731]: I1129 07:27:45.685642 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22-scripts" (OuterVolumeSpecName: "scripts") pod "7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22" (UID: "7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:27:45 crc kubenswrapper[4731]: I1129 07:27:45.687673 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22-kube-api-access-gdwfw" (OuterVolumeSpecName: "kube-api-access-gdwfw") pod "7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22" (UID: "7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22"). InnerVolumeSpecName "kube-api-access-gdwfw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:27:45 crc kubenswrapper[4731]: I1129 07:27:45.735031 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22" (UID: "7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:27:45 crc kubenswrapper[4731]: I1129 07:27:45.755719 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22" (UID: "7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:27:45 crc kubenswrapper[4731]: I1129 07:27:45.783285 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gdwfw\" (UniqueName: \"kubernetes.io/projected/7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22-kube-api-access-gdwfw\") on node \"crc\" DevicePath \"\"" Nov 29 07:27:45 crc kubenswrapper[4731]: I1129 07:27:45.783328 4731 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:27:45 crc kubenswrapper[4731]: I1129 07:27:45.783343 4731 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 29 07:27:45 crc kubenswrapper[4731]: I1129 07:27:45.783354 4731 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 29 07:27:45 crc kubenswrapper[4731]: I1129 07:27:45.925273 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22" (UID: "7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:27:45 crc kubenswrapper[4731]: I1129 07:27:45.988087 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22-config-data" (OuterVolumeSpecName: "config-data") pod "7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22" (UID: "7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:27:46 crc kubenswrapper[4731]: I1129 07:27:46.014772 4731 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:27:46 crc kubenswrapper[4731]: I1129 07:27:46.014812 4731 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:27:46 crc kubenswrapper[4731]: I1129 07:27:46.220796 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22","Type":"ContainerDied","Data":"11940e6f80732a01290a816c7a92bbac94d142d61337ab774af6b9518f8884f2"} Nov 29 07:27:46 crc kubenswrapper[4731]: I1129 07:27:46.221269 4731 scope.go:117] "RemoveContainer" containerID="c1d451b833cf4221fc83c1b215a248edd303981ea8fa4858d669450d2967832b" Nov 29 07:27:46 crc kubenswrapper[4731]: I1129 07:27:46.220977 4731 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 29 07:27:46 crc kubenswrapper[4731]: I1129 07:27:46.221340 4731 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 29 07:27:46 crc kubenswrapper[4731]: I1129 07:27:46.221055 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 29 07:27:46 crc kubenswrapper[4731]: I1129 07:27:46.277930 4731 scope.go:117] "RemoveContainer" containerID="0f5c3b628685fc8e9cf87e4e1a54c03e774cff8ca937311142cbcd92661921c9" Nov 29 07:27:46 crc kubenswrapper[4731]: I1129 07:27:46.281665 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:27:46 crc kubenswrapper[4731]: I1129 07:27:46.315650 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:27:46 crc kubenswrapper[4731]: I1129 07:27:46.334320 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:27:46 crc kubenswrapper[4731]: E1129 07:27:46.335044 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22" containerName="proxy-httpd" Nov 29 07:27:46 crc kubenswrapper[4731]: I1129 07:27:46.335088 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22" containerName="proxy-httpd" Nov 29 07:27:46 crc kubenswrapper[4731]: E1129 07:27:46.335119 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22" containerName="ceilometer-notification-agent" Nov 29 07:27:46 crc kubenswrapper[4731]: I1129 07:27:46.335126 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22" containerName="ceilometer-notification-agent" Nov 29 07:27:46 crc kubenswrapper[4731]: E1129 07:27:46.335143 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22" containerName="ceilometer-central-agent" Nov 29 07:27:46 crc kubenswrapper[4731]: I1129 07:27:46.335150 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22" containerName="ceilometer-central-agent" Nov 29 07:27:46 crc kubenswrapper[4731]: E1129 07:27:46.335168 4731 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22" containerName="sg-core" Nov 29 07:27:46 crc kubenswrapper[4731]: I1129 07:27:46.335175 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22" containerName="sg-core" Nov 29 07:27:46 crc kubenswrapper[4731]: I1129 07:27:46.335454 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22" containerName="sg-core" Nov 29 07:27:46 crc kubenswrapper[4731]: I1129 07:27:46.335470 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22" containerName="ceilometer-central-agent" Nov 29 07:27:46 crc kubenswrapper[4731]: I1129 07:27:46.335489 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22" containerName="ceilometer-notification-agent" Nov 29 07:27:46 crc kubenswrapper[4731]: I1129 07:27:46.335502 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22" containerName="proxy-httpd" Nov 29 07:27:46 crc kubenswrapper[4731]: I1129 07:27:46.337798 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 29 07:27:46 crc kubenswrapper[4731]: I1129 07:27:46.347241 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 29 07:27:46 crc kubenswrapper[4731]: I1129 07:27:46.349083 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 29 07:27:46 crc kubenswrapper[4731]: I1129 07:27:46.349611 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 29 07:27:46 crc kubenswrapper[4731]: I1129 07:27:46.356875 4731 scope.go:117] "RemoveContainer" containerID="7b6e9aedb8b87111e0a2d026a3b512a7a25ad929e6655c3c5ef50c8956795a7a" Nov 29 07:27:46 crc kubenswrapper[4731]: I1129 07:27:46.373745 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:27:46 crc kubenswrapper[4731]: I1129 07:27:46.441816 4731 scope.go:117] "RemoveContainer" containerID="0d7b4d0fe407ac9b1a830786907ac3debfc24d42ea47c970f99fbc9b7a2a8451" Nov 29 07:27:46 crc kubenswrapper[4731]: I1129 07:27:46.530410 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a91a845e-a032-4109-91db-3ac60a4dc1a3-log-httpd\") pod \"ceilometer-0\" (UID: \"a91a845e-a032-4109-91db-3ac60a4dc1a3\") " pod="openstack/ceilometer-0" Nov 29 07:27:46 crc kubenswrapper[4731]: I1129 07:27:46.531559 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a91a845e-a032-4109-91db-3ac60a4dc1a3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a91a845e-a032-4109-91db-3ac60a4dc1a3\") " pod="openstack/ceilometer-0" Nov 29 07:27:46 crc kubenswrapper[4731]: I1129 07:27:46.531733 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/a91a845e-a032-4109-91db-3ac60a4dc1a3-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a91a845e-a032-4109-91db-3ac60a4dc1a3\") " pod="openstack/ceilometer-0" Nov 29 07:27:46 crc kubenswrapper[4731]: I1129 07:27:46.531965 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a91a845e-a032-4109-91db-3ac60a4dc1a3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a91a845e-a032-4109-91db-3ac60a4dc1a3\") " pod="openstack/ceilometer-0" Nov 29 07:27:46 crc kubenswrapper[4731]: I1129 07:27:46.532126 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a91a845e-a032-4109-91db-3ac60a4dc1a3-config-data\") pod \"ceilometer-0\" (UID: \"a91a845e-a032-4109-91db-3ac60a4dc1a3\") " pod="openstack/ceilometer-0" Nov 29 07:27:46 crc kubenswrapper[4731]: I1129 07:27:46.532159 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a91a845e-a032-4109-91db-3ac60a4dc1a3-scripts\") pod \"ceilometer-0\" (UID: \"a91a845e-a032-4109-91db-3ac60a4dc1a3\") " pod="openstack/ceilometer-0" Nov 29 07:27:46 crc kubenswrapper[4731]: I1129 07:27:46.532224 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvsjb\" (UniqueName: \"kubernetes.io/projected/a91a845e-a032-4109-91db-3ac60a4dc1a3-kube-api-access-zvsjb\") pod \"ceilometer-0\" (UID: \"a91a845e-a032-4109-91db-3ac60a4dc1a3\") " pod="openstack/ceilometer-0" Nov 29 07:27:46 crc kubenswrapper[4731]: I1129 07:27:46.532284 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a91a845e-a032-4109-91db-3ac60a4dc1a3-run-httpd\") pod \"ceilometer-0\" (UID: 
\"a91a845e-a032-4109-91db-3ac60a4dc1a3\") " pod="openstack/ceilometer-0" Nov 29 07:27:46 crc kubenswrapper[4731]: I1129 07:27:46.634916 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a91a845e-a032-4109-91db-3ac60a4dc1a3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a91a845e-a032-4109-91db-3ac60a4dc1a3\") " pod="openstack/ceilometer-0" Nov 29 07:27:46 crc kubenswrapper[4731]: I1129 07:27:46.635036 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a91a845e-a032-4109-91db-3ac60a4dc1a3-config-data\") pod \"ceilometer-0\" (UID: \"a91a845e-a032-4109-91db-3ac60a4dc1a3\") " pod="openstack/ceilometer-0" Nov 29 07:27:46 crc kubenswrapper[4731]: I1129 07:27:46.635075 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a91a845e-a032-4109-91db-3ac60a4dc1a3-scripts\") pod \"ceilometer-0\" (UID: \"a91a845e-a032-4109-91db-3ac60a4dc1a3\") " pod="openstack/ceilometer-0" Nov 29 07:27:46 crc kubenswrapper[4731]: I1129 07:27:46.635107 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvsjb\" (UniqueName: \"kubernetes.io/projected/a91a845e-a032-4109-91db-3ac60a4dc1a3-kube-api-access-zvsjb\") pod \"ceilometer-0\" (UID: \"a91a845e-a032-4109-91db-3ac60a4dc1a3\") " pod="openstack/ceilometer-0" Nov 29 07:27:46 crc kubenswrapper[4731]: I1129 07:27:46.635394 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a91a845e-a032-4109-91db-3ac60a4dc1a3-run-httpd\") pod \"ceilometer-0\" (UID: \"a91a845e-a032-4109-91db-3ac60a4dc1a3\") " pod="openstack/ceilometer-0" Nov 29 07:27:46 crc kubenswrapper[4731]: I1129 07:27:46.635487 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/a91a845e-a032-4109-91db-3ac60a4dc1a3-log-httpd\") pod \"ceilometer-0\" (UID: \"a91a845e-a032-4109-91db-3ac60a4dc1a3\") " pod="openstack/ceilometer-0" Nov 29 07:27:46 crc kubenswrapper[4731]: I1129 07:27:46.635529 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a91a845e-a032-4109-91db-3ac60a4dc1a3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a91a845e-a032-4109-91db-3ac60a4dc1a3\") " pod="openstack/ceilometer-0" Nov 29 07:27:46 crc kubenswrapper[4731]: I1129 07:27:46.635612 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a91a845e-a032-4109-91db-3ac60a4dc1a3-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a91a845e-a032-4109-91db-3ac60a4dc1a3\") " pod="openstack/ceilometer-0" Nov 29 07:27:46 crc kubenswrapper[4731]: I1129 07:27:46.638965 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a91a845e-a032-4109-91db-3ac60a4dc1a3-log-httpd\") pod \"ceilometer-0\" (UID: \"a91a845e-a032-4109-91db-3ac60a4dc1a3\") " pod="openstack/ceilometer-0" Nov 29 07:27:46 crc kubenswrapper[4731]: I1129 07:27:46.639464 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a91a845e-a032-4109-91db-3ac60a4dc1a3-run-httpd\") pod \"ceilometer-0\" (UID: \"a91a845e-a032-4109-91db-3ac60a4dc1a3\") " pod="openstack/ceilometer-0" Nov 29 07:27:46 crc kubenswrapper[4731]: I1129 07:27:46.646309 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a91a845e-a032-4109-91db-3ac60a4dc1a3-scripts\") pod \"ceilometer-0\" (UID: \"a91a845e-a032-4109-91db-3ac60a4dc1a3\") " pod="openstack/ceilometer-0" Nov 29 07:27:46 crc kubenswrapper[4731]: I1129 07:27:46.657120 4731 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvsjb\" (UniqueName: \"kubernetes.io/projected/a91a845e-a032-4109-91db-3ac60a4dc1a3-kube-api-access-zvsjb\") pod \"ceilometer-0\" (UID: \"a91a845e-a032-4109-91db-3ac60a4dc1a3\") " pod="openstack/ceilometer-0" Nov 29 07:27:46 crc kubenswrapper[4731]: I1129 07:27:46.657511 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a91a845e-a032-4109-91db-3ac60a4dc1a3-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a91a845e-a032-4109-91db-3ac60a4dc1a3\") " pod="openstack/ceilometer-0" Nov 29 07:27:46 crc kubenswrapper[4731]: I1129 07:27:46.657527 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a91a845e-a032-4109-91db-3ac60a4dc1a3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a91a845e-a032-4109-91db-3ac60a4dc1a3\") " pod="openstack/ceilometer-0" Nov 29 07:27:46 crc kubenswrapper[4731]: I1129 07:27:46.658083 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a91a845e-a032-4109-91db-3ac60a4dc1a3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a91a845e-a032-4109-91db-3ac60a4dc1a3\") " pod="openstack/ceilometer-0" Nov 29 07:27:46 crc kubenswrapper[4731]: I1129 07:27:46.660862 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a91a845e-a032-4109-91db-3ac60a4dc1a3-config-data\") pod \"ceilometer-0\" (UID: \"a91a845e-a032-4109-91db-3ac60a4dc1a3\") " pod="openstack/ceilometer-0" Nov 29 07:27:46 crc kubenswrapper[4731]: I1129 07:27:46.669959 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 29 07:27:46 crc kubenswrapper[4731]: I1129 07:27:46.719987 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 29 07:27:46 crc kubenswrapper[4731]: I1129 07:27:46.721160 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 29 07:27:46 crc kubenswrapper[4731]: I1129 07:27:46.863748 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-e0d1-account-create-update-7ppfv" Nov 29 07:27:46 crc kubenswrapper[4731]: I1129 07:27:46.866934 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-2711-account-create-update-b2w2h" Nov 29 07:27:47 crc kubenswrapper[4731]: I1129 07:27:47.047250 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94hjl\" (UniqueName: \"kubernetes.io/projected/ec906d2b-9805-4e9b-8273-80a3488c76e5-kube-api-access-94hjl\") pod \"ec906d2b-9805-4e9b-8273-80a3488c76e5\" (UID: \"ec906d2b-9805-4e9b-8273-80a3488c76e5\") " Nov 29 07:27:47 crc kubenswrapper[4731]: I1129 07:27:47.047691 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5720290-7d84-4d00-bf6a-8665ccc9cd09-operator-scripts\") pod \"e5720290-7d84-4d00-bf6a-8665ccc9cd09\" (UID: \"e5720290-7d84-4d00-bf6a-8665ccc9cd09\") " Nov 29 07:27:47 crc kubenswrapper[4731]: I1129 07:27:47.047823 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec906d2b-9805-4e9b-8273-80a3488c76e5-operator-scripts\") pod \"ec906d2b-9805-4e9b-8273-80a3488c76e5\" (UID: \"ec906d2b-9805-4e9b-8273-80a3488c76e5\") " Nov 29 07:27:47 crc kubenswrapper[4731]: I1129 07:27:47.047976 4731 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-222cv\" (UniqueName: \"kubernetes.io/projected/e5720290-7d84-4d00-bf6a-8665ccc9cd09-kube-api-access-222cv\") pod \"e5720290-7d84-4d00-bf6a-8665ccc9cd09\" (UID: \"e5720290-7d84-4d00-bf6a-8665ccc9cd09\") " Nov 29 07:27:47 crc kubenswrapper[4731]: I1129 07:27:47.051533 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5720290-7d84-4d00-bf6a-8665ccc9cd09-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e5720290-7d84-4d00-bf6a-8665ccc9cd09" (UID: "e5720290-7d84-4d00-bf6a-8665ccc9cd09"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:27:47 crc kubenswrapper[4731]: I1129 07:27:47.052408 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec906d2b-9805-4e9b-8273-80a3488c76e5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ec906d2b-9805-4e9b-8273-80a3488c76e5" (UID: "ec906d2b-9805-4e9b-8273-80a3488c76e5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:27:47 crc kubenswrapper[4731]: I1129 07:27:47.058674 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5720290-7d84-4d00-bf6a-8665ccc9cd09-kube-api-access-222cv" (OuterVolumeSpecName: "kube-api-access-222cv") pod "e5720290-7d84-4d00-bf6a-8665ccc9cd09" (UID: "e5720290-7d84-4d00-bf6a-8665ccc9cd09"). InnerVolumeSpecName "kube-api-access-222cv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:27:47 crc kubenswrapper[4731]: I1129 07:27:47.073557 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-6mk6z" Nov 29 07:27:47 crc kubenswrapper[4731]: I1129 07:27:47.076753 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec906d2b-9805-4e9b-8273-80a3488c76e5-kube-api-access-94hjl" (OuterVolumeSpecName: "kube-api-access-94hjl") pod "ec906d2b-9805-4e9b-8273-80a3488c76e5" (UID: "ec906d2b-9805-4e9b-8273-80a3488c76e5"). InnerVolumeSpecName "kube-api-access-94hjl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:27:47 crc kubenswrapper[4731]: I1129 07:27:47.082404 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-c9a4-account-create-update-dwcqz" Nov 29 07:27:47 crc kubenswrapper[4731]: I1129 07:27:47.178720 4731 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5720290-7d84-4d00-bf6a-8665ccc9cd09-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:27:47 crc kubenswrapper[4731]: I1129 07:27:47.178775 4731 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec906d2b-9805-4e9b-8273-80a3488c76e5-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:27:47 crc kubenswrapper[4731]: I1129 07:27:47.178787 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-222cv\" (UniqueName: \"kubernetes.io/projected/e5720290-7d84-4d00-bf6a-8665ccc9cd09-kube-api-access-222cv\") on node \"crc\" DevicePath \"\"" Nov 29 07:27:47 crc kubenswrapper[4731]: I1129 07:27:47.178797 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-94hjl\" (UniqueName: \"kubernetes.io/projected/ec906d2b-9805-4e9b-8273-80a3488c76e5-kube-api-access-94hjl\") on node \"crc\" DevicePath \"\"" Nov 29 07:27:47 crc kubenswrapper[4731]: I1129 07:27:47.280750 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-slggh\" (UniqueName: \"kubernetes.io/projected/c7590e82-ed2d-42d4-ae30-b581dc4517b9-kube-api-access-slggh\") pod \"c7590e82-ed2d-42d4-ae30-b581dc4517b9\" (UID: \"c7590e82-ed2d-42d4-ae30-b581dc4517b9\") " Nov 29 07:27:47 crc kubenswrapper[4731]: I1129 07:27:47.281928 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c7590e82-ed2d-42d4-ae30-b581dc4517b9-operator-scripts\") pod \"c7590e82-ed2d-42d4-ae30-b581dc4517b9\" (UID: \"c7590e82-ed2d-42d4-ae30-b581dc4517b9\") " Nov 29 07:27:47 crc kubenswrapper[4731]: I1129 07:27:47.287581 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7590e82-ed2d-42d4-ae30-b581dc4517b9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c7590e82-ed2d-42d4-ae30-b581dc4517b9" (UID: "c7590e82-ed2d-42d4-ae30-b581dc4517b9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:27:47 crc kubenswrapper[4731]: I1129 07:27:47.290479 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6p9n\" (UniqueName: \"kubernetes.io/projected/37f74f3d-e81b-445f-b4df-09f17e389b52-kube-api-access-r6p9n\") pod \"37f74f3d-e81b-445f-b4df-09f17e389b52\" (UID: \"37f74f3d-e81b-445f-b4df-09f17e389b52\") " Nov 29 07:27:47 crc kubenswrapper[4731]: I1129 07:27:47.290854 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/37f74f3d-e81b-445f-b4df-09f17e389b52-operator-scripts\") pod \"37f74f3d-e81b-445f-b4df-09f17e389b52\" (UID: \"37f74f3d-e81b-445f-b4df-09f17e389b52\") " Nov 29 07:27:47 crc kubenswrapper[4731]: I1129 07:27:47.291956 4731 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c7590e82-ed2d-42d4-ae30-b581dc4517b9-operator-scripts\") on 
node \"crc\" DevicePath \"\"" Nov 29 07:27:47 crc kubenswrapper[4731]: I1129 07:27:47.308456 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37f74f3d-e81b-445f-b4df-09f17e389b52-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "37f74f3d-e81b-445f-b4df-09f17e389b52" (UID: "37f74f3d-e81b-445f-b4df-09f17e389b52"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:27:47 crc kubenswrapper[4731]: I1129 07:27:47.337552 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37f74f3d-e81b-445f-b4df-09f17e389b52-kube-api-access-r6p9n" (OuterVolumeSpecName: "kube-api-access-r6p9n") pod "37f74f3d-e81b-445f-b4df-09f17e389b52" (UID: "37f74f3d-e81b-445f-b4df-09f17e389b52"). InnerVolumeSpecName "kube-api-access-r6p9n". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:27:47 crc kubenswrapper[4731]: I1129 07:27:47.351896 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7590e82-ed2d-42d4-ae30-b581dc4517b9-kube-api-access-slggh" (OuterVolumeSpecName: "kube-api-access-slggh") pod "c7590e82-ed2d-42d4-ae30-b581dc4517b9" (UID: "c7590e82-ed2d-42d4-ae30-b581dc4517b9"). InnerVolumeSpecName "kube-api-access-slggh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:27:47 crc kubenswrapper[4731]: I1129 07:27:47.357645 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-jtd4s" event={"ID":"d9d5f400-fe07-4f0f-ae45-b6055e4908fc","Type":"ContainerDied","Data":"e58d7c9fb24bf46e14a38d993c8837fa703d0460967ab94ec3e49b5dc1bd550a"} Nov 29 07:27:47 crc kubenswrapper[4731]: I1129 07:27:47.357731 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e58d7c9fb24bf46e14a38d993c8837fa703d0460967ab94ec3e49b5dc1bd550a" Nov 29 07:27:47 crc kubenswrapper[4731]: I1129 07:27:47.391382 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-76xc2" Nov 29 07:27:47 crc kubenswrapper[4731]: I1129 07:27:47.395094 4731 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/37f74f3d-e81b-445f-b4df-09f17e389b52-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:27:47 crc kubenswrapper[4731]: I1129 07:27:47.395112 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-slggh\" (UniqueName: \"kubernetes.io/projected/c7590e82-ed2d-42d4-ae30-b581dc4517b9-kube-api-access-slggh\") on node \"crc\" DevicePath \"\"" Nov 29 07:27:47 crc kubenswrapper[4731]: I1129 07:27:47.395124 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r6p9n\" (UniqueName: \"kubernetes.io/projected/37f74f3d-e81b-445f-b4df-09f17e389b52-kube-api-access-r6p9n\") on node \"crc\" DevicePath \"\"" Nov 29 07:27:47 crc kubenswrapper[4731]: I1129 07:27:47.424240 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-c9a4-account-create-update-dwcqz" event={"ID":"37f74f3d-e81b-445f-b4df-09f17e389b52","Type":"ContainerDied","Data":"58c0b766c1a559f99133c9a92a4936a42cd0c9af8dd4c53440138b90d520141a"} Nov 29 07:27:47 crc kubenswrapper[4731]: I1129 07:27:47.424239 
4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-c9a4-account-create-update-dwcqz" Nov 29 07:27:47 crc kubenswrapper[4731]: I1129 07:27:47.424309 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="58c0b766c1a559f99133c9a92a4936a42cd0c9af8dd4c53440138b90d520141a" Nov 29 07:27:47 crc kubenswrapper[4731]: I1129 07:27:47.474743 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-2711-account-create-update-b2w2h" event={"ID":"e5720290-7d84-4d00-bf6a-8665ccc9cd09","Type":"ContainerDied","Data":"dd4900b18cda2999a01219bee28f383681a0ddcb98117cb69840639f12e83b13"} Nov 29 07:27:47 crc kubenswrapper[4731]: I1129 07:27:47.474825 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd4900b18cda2999a01219bee28f383681a0ddcb98117cb69840639f12e83b13" Nov 29 07:27:47 crc kubenswrapper[4731]: I1129 07:27:47.475000 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-jtd4s" Nov 29 07:27:47 crc kubenswrapper[4731]: I1129 07:27:47.475796 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-2711-account-create-update-b2w2h" Nov 29 07:27:47 crc kubenswrapper[4731]: I1129 07:27:47.497307 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aea23e96-8b0c-413c-9240-80f8ecd2af01-operator-scripts\") pod \"aea23e96-8b0c-413c-9240-80f8ecd2af01\" (UID: \"aea23e96-8b0c-413c-9240-80f8ecd2af01\") " Nov 29 07:27:47 crc kubenswrapper[4731]: I1129 07:27:47.497788 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-87tpk\" (UniqueName: \"kubernetes.io/projected/aea23e96-8b0c-413c-9240-80f8ecd2af01-kube-api-access-87tpk\") pod \"aea23e96-8b0c-413c-9240-80f8ecd2af01\" (UID: \"aea23e96-8b0c-413c-9240-80f8ecd2af01\") " Nov 29 07:27:47 crc kubenswrapper[4731]: I1129 07:27:47.501872 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aea23e96-8b0c-413c-9240-80f8ecd2af01-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "aea23e96-8b0c-413c-9240-80f8ecd2af01" (UID: "aea23e96-8b0c-413c-9240-80f8ecd2af01"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:27:47 crc kubenswrapper[4731]: I1129 07:27:47.513288 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-76xc2" event={"ID":"aea23e96-8b0c-413c-9240-80f8ecd2af01","Type":"ContainerDied","Data":"c4e31e0d52188af6338662841c9fdae79c68517603ac1c722e18d74b199bbf5a"} Nov 29 07:27:47 crc kubenswrapper[4731]: I1129 07:27:47.513444 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c4e31e0d52188af6338662841c9fdae79c68517603ac1c722e18d74b199bbf5a" Nov 29 07:27:47 crc kubenswrapper[4731]: I1129 07:27:47.513651 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-76xc2" Nov 29 07:27:47 crc kubenswrapper[4731]: I1129 07:27:47.523247 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-e0d1-account-create-update-7ppfv" event={"ID":"ec906d2b-9805-4e9b-8273-80a3488c76e5","Type":"ContainerDied","Data":"e54364f5c9cc949d925bde2121b802654e3cff6550525f0e22feff50a494da90"} Nov 29 07:27:47 crc kubenswrapper[4731]: I1129 07:27:47.523307 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e54364f5c9cc949d925bde2121b802654e3cff6550525f0e22feff50a494da90" Nov 29 07:27:47 crc kubenswrapper[4731]: I1129 07:27:47.523420 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-e0d1-account-create-update-7ppfv" Nov 29 07:27:47 crc kubenswrapper[4731]: I1129 07:27:47.535015 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aea23e96-8b0c-413c-9240-80f8ecd2af01-kube-api-access-87tpk" (OuterVolumeSpecName: "kube-api-access-87tpk") pod "aea23e96-8b0c-413c-9240-80f8ecd2af01" (UID: "aea23e96-8b0c-413c-9240-80f8ecd2af01"). InnerVolumeSpecName "kube-api-access-87tpk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:27:47 crc kubenswrapper[4731]: I1129 07:27:47.535232 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-6mk6z" Nov 29 07:27:47 crc kubenswrapper[4731]: I1129 07:27:47.537814 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-6mk6z" event={"ID":"c7590e82-ed2d-42d4-ae30-b581dc4517b9","Type":"ContainerDied","Data":"1684faab425888d07ca8dbb9df3ebf53d3750d491519987281d6535e4c1270a3"} Nov 29 07:27:47 crc kubenswrapper[4731]: I1129 07:27:47.541954 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1684faab425888d07ca8dbb9df3ebf53d3750d491519987281d6535e4c1270a3" Nov 29 07:27:47 crc kubenswrapper[4731]: I1129 07:27:47.600463 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dww5z\" (UniqueName: \"kubernetes.io/projected/d9d5f400-fe07-4f0f-ae45-b6055e4908fc-kube-api-access-dww5z\") pod \"d9d5f400-fe07-4f0f-ae45-b6055e4908fc\" (UID: \"d9d5f400-fe07-4f0f-ae45-b6055e4908fc\") " Nov 29 07:27:47 crc kubenswrapper[4731]: I1129 07:27:47.600539 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d9d5f400-fe07-4f0f-ae45-b6055e4908fc-operator-scripts\") pod \"d9d5f400-fe07-4f0f-ae45-b6055e4908fc\" (UID: \"d9d5f400-fe07-4f0f-ae45-b6055e4908fc\") " Nov 29 07:27:47 crc kubenswrapper[4731]: I1129 07:27:47.601026 4731 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aea23e96-8b0c-413c-9240-80f8ecd2af01-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:27:47 crc kubenswrapper[4731]: I1129 07:27:47.601046 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-87tpk\" (UniqueName: \"kubernetes.io/projected/aea23e96-8b0c-413c-9240-80f8ecd2af01-kube-api-access-87tpk\") on node \"crc\" DevicePath \"\"" Nov 29 07:27:47 crc kubenswrapper[4731]: I1129 07:27:47.601760 4731 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/configmap/d9d5f400-fe07-4f0f-ae45-b6055e4908fc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d9d5f400-fe07-4f0f-ae45-b6055e4908fc" (UID: "d9d5f400-fe07-4f0f-ae45-b6055e4908fc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:27:47 crc kubenswrapper[4731]: I1129 07:27:47.607916 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9d5f400-fe07-4f0f-ae45-b6055e4908fc-kube-api-access-dww5z" (OuterVolumeSpecName: "kube-api-access-dww5z") pod "d9d5f400-fe07-4f0f-ae45-b6055e4908fc" (UID: "d9d5f400-fe07-4f0f-ae45-b6055e4908fc"). InnerVolumeSpecName "kube-api-access-dww5z". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:27:47 crc kubenswrapper[4731]: I1129 07:27:47.704077 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dww5z\" (UniqueName: \"kubernetes.io/projected/d9d5f400-fe07-4f0f-ae45-b6055e4908fc-kube-api-access-dww5z\") on node \"crc\" DevicePath \"\"" Nov 29 07:27:47 crc kubenswrapper[4731]: I1129 07:27:47.704122 4731 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d9d5f400-fe07-4f0f-ae45-b6055e4908fc-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:27:47 crc kubenswrapper[4731]: I1129 07:27:47.819500 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22" path="/var/lib/kubelet/pods/7a8ebaf1-c3b4-4858-a19c-9c2a53fe5e22/volumes" Nov 29 07:27:47 crc kubenswrapper[4731]: I1129 07:27:47.855681 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:27:48 crc kubenswrapper[4731]: I1129 07:27:48.548578 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-jtd4s" Nov 29 07:27:48 crc kubenswrapper[4731]: I1129 07:27:48.548586 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a91a845e-a032-4109-91db-3ac60a4dc1a3","Type":"ContainerStarted","Data":"065a724df1c26a30f808b62ac763cfc40de10aaa4b63f34452cec1728a6190f4"} Nov 29 07:27:49 crc kubenswrapper[4731]: I1129 07:27:49.562225 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a91a845e-a032-4109-91db-3ac60a4dc1a3","Type":"ContainerStarted","Data":"1b76ca2cb6e3b39c02933a1a20868bde95f32fe49d3d4a90e7ae4be6941ef0a3"} Nov 29 07:27:50 crc kubenswrapper[4731]: I1129 07:27:50.600675 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a91a845e-a032-4109-91db-3ac60a4dc1a3","Type":"ContainerStarted","Data":"85655ddc65f39651599468b3d64267d946b66f3d1b623f46ccded2b956c6e47e"} Nov 29 07:27:51 crc kubenswrapper[4731]: I1129 07:27:51.614932 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a91a845e-a032-4109-91db-3ac60a4dc1a3","Type":"ContainerStarted","Data":"b3a526cb60f0ae4b1e60033b6e8d52454b200646b4d48148d5083b41926c124c"} Nov 29 07:27:52 crc kubenswrapper[4731]: I1129 07:27:52.745924 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-ncqzw"] Nov 29 07:27:52 crc kubenswrapper[4731]: E1129 07:27:52.747105 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec906d2b-9805-4e9b-8273-80a3488c76e5" containerName="mariadb-account-create-update" Nov 29 07:27:52 crc kubenswrapper[4731]: I1129 07:27:52.747124 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec906d2b-9805-4e9b-8273-80a3488c76e5" containerName="mariadb-account-create-update" Nov 29 07:27:52 crc kubenswrapper[4731]: E1129 07:27:52.747144 4731 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="d9d5f400-fe07-4f0f-ae45-b6055e4908fc" containerName="mariadb-database-create" Nov 29 07:27:52 crc kubenswrapper[4731]: I1129 07:27:52.747150 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9d5f400-fe07-4f0f-ae45-b6055e4908fc" containerName="mariadb-database-create" Nov 29 07:27:52 crc kubenswrapper[4731]: E1129 07:27:52.747168 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7590e82-ed2d-42d4-ae30-b581dc4517b9" containerName="mariadb-database-create" Nov 29 07:27:52 crc kubenswrapper[4731]: I1129 07:27:52.747174 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7590e82-ed2d-42d4-ae30-b581dc4517b9" containerName="mariadb-database-create" Nov 29 07:27:52 crc kubenswrapper[4731]: E1129 07:27:52.747191 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5720290-7d84-4d00-bf6a-8665ccc9cd09" containerName="mariadb-account-create-update" Nov 29 07:27:52 crc kubenswrapper[4731]: I1129 07:27:52.747198 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5720290-7d84-4d00-bf6a-8665ccc9cd09" containerName="mariadb-account-create-update" Nov 29 07:27:52 crc kubenswrapper[4731]: E1129 07:27:52.747211 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37f74f3d-e81b-445f-b4df-09f17e389b52" containerName="mariadb-account-create-update" Nov 29 07:27:52 crc kubenswrapper[4731]: I1129 07:27:52.747218 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="37f74f3d-e81b-445f-b4df-09f17e389b52" containerName="mariadb-account-create-update" Nov 29 07:27:52 crc kubenswrapper[4731]: E1129 07:27:52.747231 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aea23e96-8b0c-413c-9240-80f8ecd2af01" containerName="mariadb-database-create" Nov 29 07:27:52 crc kubenswrapper[4731]: I1129 07:27:52.747238 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="aea23e96-8b0c-413c-9240-80f8ecd2af01" containerName="mariadb-database-create" Nov 29 07:27:52 crc 
kubenswrapper[4731]: I1129 07:27:52.747592 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9d5f400-fe07-4f0f-ae45-b6055e4908fc" containerName="mariadb-database-create" Nov 29 07:27:52 crc kubenswrapper[4731]: I1129 07:27:52.747610 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5720290-7d84-4d00-bf6a-8665ccc9cd09" containerName="mariadb-account-create-update" Nov 29 07:27:52 crc kubenswrapper[4731]: I1129 07:27:52.747618 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec906d2b-9805-4e9b-8273-80a3488c76e5" containerName="mariadb-account-create-update" Nov 29 07:27:52 crc kubenswrapper[4731]: I1129 07:27:52.747632 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="aea23e96-8b0c-413c-9240-80f8ecd2af01" containerName="mariadb-database-create" Nov 29 07:27:52 crc kubenswrapper[4731]: I1129 07:27:52.747643 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7590e82-ed2d-42d4-ae30-b581dc4517b9" containerName="mariadb-database-create" Nov 29 07:27:52 crc kubenswrapper[4731]: I1129 07:27:52.747661 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="37f74f3d-e81b-445f-b4df-09f17e389b52" containerName="mariadb-account-create-update" Nov 29 07:27:52 crc kubenswrapper[4731]: I1129 07:27:52.748416 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-ncqzw" Nov 29 07:27:52 crc kubenswrapper[4731]: I1129 07:27:52.751112 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 29 07:27:52 crc kubenswrapper[4731]: I1129 07:27:52.751350 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Nov 29 07:27:52 crc kubenswrapper[4731]: I1129 07:27:52.751583 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-mjgkm" Nov 29 07:27:52 crc kubenswrapper[4731]: I1129 07:27:52.772673 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-ncqzw"] Nov 29 07:27:52 crc kubenswrapper[4731]: I1129 07:27:52.873851 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8ff79d0-d925-4219-8603-c5af185585f4-scripts\") pod \"nova-cell0-conductor-db-sync-ncqzw\" (UID: \"b8ff79d0-d925-4219-8603-c5af185585f4\") " pod="openstack/nova-cell0-conductor-db-sync-ncqzw" Nov 29 07:27:52 crc kubenswrapper[4731]: I1129 07:27:52.873946 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8ff79d0-d925-4219-8603-c5af185585f4-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-ncqzw\" (UID: \"b8ff79d0-d925-4219-8603-c5af185585f4\") " pod="openstack/nova-cell0-conductor-db-sync-ncqzw" Nov 29 07:27:52 crc kubenswrapper[4731]: I1129 07:27:52.873971 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brrh2\" (UniqueName: \"kubernetes.io/projected/b8ff79d0-d925-4219-8603-c5af185585f4-kube-api-access-brrh2\") pod \"nova-cell0-conductor-db-sync-ncqzw\" (UID: \"b8ff79d0-d925-4219-8603-c5af185585f4\") " 
pod="openstack/nova-cell0-conductor-db-sync-ncqzw" Nov 29 07:27:52 crc kubenswrapper[4731]: I1129 07:27:52.874045 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8ff79d0-d925-4219-8603-c5af185585f4-config-data\") pod \"nova-cell0-conductor-db-sync-ncqzw\" (UID: \"b8ff79d0-d925-4219-8603-c5af185585f4\") " pod="openstack/nova-cell0-conductor-db-sync-ncqzw" Nov 29 07:27:52 crc kubenswrapper[4731]: I1129 07:27:52.977589 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8ff79d0-d925-4219-8603-c5af185585f4-scripts\") pod \"nova-cell0-conductor-db-sync-ncqzw\" (UID: \"b8ff79d0-d925-4219-8603-c5af185585f4\") " pod="openstack/nova-cell0-conductor-db-sync-ncqzw" Nov 29 07:27:52 crc kubenswrapper[4731]: I1129 07:27:52.977864 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8ff79d0-d925-4219-8603-c5af185585f4-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-ncqzw\" (UID: \"b8ff79d0-d925-4219-8603-c5af185585f4\") " pod="openstack/nova-cell0-conductor-db-sync-ncqzw" Nov 29 07:27:52 crc kubenswrapper[4731]: I1129 07:27:52.977923 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brrh2\" (UniqueName: \"kubernetes.io/projected/b8ff79d0-d925-4219-8603-c5af185585f4-kube-api-access-brrh2\") pod \"nova-cell0-conductor-db-sync-ncqzw\" (UID: \"b8ff79d0-d925-4219-8603-c5af185585f4\") " pod="openstack/nova-cell0-conductor-db-sync-ncqzw" Nov 29 07:27:52 crc kubenswrapper[4731]: I1129 07:27:52.978026 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8ff79d0-d925-4219-8603-c5af185585f4-config-data\") pod \"nova-cell0-conductor-db-sync-ncqzw\" (UID: 
\"b8ff79d0-d925-4219-8603-c5af185585f4\") " pod="openstack/nova-cell0-conductor-db-sync-ncqzw" Nov 29 07:27:52 crc kubenswrapper[4731]: I1129 07:27:52.991381 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8ff79d0-d925-4219-8603-c5af185585f4-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-ncqzw\" (UID: \"b8ff79d0-d925-4219-8603-c5af185585f4\") " pod="openstack/nova-cell0-conductor-db-sync-ncqzw" Nov 29 07:27:52 crc kubenswrapper[4731]: I1129 07:27:52.993162 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8ff79d0-d925-4219-8603-c5af185585f4-scripts\") pod \"nova-cell0-conductor-db-sync-ncqzw\" (UID: \"b8ff79d0-d925-4219-8603-c5af185585f4\") " pod="openstack/nova-cell0-conductor-db-sync-ncqzw" Nov 29 07:27:52 crc kubenswrapper[4731]: I1129 07:27:52.998465 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8ff79d0-d925-4219-8603-c5af185585f4-config-data\") pod \"nova-cell0-conductor-db-sync-ncqzw\" (UID: \"b8ff79d0-d925-4219-8603-c5af185585f4\") " pod="openstack/nova-cell0-conductor-db-sync-ncqzw" Nov 29 07:27:53 crc kubenswrapper[4731]: I1129 07:27:53.018117 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brrh2\" (UniqueName: \"kubernetes.io/projected/b8ff79d0-d925-4219-8603-c5af185585f4-kube-api-access-brrh2\") pod \"nova-cell0-conductor-db-sync-ncqzw\" (UID: \"b8ff79d0-d925-4219-8603-c5af185585f4\") " pod="openstack/nova-cell0-conductor-db-sync-ncqzw" Nov 29 07:27:53 crc kubenswrapper[4731]: I1129 07:27:53.076826 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-ncqzw" Nov 29 07:27:53 crc kubenswrapper[4731]: I1129 07:27:53.621283 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-ncqzw"] Nov 29 07:27:53 crc kubenswrapper[4731]: I1129 07:27:53.640379 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a91a845e-a032-4109-91db-3ac60a4dc1a3","Type":"ContainerStarted","Data":"ea91ec1b216c310c171f1619db300ab6ca8365fa23641930601044bc4bff4a06"} Nov 29 07:27:53 crc kubenswrapper[4731]: I1129 07:27:53.640517 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 29 07:27:53 crc kubenswrapper[4731]: I1129 07:27:53.642339 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-ncqzw" event={"ID":"b8ff79d0-d925-4219-8603-c5af185585f4","Type":"ContainerStarted","Data":"db7ae12ad3c9780bf8a0c631060125bd35547f6be19ba90203bc434a5e9e07ca"} Nov 29 07:27:53 crc kubenswrapper[4731]: I1129 07:27:53.670557 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.746852926 podStartE2EDuration="7.670526957s" podCreationTimestamp="2025-11-29 07:27:46 +0000 UTC" firstStartedPulling="2025-11-29 07:27:47.861101003 +0000 UTC m=+1306.751462106" lastFinishedPulling="2025-11-29 07:27:52.784775034 +0000 UTC m=+1311.675136137" observedRunningTime="2025-11-29 07:27:53.669182549 +0000 UTC m=+1312.559543662" watchObservedRunningTime="2025-11-29 07:27:53.670526957 +0000 UTC m=+1312.560888060" Nov 29 07:27:55 crc kubenswrapper[4731]: E1129 07:27:55.169075 4731 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podad9b3a1d_2698_405e_b94a_45d96efd0400.slice/crio-2ce9effe3d3eb311109fc98cae51a9f7136c2928a5032c0de973c7a0b18d1511\": 
RecentStats: unable to find data in memory cache]" Nov 29 07:27:57 crc kubenswrapper[4731]: I1129 07:27:57.218983 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:27:57 crc kubenswrapper[4731]: I1129 07:27:57.219818 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a91a845e-a032-4109-91db-3ac60a4dc1a3" containerName="ceilometer-central-agent" containerID="cri-o://1b76ca2cb6e3b39c02933a1a20868bde95f32fe49d3d4a90e7ae4be6941ef0a3" gracePeriod=30 Nov 29 07:27:57 crc kubenswrapper[4731]: I1129 07:27:57.220024 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a91a845e-a032-4109-91db-3ac60a4dc1a3" containerName="proxy-httpd" containerID="cri-o://ea91ec1b216c310c171f1619db300ab6ca8365fa23641930601044bc4bff4a06" gracePeriod=30 Nov 29 07:27:57 crc kubenswrapper[4731]: I1129 07:27:57.220070 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a91a845e-a032-4109-91db-3ac60a4dc1a3" containerName="sg-core" containerID="cri-o://b3a526cb60f0ae4b1e60033b6e8d52454b200646b4d48148d5083b41926c124c" gracePeriod=30 Nov 29 07:27:57 crc kubenswrapper[4731]: I1129 07:27:57.220107 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a91a845e-a032-4109-91db-3ac60a4dc1a3" containerName="ceilometer-notification-agent" containerID="cri-o://85655ddc65f39651599468b3d64267d946b66f3d1b623f46ccded2b956c6e47e" gracePeriod=30 Nov 29 07:27:57 crc kubenswrapper[4731]: I1129 07:27:57.697943 4731 generic.go:334] "Generic (PLEG): container finished" podID="a91a845e-a032-4109-91db-3ac60a4dc1a3" containerID="ea91ec1b216c310c171f1619db300ab6ca8365fa23641930601044bc4bff4a06" exitCode=0 Nov 29 07:27:57 crc kubenswrapper[4731]: I1129 07:27:57.698502 4731 generic.go:334] "Generic (PLEG): container finished" 
podID="a91a845e-a032-4109-91db-3ac60a4dc1a3" containerID="b3a526cb60f0ae4b1e60033b6e8d52454b200646b4d48148d5083b41926c124c" exitCode=2 Nov 29 07:27:57 crc kubenswrapper[4731]: I1129 07:27:57.698526 4731 generic.go:334] "Generic (PLEG): container finished" podID="a91a845e-a032-4109-91db-3ac60a4dc1a3" containerID="85655ddc65f39651599468b3d64267d946b66f3d1b623f46ccded2b956c6e47e" exitCode=0 Nov 29 07:27:57 crc kubenswrapper[4731]: I1129 07:27:57.698013 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a91a845e-a032-4109-91db-3ac60a4dc1a3","Type":"ContainerDied","Data":"ea91ec1b216c310c171f1619db300ab6ca8365fa23641930601044bc4bff4a06"} Nov 29 07:27:57 crc kubenswrapper[4731]: I1129 07:27:57.698643 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a91a845e-a032-4109-91db-3ac60a4dc1a3","Type":"ContainerDied","Data":"b3a526cb60f0ae4b1e60033b6e8d52454b200646b4d48148d5083b41926c124c"} Nov 29 07:27:57 crc kubenswrapper[4731]: I1129 07:27:57.698687 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a91a845e-a032-4109-91db-3ac60a4dc1a3","Type":"ContainerDied","Data":"85655ddc65f39651599468b3d64267d946b66f3d1b623f46ccded2b956c6e47e"} Nov 29 07:27:58 crc kubenswrapper[4731]: I1129 07:27:58.712542 4731 generic.go:334] "Generic (PLEG): container finished" podID="a91a845e-a032-4109-91db-3ac60a4dc1a3" containerID="1b76ca2cb6e3b39c02933a1a20868bde95f32fe49d3d4a90e7ae4be6941ef0a3" exitCode=0 Nov 29 07:27:58 crc kubenswrapper[4731]: I1129 07:27:58.714037 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a91a845e-a032-4109-91db-3ac60a4dc1a3","Type":"ContainerDied","Data":"1b76ca2cb6e3b39c02933a1a20868bde95f32fe49d3d4a90e7ae4be6941ef0a3"} Nov 29 07:28:03 crc kubenswrapper[4731]: I1129 07:28:03.003271 4731 patch_prober.go:28] interesting pod/machine-config-daemon-rscr8 container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:28:03 crc kubenswrapper[4731]: I1129 07:28:03.003587 4731 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:28:03 crc kubenswrapper[4731]: I1129 07:28:03.003660 4731 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" Nov 29 07:28:03 crc kubenswrapper[4731]: I1129 07:28:03.004545 4731 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f21640b90c6a59e38b7b6b03ed6a9c7b8bee6bb7ce407b62721c202713562725"} pod="openshift-machine-config-operator/machine-config-daemon-rscr8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 29 07:28:03 crc kubenswrapper[4731]: I1129 07:28:03.004636 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" containerName="machine-config-daemon" containerID="cri-o://f21640b90c6a59e38b7b6b03ed6a9c7b8bee6bb7ce407b62721c202713562725" gracePeriod=600 Nov 29 07:28:03 crc kubenswrapper[4731]: I1129 07:28:03.459957 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 29 07:28:03 crc kubenswrapper[4731]: I1129 07:28:03.542964 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a91a845e-a032-4109-91db-3ac60a4dc1a3-scripts\") pod \"a91a845e-a032-4109-91db-3ac60a4dc1a3\" (UID: \"a91a845e-a032-4109-91db-3ac60a4dc1a3\") " Nov 29 07:28:03 crc kubenswrapper[4731]: I1129 07:28:03.543087 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a91a845e-a032-4109-91db-3ac60a4dc1a3-combined-ca-bundle\") pod \"a91a845e-a032-4109-91db-3ac60a4dc1a3\" (UID: \"a91a845e-a032-4109-91db-3ac60a4dc1a3\") " Nov 29 07:28:03 crc kubenswrapper[4731]: I1129 07:28:03.543132 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a91a845e-a032-4109-91db-3ac60a4dc1a3-log-httpd\") pod \"a91a845e-a032-4109-91db-3ac60a4dc1a3\" (UID: \"a91a845e-a032-4109-91db-3ac60a4dc1a3\") " Nov 29 07:28:03 crc kubenswrapper[4731]: I1129 07:28:03.543164 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a91a845e-a032-4109-91db-3ac60a4dc1a3-run-httpd\") pod \"a91a845e-a032-4109-91db-3ac60a4dc1a3\" (UID: \"a91a845e-a032-4109-91db-3ac60a4dc1a3\") " Nov 29 07:28:03 crc kubenswrapper[4731]: I1129 07:28:03.543222 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zvsjb\" (UniqueName: \"kubernetes.io/projected/a91a845e-a032-4109-91db-3ac60a4dc1a3-kube-api-access-zvsjb\") pod \"a91a845e-a032-4109-91db-3ac60a4dc1a3\" (UID: \"a91a845e-a032-4109-91db-3ac60a4dc1a3\") " Nov 29 07:28:03 crc kubenswrapper[4731]: I1129 07:28:03.543260 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/a91a845e-a032-4109-91db-3ac60a4dc1a3-sg-core-conf-yaml\") pod \"a91a845e-a032-4109-91db-3ac60a4dc1a3\" (UID: \"a91a845e-a032-4109-91db-3ac60a4dc1a3\") " Nov 29 07:28:03 crc kubenswrapper[4731]: I1129 07:28:03.543349 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a91a845e-a032-4109-91db-3ac60a4dc1a3-ceilometer-tls-certs\") pod \"a91a845e-a032-4109-91db-3ac60a4dc1a3\" (UID: \"a91a845e-a032-4109-91db-3ac60a4dc1a3\") " Nov 29 07:28:03 crc kubenswrapper[4731]: I1129 07:28:03.543430 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a91a845e-a032-4109-91db-3ac60a4dc1a3-config-data\") pod \"a91a845e-a032-4109-91db-3ac60a4dc1a3\" (UID: \"a91a845e-a032-4109-91db-3ac60a4dc1a3\") " Nov 29 07:28:03 crc kubenswrapper[4731]: I1129 07:28:03.551257 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a91a845e-a032-4109-91db-3ac60a4dc1a3-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a91a845e-a032-4109-91db-3ac60a4dc1a3" (UID: "a91a845e-a032-4109-91db-3ac60a4dc1a3"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:28:03 crc kubenswrapper[4731]: I1129 07:28:03.551622 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a91a845e-a032-4109-91db-3ac60a4dc1a3-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a91a845e-a032-4109-91db-3ac60a4dc1a3" (UID: "a91a845e-a032-4109-91db-3ac60a4dc1a3"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:28:03 crc kubenswrapper[4731]: I1129 07:28:03.552372 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a91a845e-a032-4109-91db-3ac60a4dc1a3-scripts" (OuterVolumeSpecName: "scripts") pod "a91a845e-a032-4109-91db-3ac60a4dc1a3" (UID: "a91a845e-a032-4109-91db-3ac60a4dc1a3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:28:03 crc kubenswrapper[4731]: I1129 07:28:03.555413 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a91a845e-a032-4109-91db-3ac60a4dc1a3-kube-api-access-zvsjb" (OuterVolumeSpecName: "kube-api-access-zvsjb") pod "a91a845e-a032-4109-91db-3ac60a4dc1a3" (UID: "a91a845e-a032-4109-91db-3ac60a4dc1a3"). InnerVolumeSpecName "kube-api-access-zvsjb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:28:03 crc kubenswrapper[4731]: I1129 07:28:03.588419 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a91a845e-a032-4109-91db-3ac60a4dc1a3-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a91a845e-a032-4109-91db-3ac60a4dc1a3" (UID: "a91a845e-a032-4109-91db-3ac60a4dc1a3"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:28:03 crc kubenswrapper[4731]: I1129 07:28:03.636425 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a91a845e-a032-4109-91db-3ac60a4dc1a3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a91a845e-a032-4109-91db-3ac60a4dc1a3" (UID: "a91a845e-a032-4109-91db-3ac60a4dc1a3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:28:03 crc kubenswrapper[4731]: I1129 07:28:03.640174 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a91a845e-a032-4109-91db-3ac60a4dc1a3-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "a91a845e-a032-4109-91db-3ac60a4dc1a3" (UID: "a91a845e-a032-4109-91db-3ac60a4dc1a3"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:28:03 crc kubenswrapper[4731]: I1129 07:28:03.646018 4731 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a91a845e-a032-4109-91db-3ac60a4dc1a3-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 29 07:28:03 crc kubenswrapper[4731]: I1129 07:28:03.646065 4731 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a91a845e-a032-4109-91db-3ac60a4dc1a3-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:28:03 crc kubenswrapper[4731]: I1129 07:28:03.646078 4731 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a91a845e-a032-4109-91db-3ac60a4dc1a3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:28:03 crc kubenswrapper[4731]: I1129 07:28:03.646091 4731 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a91a845e-a032-4109-91db-3ac60a4dc1a3-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 29 07:28:03 crc kubenswrapper[4731]: I1129 07:28:03.646109 4731 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a91a845e-a032-4109-91db-3ac60a4dc1a3-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 29 07:28:03 crc kubenswrapper[4731]: I1129 07:28:03.646121 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zvsjb\" (UniqueName: 
\"kubernetes.io/projected/a91a845e-a032-4109-91db-3ac60a4dc1a3-kube-api-access-zvsjb\") on node \"crc\" DevicePath \"\"" Nov 29 07:28:03 crc kubenswrapper[4731]: I1129 07:28:03.646136 4731 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a91a845e-a032-4109-91db-3ac60a4dc1a3-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 29 07:28:03 crc kubenswrapper[4731]: I1129 07:28:03.671977 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a91a845e-a032-4109-91db-3ac60a4dc1a3-config-data" (OuterVolumeSpecName: "config-data") pod "a91a845e-a032-4109-91db-3ac60a4dc1a3" (UID: "a91a845e-a032-4109-91db-3ac60a4dc1a3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:28:03 crc kubenswrapper[4731]: I1129 07:28:03.748014 4731 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a91a845e-a032-4109-91db-3ac60a4dc1a3-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:28:03 crc kubenswrapper[4731]: I1129 07:28:03.791933 4731 generic.go:334] "Generic (PLEG): container finished" podID="2302dbb7-38db-4752-a5d0-2d055da3aec3" containerID="f21640b90c6a59e38b7b6b03ed6a9c7b8bee6bb7ce407b62721c202713562725" exitCode=0 Nov 29 07:28:03 crc kubenswrapper[4731]: I1129 07:28:03.792021 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" event={"ID":"2302dbb7-38db-4752-a5d0-2d055da3aec3","Type":"ContainerDied","Data":"f21640b90c6a59e38b7b6b03ed6a9c7b8bee6bb7ce407b62721c202713562725"} Nov 29 07:28:03 crc kubenswrapper[4731]: I1129 07:28:03.792059 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" 
event={"ID":"2302dbb7-38db-4752-a5d0-2d055da3aec3","Type":"ContainerStarted","Data":"d40688246a689bbb5c446994fa1770405fb9959f1244e12a906aea1c3d8f3b92"} Nov 29 07:28:03 crc kubenswrapper[4731]: I1129 07:28:03.792080 4731 scope.go:117] "RemoveContainer" containerID="ffbb4b4de78b7f58bb4f619008eb50ea899385afddcd0542f0d2036acafe5584" Nov 29 07:28:03 crc kubenswrapper[4731]: I1129 07:28:03.800187 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a91a845e-a032-4109-91db-3ac60a4dc1a3","Type":"ContainerDied","Data":"065a724df1c26a30f808b62ac763cfc40de10aaa4b63f34452cec1728a6190f4"} Nov 29 07:28:03 crc kubenswrapper[4731]: I1129 07:28:03.800279 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 29 07:28:03 crc kubenswrapper[4731]: I1129 07:28:03.803048 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-ncqzw" event={"ID":"b8ff79d0-d925-4219-8603-c5af185585f4","Type":"ContainerStarted","Data":"daca30395d264d7b56a34706f762993415c291552b24659c942c7c81696796b9"} Nov 29 07:28:03 crc kubenswrapper[4731]: I1129 07:28:03.857259 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:28:03 crc kubenswrapper[4731]: I1129 07:28:03.878532 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:28:03 crc kubenswrapper[4731]: I1129 07:28:03.884706 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-ncqzw" podStartSLOduration=2.349666879 podStartE2EDuration="11.884672799s" podCreationTimestamp="2025-11-29 07:27:52 +0000 UTC" firstStartedPulling="2025-11-29 07:27:53.627125099 +0000 UTC m=+1312.517486202" lastFinishedPulling="2025-11-29 07:28:03.162131019 +0000 UTC m=+1322.052492122" observedRunningTime="2025-11-29 07:28:03.870242134 +0000 UTC m=+1322.760603237" 
watchObservedRunningTime="2025-11-29 07:28:03.884672799 +0000 UTC m=+1322.775033932" Nov 29 07:28:03 crc kubenswrapper[4731]: I1129 07:28:03.911351 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:28:03 crc kubenswrapper[4731]: E1129 07:28:03.911938 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a91a845e-a032-4109-91db-3ac60a4dc1a3" containerName="sg-core" Nov 29 07:28:03 crc kubenswrapper[4731]: I1129 07:28:03.911963 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="a91a845e-a032-4109-91db-3ac60a4dc1a3" containerName="sg-core" Nov 29 07:28:03 crc kubenswrapper[4731]: E1129 07:28:03.911980 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a91a845e-a032-4109-91db-3ac60a4dc1a3" containerName="proxy-httpd" Nov 29 07:28:03 crc kubenswrapper[4731]: I1129 07:28:03.911988 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="a91a845e-a032-4109-91db-3ac60a4dc1a3" containerName="proxy-httpd" Nov 29 07:28:03 crc kubenswrapper[4731]: E1129 07:28:03.912006 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a91a845e-a032-4109-91db-3ac60a4dc1a3" containerName="ceilometer-central-agent" Nov 29 07:28:03 crc kubenswrapper[4731]: I1129 07:28:03.912013 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="a91a845e-a032-4109-91db-3ac60a4dc1a3" containerName="ceilometer-central-agent" Nov 29 07:28:03 crc kubenswrapper[4731]: E1129 07:28:03.912027 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a91a845e-a032-4109-91db-3ac60a4dc1a3" containerName="ceilometer-notification-agent" Nov 29 07:28:03 crc kubenswrapper[4731]: I1129 07:28:03.912035 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="a91a845e-a032-4109-91db-3ac60a4dc1a3" containerName="ceilometer-notification-agent" Nov 29 07:28:03 crc kubenswrapper[4731]: I1129 07:28:03.912252 4731 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="a91a845e-a032-4109-91db-3ac60a4dc1a3" containerName="ceilometer-central-agent" Nov 29 07:28:03 crc kubenswrapper[4731]: I1129 07:28:03.912275 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="a91a845e-a032-4109-91db-3ac60a4dc1a3" containerName="sg-core" Nov 29 07:28:03 crc kubenswrapper[4731]: I1129 07:28:03.912292 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="a91a845e-a032-4109-91db-3ac60a4dc1a3" containerName="ceilometer-notification-agent" Nov 29 07:28:03 crc kubenswrapper[4731]: I1129 07:28:03.912303 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="a91a845e-a032-4109-91db-3ac60a4dc1a3" containerName="proxy-httpd" Nov 29 07:28:03 crc kubenswrapper[4731]: I1129 07:28:03.914298 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 29 07:28:03 crc kubenswrapper[4731]: I1129 07:28:03.919773 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 29 07:28:03 crc kubenswrapper[4731]: I1129 07:28:03.919796 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 29 07:28:03 crc kubenswrapper[4731]: I1129 07:28:03.922942 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 29 07:28:03 crc kubenswrapper[4731]: I1129 07:28:03.937809 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:28:03 crc kubenswrapper[4731]: I1129 07:28:03.953165 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/12acd036-5b27-4f6c-82f2-9564eabc1906-scripts\") pod \"ceilometer-0\" (UID: \"12acd036-5b27-4f6c-82f2-9564eabc1906\") " pod="openstack/ceilometer-0" Nov 29 07:28:03 crc kubenswrapper[4731]: I1129 07:28:03.953250 4731 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/12acd036-5b27-4f6c-82f2-9564eabc1906-log-httpd\") pod \"ceilometer-0\" (UID: \"12acd036-5b27-4f6c-82f2-9564eabc1906\") " pod="openstack/ceilometer-0" Nov 29 07:28:03 crc kubenswrapper[4731]: I1129 07:28:03.953284 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9vpn\" (UniqueName: \"kubernetes.io/projected/12acd036-5b27-4f6c-82f2-9564eabc1906-kube-api-access-j9vpn\") pod \"ceilometer-0\" (UID: \"12acd036-5b27-4f6c-82f2-9564eabc1906\") " pod="openstack/ceilometer-0" Nov 29 07:28:03 crc kubenswrapper[4731]: I1129 07:28:03.953492 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12acd036-5b27-4f6c-82f2-9564eabc1906-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"12acd036-5b27-4f6c-82f2-9564eabc1906\") " pod="openstack/ceilometer-0" Nov 29 07:28:03 crc kubenswrapper[4731]: I1129 07:28:03.953520 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12acd036-5b27-4f6c-82f2-9564eabc1906-config-data\") pod \"ceilometer-0\" (UID: \"12acd036-5b27-4f6c-82f2-9564eabc1906\") " pod="openstack/ceilometer-0" Nov 29 07:28:03 crc kubenswrapper[4731]: I1129 07:28:03.953598 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/12acd036-5b27-4f6c-82f2-9564eabc1906-run-httpd\") pod \"ceilometer-0\" (UID: \"12acd036-5b27-4f6c-82f2-9564eabc1906\") " pod="openstack/ceilometer-0" Nov 29 07:28:03 crc kubenswrapper[4731]: I1129 07:28:03.953673 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/12acd036-5b27-4f6c-82f2-9564eabc1906-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"12acd036-5b27-4f6c-82f2-9564eabc1906\") " pod="openstack/ceilometer-0" Nov 29 07:28:03 crc kubenswrapper[4731]: I1129 07:28:03.953720 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/12acd036-5b27-4f6c-82f2-9564eabc1906-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"12acd036-5b27-4f6c-82f2-9564eabc1906\") " pod="openstack/ceilometer-0" Nov 29 07:28:04 crc kubenswrapper[4731]: I1129 07:28:04.056295 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12acd036-5b27-4f6c-82f2-9564eabc1906-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"12acd036-5b27-4f6c-82f2-9564eabc1906\") " pod="openstack/ceilometer-0" Nov 29 07:28:04 crc kubenswrapper[4731]: I1129 07:28:04.056389 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12acd036-5b27-4f6c-82f2-9564eabc1906-config-data\") pod \"ceilometer-0\" (UID: \"12acd036-5b27-4f6c-82f2-9564eabc1906\") " pod="openstack/ceilometer-0" Nov 29 07:28:04 crc kubenswrapper[4731]: I1129 07:28:04.056416 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/12acd036-5b27-4f6c-82f2-9564eabc1906-run-httpd\") pod \"ceilometer-0\" (UID: \"12acd036-5b27-4f6c-82f2-9564eabc1906\") " pod="openstack/ceilometer-0" Nov 29 07:28:04 crc kubenswrapper[4731]: I1129 07:28:04.056460 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/12acd036-5b27-4f6c-82f2-9564eabc1906-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"12acd036-5b27-4f6c-82f2-9564eabc1906\") " pod="openstack/ceilometer-0" Nov 29 07:28:04 
crc kubenswrapper[4731]: I1129 07:28:04.056485 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/12acd036-5b27-4f6c-82f2-9564eabc1906-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"12acd036-5b27-4f6c-82f2-9564eabc1906\") " pod="openstack/ceilometer-0" Nov 29 07:28:04 crc kubenswrapper[4731]: I1129 07:28:04.056595 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/12acd036-5b27-4f6c-82f2-9564eabc1906-scripts\") pod \"ceilometer-0\" (UID: \"12acd036-5b27-4f6c-82f2-9564eabc1906\") " pod="openstack/ceilometer-0" Nov 29 07:28:04 crc kubenswrapper[4731]: I1129 07:28:04.056615 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/12acd036-5b27-4f6c-82f2-9564eabc1906-log-httpd\") pod \"ceilometer-0\" (UID: \"12acd036-5b27-4f6c-82f2-9564eabc1906\") " pod="openstack/ceilometer-0" Nov 29 07:28:04 crc kubenswrapper[4731]: I1129 07:28:04.056638 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9vpn\" (UniqueName: \"kubernetes.io/projected/12acd036-5b27-4f6c-82f2-9564eabc1906-kube-api-access-j9vpn\") pod \"ceilometer-0\" (UID: \"12acd036-5b27-4f6c-82f2-9564eabc1906\") " pod="openstack/ceilometer-0" Nov 29 07:28:04 crc kubenswrapper[4731]: I1129 07:28:04.058511 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/12acd036-5b27-4f6c-82f2-9564eabc1906-run-httpd\") pod \"ceilometer-0\" (UID: \"12acd036-5b27-4f6c-82f2-9564eabc1906\") " pod="openstack/ceilometer-0" Nov 29 07:28:04 crc kubenswrapper[4731]: I1129 07:28:04.061059 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/12acd036-5b27-4f6c-82f2-9564eabc1906-log-httpd\") pod 
\"ceilometer-0\" (UID: \"12acd036-5b27-4f6c-82f2-9564eabc1906\") " pod="openstack/ceilometer-0" Nov 29 07:28:04 crc kubenswrapper[4731]: I1129 07:28:04.065269 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/12acd036-5b27-4f6c-82f2-9564eabc1906-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"12acd036-5b27-4f6c-82f2-9564eabc1906\") " pod="openstack/ceilometer-0" Nov 29 07:28:04 crc kubenswrapper[4731]: I1129 07:28:04.069161 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/12acd036-5b27-4f6c-82f2-9564eabc1906-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"12acd036-5b27-4f6c-82f2-9564eabc1906\") " pod="openstack/ceilometer-0" Nov 29 07:28:04 crc kubenswrapper[4731]: I1129 07:28:04.069323 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/12acd036-5b27-4f6c-82f2-9564eabc1906-scripts\") pod \"ceilometer-0\" (UID: \"12acd036-5b27-4f6c-82f2-9564eabc1906\") " pod="openstack/ceilometer-0" Nov 29 07:28:04 crc kubenswrapper[4731]: I1129 07:28:04.069359 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12acd036-5b27-4f6c-82f2-9564eabc1906-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"12acd036-5b27-4f6c-82f2-9564eabc1906\") " pod="openstack/ceilometer-0" Nov 29 07:28:04 crc kubenswrapper[4731]: I1129 07:28:04.076736 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12acd036-5b27-4f6c-82f2-9564eabc1906-config-data\") pod \"ceilometer-0\" (UID: \"12acd036-5b27-4f6c-82f2-9564eabc1906\") " pod="openstack/ceilometer-0" Nov 29 07:28:04 crc kubenswrapper[4731]: I1129 07:28:04.081638 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9vpn\" 
(UniqueName: \"kubernetes.io/projected/12acd036-5b27-4f6c-82f2-9564eabc1906-kube-api-access-j9vpn\") pod \"ceilometer-0\" (UID: \"12acd036-5b27-4f6c-82f2-9564eabc1906\") " pod="openstack/ceilometer-0" Nov 29 07:28:04 crc kubenswrapper[4731]: I1129 07:28:04.265922 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 29 07:28:05 crc kubenswrapper[4731]: I1129 07:28:05.020950 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:28:05 crc kubenswrapper[4731]: I1129 07:28:05.818890 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a91a845e-a032-4109-91db-3ac60a4dc1a3" path="/var/lib/kubelet/pods/a91a845e-a032-4109-91db-3ac60a4dc1a3/volumes" Nov 29 07:28:05 crc kubenswrapper[4731]: I1129 07:28:05.885098 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"12acd036-5b27-4f6c-82f2-9564eabc1906","Type":"ContainerStarted","Data":"dc61490befda3fa2d8d2eb1fae09d294267b4a3b0701c4270ebe80a72b48ea85"} Nov 29 07:28:09 crc kubenswrapper[4731]: I1129 07:28:09.416556 4731 scope.go:117] "RemoveContainer" containerID="ea91ec1b216c310c171f1619db300ab6ca8365fa23641930601044bc4bff4a06" Nov 29 07:28:09 crc kubenswrapper[4731]: I1129 07:28:09.439403 4731 scope.go:117] "RemoveContainer" containerID="b3a526cb60f0ae4b1e60033b6e8d52454b200646b4d48148d5083b41926c124c" Nov 29 07:28:09 crc kubenswrapper[4731]: I1129 07:28:09.463579 4731 scope.go:117] "RemoveContainer" containerID="85655ddc65f39651599468b3d64267d946b66f3d1b623f46ccded2b956c6e47e" Nov 29 07:28:09 crc kubenswrapper[4731]: I1129 07:28:09.486072 4731 scope.go:117] "RemoveContainer" containerID="1b76ca2cb6e3b39c02933a1a20868bde95f32fe49d3d4a90e7ae4be6941ef0a3" Nov 29 07:28:11 crc kubenswrapper[4731]: I1129 07:28:11.958032 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"12acd036-5b27-4f6c-82f2-9564eabc1906","Type":"ContainerStarted","Data":"ba0ea076c607d6576394ee0e54dc13306f81b368177f249a8a9e4cfb251a2d68"} Nov 29 07:28:13 crc kubenswrapper[4731]: I1129 07:28:12.995905 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"12acd036-5b27-4f6c-82f2-9564eabc1906","Type":"ContainerStarted","Data":"9ddf9dded0077bd3aa6215cdc48416ab5ec6e907f2fe4672d8b003628a243f2d"} Nov 29 07:28:14 crc kubenswrapper[4731]: I1129 07:28:14.010332 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"12acd036-5b27-4f6c-82f2-9564eabc1906","Type":"ContainerStarted","Data":"b30ca5e9c96d22ec0fcc5680995f3a27b2f92f6731bf66980e60874bea9a4ed6"} Nov 29 07:28:15 crc kubenswrapper[4731]: I1129 07:28:15.023720 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"12acd036-5b27-4f6c-82f2-9564eabc1906","Type":"ContainerStarted","Data":"2841dca3c7d5a640ed6bebd2970c5cd584b8303fb1440dae7140cf3a9a23a460"} Nov 29 07:28:15 crc kubenswrapper[4731]: I1129 07:28:15.024090 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 29 07:28:15 crc kubenswrapper[4731]: I1129 07:28:15.059618 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.599708783 podStartE2EDuration="12.059588872s" podCreationTimestamp="2025-11-29 07:28:03 +0000 UTC" firstStartedPulling="2025-11-29 07:28:05.032343356 +0000 UTC m=+1323.922704459" lastFinishedPulling="2025-11-29 07:28:14.492223405 +0000 UTC m=+1333.382584548" observedRunningTime="2025-11-29 07:28:15.048067181 +0000 UTC m=+1333.938428294" watchObservedRunningTime="2025-11-29 07:28:15.059588872 +0000 UTC m=+1333.949949975" Nov 29 07:28:22 crc kubenswrapper[4731]: I1129 07:28:22.102160 4731 generic.go:334] "Generic (PLEG): container finished" podID="b8ff79d0-d925-4219-8603-c5af185585f4" 
containerID="daca30395d264d7b56a34706f762993415c291552b24659c942c7c81696796b9" exitCode=0 Nov 29 07:28:22 crc kubenswrapper[4731]: I1129 07:28:22.102273 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-ncqzw" event={"ID":"b8ff79d0-d925-4219-8603-c5af185585f4","Type":"ContainerDied","Data":"daca30395d264d7b56a34706f762993415c291552b24659c942c7c81696796b9"} Nov 29 07:28:23 crc kubenswrapper[4731]: I1129 07:28:23.473397 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-ncqzw" Nov 29 07:28:23 crc kubenswrapper[4731]: I1129 07:28:23.562468 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-brrh2\" (UniqueName: \"kubernetes.io/projected/b8ff79d0-d925-4219-8603-c5af185585f4-kube-api-access-brrh2\") pod \"b8ff79d0-d925-4219-8603-c5af185585f4\" (UID: \"b8ff79d0-d925-4219-8603-c5af185585f4\") " Nov 29 07:28:23 crc kubenswrapper[4731]: I1129 07:28:23.562767 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8ff79d0-d925-4219-8603-c5af185585f4-combined-ca-bundle\") pod \"b8ff79d0-d925-4219-8603-c5af185585f4\" (UID: \"b8ff79d0-d925-4219-8603-c5af185585f4\") " Nov 29 07:28:23 crc kubenswrapper[4731]: I1129 07:28:23.562924 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8ff79d0-d925-4219-8603-c5af185585f4-config-data\") pod \"b8ff79d0-d925-4219-8603-c5af185585f4\" (UID: \"b8ff79d0-d925-4219-8603-c5af185585f4\") " Nov 29 07:28:23 crc kubenswrapper[4731]: I1129 07:28:23.563051 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8ff79d0-d925-4219-8603-c5af185585f4-scripts\") pod \"b8ff79d0-d925-4219-8603-c5af185585f4\" (UID: 
\"b8ff79d0-d925-4219-8603-c5af185585f4\") " Nov 29 07:28:23 crc kubenswrapper[4731]: I1129 07:28:23.571158 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8ff79d0-d925-4219-8603-c5af185585f4-kube-api-access-brrh2" (OuterVolumeSpecName: "kube-api-access-brrh2") pod "b8ff79d0-d925-4219-8603-c5af185585f4" (UID: "b8ff79d0-d925-4219-8603-c5af185585f4"). InnerVolumeSpecName "kube-api-access-brrh2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:28:23 crc kubenswrapper[4731]: I1129 07:28:23.571994 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8ff79d0-d925-4219-8603-c5af185585f4-scripts" (OuterVolumeSpecName: "scripts") pod "b8ff79d0-d925-4219-8603-c5af185585f4" (UID: "b8ff79d0-d925-4219-8603-c5af185585f4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:28:23 crc kubenswrapper[4731]: I1129 07:28:23.600414 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8ff79d0-d925-4219-8603-c5af185585f4-config-data" (OuterVolumeSpecName: "config-data") pod "b8ff79d0-d925-4219-8603-c5af185585f4" (UID: "b8ff79d0-d925-4219-8603-c5af185585f4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:28:23 crc kubenswrapper[4731]: I1129 07:28:23.607641 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8ff79d0-d925-4219-8603-c5af185585f4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b8ff79d0-d925-4219-8603-c5af185585f4" (UID: "b8ff79d0-d925-4219-8603-c5af185585f4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:28:23 crc kubenswrapper[4731]: I1129 07:28:23.665875 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-brrh2\" (UniqueName: \"kubernetes.io/projected/b8ff79d0-d925-4219-8603-c5af185585f4-kube-api-access-brrh2\") on node \"crc\" DevicePath \"\"" Nov 29 07:28:23 crc kubenswrapper[4731]: I1129 07:28:23.665943 4731 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8ff79d0-d925-4219-8603-c5af185585f4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:28:23 crc kubenswrapper[4731]: I1129 07:28:23.665960 4731 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8ff79d0-d925-4219-8603-c5af185585f4-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:28:23 crc kubenswrapper[4731]: I1129 07:28:23.665970 4731 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8ff79d0-d925-4219-8603-c5af185585f4-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:28:24 crc kubenswrapper[4731]: I1129 07:28:24.130341 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-ncqzw" event={"ID":"b8ff79d0-d925-4219-8603-c5af185585f4","Type":"ContainerDied","Data":"db7ae12ad3c9780bf8a0c631060125bd35547f6be19ba90203bc434a5e9e07ca"} Nov 29 07:28:24 crc kubenswrapper[4731]: I1129 07:28:24.130849 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db7ae12ad3c9780bf8a0c631060125bd35547f6be19ba90203bc434a5e9e07ca" Nov 29 07:28:24 crc kubenswrapper[4731]: I1129 07:28:24.130508 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-ncqzw" Nov 29 07:28:24 crc kubenswrapper[4731]: I1129 07:28:24.242715 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 29 07:28:24 crc kubenswrapper[4731]: E1129 07:28:24.243236 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8ff79d0-d925-4219-8603-c5af185585f4" containerName="nova-cell0-conductor-db-sync" Nov 29 07:28:24 crc kubenswrapper[4731]: I1129 07:28:24.243257 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8ff79d0-d925-4219-8603-c5af185585f4" containerName="nova-cell0-conductor-db-sync" Nov 29 07:28:24 crc kubenswrapper[4731]: I1129 07:28:24.243476 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8ff79d0-d925-4219-8603-c5af185585f4" containerName="nova-cell0-conductor-db-sync" Nov 29 07:28:24 crc kubenswrapper[4731]: I1129 07:28:24.244327 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 29 07:28:24 crc kubenswrapper[4731]: I1129 07:28:24.246934 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 29 07:28:24 crc kubenswrapper[4731]: I1129 07:28:24.255548 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-mjgkm" Nov 29 07:28:24 crc kubenswrapper[4731]: I1129 07:28:24.256802 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 29 07:28:24 crc kubenswrapper[4731]: I1129 07:28:24.388373 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zv7pw\" (UniqueName: \"kubernetes.io/projected/f76f9f40-3876-408d-80c6-46ae26b7c10a-kube-api-access-zv7pw\") pod \"nova-cell0-conductor-0\" (UID: \"f76f9f40-3876-408d-80c6-46ae26b7c10a\") " pod="openstack/nova-cell0-conductor-0" Nov 29 07:28:24 crc 
kubenswrapper[4731]: I1129 07:28:24.388480 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f76f9f40-3876-408d-80c6-46ae26b7c10a-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"f76f9f40-3876-408d-80c6-46ae26b7c10a\") " pod="openstack/nova-cell0-conductor-0" Nov 29 07:28:24 crc kubenswrapper[4731]: I1129 07:28:24.388652 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f76f9f40-3876-408d-80c6-46ae26b7c10a-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"f76f9f40-3876-408d-80c6-46ae26b7c10a\") " pod="openstack/nova-cell0-conductor-0" Nov 29 07:28:24 crc kubenswrapper[4731]: I1129 07:28:24.491290 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f76f9f40-3876-408d-80c6-46ae26b7c10a-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"f76f9f40-3876-408d-80c6-46ae26b7c10a\") " pod="openstack/nova-cell0-conductor-0" Nov 29 07:28:24 crc kubenswrapper[4731]: I1129 07:28:24.491426 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zv7pw\" (UniqueName: \"kubernetes.io/projected/f76f9f40-3876-408d-80c6-46ae26b7c10a-kube-api-access-zv7pw\") pod \"nova-cell0-conductor-0\" (UID: \"f76f9f40-3876-408d-80c6-46ae26b7c10a\") " pod="openstack/nova-cell0-conductor-0" Nov 29 07:28:24 crc kubenswrapper[4731]: I1129 07:28:24.491474 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f76f9f40-3876-408d-80c6-46ae26b7c10a-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"f76f9f40-3876-408d-80c6-46ae26b7c10a\") " pod="openstack/nova-cell0-conductor-0" Nov 29 07:28:24 crc kubenswrapper[4731]: I1129 07:28:24.498065 4731 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f76f9f40-3876-408d-80c6-46ae26b7c10a-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"f76f9f40-3876-408d-80c6-46ae26b7c10a\") " pod="openstack/nova-cell0-conductor-0" Nov 29 07:28:24 crc kubenswrapper[4731]: I1129 07:28:24.504151 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f76f9f40-3876-408d-80c6-46ae26b7c10a-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"f76f9f40-3876-408d-80c6-46ae26b7c10a\") " pod="openstack/nova-cell0-conductor-0" Nov 29 07:28:24 crc kubenswrapper[4731]: I1129 07:28:24.510050 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zv7pw\" (UniqueName: \"kubernetes.io/projected/f76f9f40-3876-408d-80c6-46ae26b7c10a-kube-api-access-zv7pw\") pod \"nova-cell0-conductor-0\" (UID: \"f76f9f40-3876-408d-80c6-46ae26b7c10a\") " pod="openstack/nova-cell0-conductor-0" Nov 29 07:28:24 crc kubenswrapper[4731]: I1129 07:28:24.564143 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 29 07:28:24 crc kubenswrapper[4731]: I1129 07:28:24.852200 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 29 07:28:25 crc kubenswrapper[4731]: I1129 07:28:25.145152 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"f76f9f40-3876-408d-80c6-46ae26b7c10a","Type":"ContainerStarted","Data":"00bc2923ce56016c583d3632b6cdbea0c31cab18c6292c5fb69ce2eb31f8efce"} Nov 29 07:28:26 crc kubenswrapper[4731]: I1129 07:28:26.169984 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"f76f9f40-3876-408d-80c6-46ae26b7c10a","Type":"ContainerStarted","Data":"89819357f7f4d3708350e72925b616e581ef05ab686126cb34d842768fb65abe"} Nov 29 07:28:26 crc kubenswrapper[4731]: I1129 07:28:26.171324 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Nov 29 07:28:26 crc kubenswrapper[4731]: I1129 07:28:26.196557 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.196536474 podStartE2EDuration="2.196536474s" podCreationTimestamp="2025-11-29 07:28:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:28:26.191693585 +0000 UTC m=+1345.082054688" watchObservedRunningTime="2025-11-29 07:28:26.196536474 +0000 UTC m=+1345.086897577" Nov 29 07:28:34 crc kubenswrapper[4731]: I1129 07:28:34.273542 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 29 07:28:34 crc kubenswrapper[4731]: I1129 07:28:34.599242 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Nov 29 07:28:35 crc kubenswrapper[4731]: I1129 07:28:35.163123 4731 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-v6lwg"] Nov 29 07:28:35 crc kubenswrapper[4731]: I1129 07:28:35.164869 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-v6lwg" Nov 29 07:28:35 crc kubenswrapper[4731]: I1129 07:28:35.170144 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Nov 29 07:28:35 crc kubenswrapper[4731]: I1129 07:28:35.170476 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Nov 29 07:28:35 crc kubenswrapper[4731]: I1129 07:28:35.184250 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-v6lwg"] Nov 29 07:28:35 crc kubenswrapper[4731]: I1129 07:28:35.252776 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abd5f3ab-575e-44b6-aa39-c3b5c44d85b8-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-v6lwg\" (UID: \"abd5f3ab-575e-44b6-aa39-c3b5c44d85b8\") " pod="openstack/nova-cell0-cell-mapping-v6lwg" Nov 29 07:28:35 crc kubenswrapper[4731]: I1129 07:28:35.252902 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abd5f3ab-575e-44b6-aa39-c3b5c44d85b8-config-data\") pod \"nova-cell0-cell-mapping-v6lwg\" (UID: \"abd5f3ab-575e-44b6-aa39-c3b5c44d85b8\") " pod="openstack/nova-cell0-cell-mapping-v6lwg" Nov 29 07:28:35 crc kubenswrapper[4731]: I1129 07:28:35.252980 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6xx9\" (UniqueName: \"kubernetes.io/projected/abd5f3ab-575e-44b6-aa39-c3b5c44d85b8-kube-api-access-t6xx9\") pod \"nova-cell0-cell-mapping-v6lwg\" (UID: \"abd5f3ab-575e-44b6-aa39-c3b5c44d85b8\") " 
pod="openstack/nova-cell0-cell-mapping-v6lwg" Nov 29 07:28:35 crc kubenswrapper[4731]: I1129 07:28:35.253029 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/abd5f3ab-575e-44b6-aa39-c3b5c44d85b8-scripts\") pod \"nova-cell0-cell-mapping-v6lwg\" (UID: \"abd5f3ab-575e-44b6-aa39-c3b5c44d85b8\") " pod="openstack/nova-cell0-cell-mapping-v6lwg" Nov 29 07:28:35 crc kubenswrapper[4731]: I1129 07:28:35.349309 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 29 07:28:35 crc kubenswrapper[4731]: I1129 07:28:35.352903 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 29 07:28:35 crc kubenswrapper[4731]: I1129 07:28:35.354789 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abd5f3ab-575e-44b6-aa39-c3b5c44d85b8-config-data\") pod \"nova-cell0-cell-mapping-v6lwg\" (UID: \"abd5f3ab-575e-44b6-aa39-c3b5c44d85b8\") " pod="openstack/nova-cell0-cell-mapping-v6lwg" Nov 29 07:28:35 crc kubenswrapper[4731]: I1129 07:28:35.354882 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6xx9\" (UniqueName: \"kubernetes.io/projected/abd5f3ab-575e-44b6-aa39-c3b5c44d85b8-kube-api-access-t6xx9\") pod \"nova-cell0-cell-mapping-v6lwg\" (UID: \"abd5f3ab-575e-44b6-aa39-c3b5c44d85b8\") " pod="openstack/nova-cell0-cell-mapping-v6lwg" Nov 29 07:28:35 crc kubenswrapper[4731]: I1129 07:28:35.354929 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/abd5f3ab-575e-44b6-aa39-c3b5c44d85b8-scripts\") pod \"nova-cell0-cell-mapping-v6lwg\" (UID: \"abd5f3ab-575e-44b6-aa39-c3b5c44d85b8\") " pod="openstack/nova-cell0-cell-mapping-v6lwg" Nov 29 07:28:35 crc kubenswrapper[4731]: I1129 07:28:35.355002 
4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abd5f3ab-575e-44b6-aa39-c3b5c44d85b8-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-v6lwg\" (UID: \"abd5f3ab-575e-44b6-aa39-c3b5c44d85b8\") " pod="openstack/nova-cell0-cell-mapping-v6lwg" Nov 29 07:28:35 crc kubenswrapper[4731]: I1129 07:28:35.358401 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 29 07:28:35 crc kubenswrapper[4731]: I1129 07:28:35.374837 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/abd5f3ab-575e-44b6-aa39-c3b5c44d85b8-scripts\") pod \"nova-cell0-cell-mapping-v6lwg\" (UID: \"abd5f3ab-575e-44b6-aa39-c3b5c44d85b8\") " pod="openstack/nova-cell0-cell-mapping-v6lwg" Nov 29 07:28:35 crc kubenswrapper[4731]: I1129 07:28:35.375867 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abd5f3ab-575e-44b6-aa39-c3b5c44d85b8-config-data\") pod \"nova-cell0-cell-mapping-v6lwg\" (UID: \"abd5f3ab-575e-44b6-aa39-c3b5c44d85b8\") " pod="openstack/nova-cell0-cell-mapping-v6lwg" Nov 29 07:28:35 crc kubenswrapper[4731]: I1129 07:28:35.380411 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6xx9\" (UniqueName: \"kubernetes.io/projected/abd5f3ab-575e-44b6-aa39-c3b5c44d85b8-kube-api-access-t6xx9\") pod \"nova-cell0-cell-mapping-v6lwg\" (UID: \"abd5f3ab-575e-44b6-aa39-c3b5c44d85b8\") " pod="openstack/nova-cell0-cell-mapping-v6lwg" Nov 29 07:28:35 crc kubenswrapper[4731]: I1129 07:28:35.381326 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abd5f3ab-575e-44b6-aa39-c3b5c44d85b8-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-v6lwg\" (UID: \"abd5f3ab-575e-44b6-aa39-c3b5c44d85b8\") " 
pod="openstack/nova-cell0-cell-mapping-v6lwg" Nov 29 07:28:35 crc kubenswrapper[4731]: I1129 07:28:35.387995 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 29 07:28:35 crc kubenswrapper[4731]: I1129 07:28:35.453655 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 29 07:28:35 crc kubenswrapper[4731]: I1129 07:28:35.456082 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 29 07:28:35 crc kubenswrapper[4731]: I1129 07:28:35.461204 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 29 07:28:35 crc kubenswrapper[4731]: I1129 07:28:35.462209 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d3a4fc9-88ad-4fb7-ae26-8cc720a3138e-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6d3a4fc9-88ad-4fb7-ae26-8cc720a3138e\") " pod="openstack/nova-cell1-novncproxy-0" Nov 29 07:28:35 crc kubenswrapper[4731]: I1129 07:28:35.462283 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d3a4fc9-88ad-4fb7-ae26-8cc720a3138e-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6d3a4fc9-88ad-4fb7-ae26-8cc720a3138e\") " pod="openstack/nova-cell1-novncproxy-0" Nov 29 07:28:35 crc kubenswrapper[4731]: I1129 07:28:35.462305 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xd6tq\" (UniqueName: \"kubernetes.io/projected/6d3a4fc9-88ad-4fb7-ae26-8cc720a3138e-kube-api-access-xd6tq\") pod \"nova-cell1-novncproxy-0\" (UID: \"6d3a4fc9-88ad-4fb7-ae26-8cc720a3138e\") " pod="openstack/nova-cell1-novncproxy-0" Nov 29 07:28:35 crc kubenswrapper[4731]: I1129 07:28:35.468659 4731 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 29 07:28:35 crc kubenswrapper[4731]: I1129 07:28:35.505331 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 29 07:28:35 crc kubenswrapper[4731]: I1129 07:28:35.507627 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 29 07:28:35 crc kubenswrapper[4731]: I1129 07:28:35.508396 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-v6lwg" Nov 29 07:28:35 crc kubenswrapper[4731]: I1129 07:28:35.520595 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 29 07:28:35 crc kubenswrapper[4731]: I1129 07:28:35.550695 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 29 07:28:35 crc kubenswrapper[4731]: I1129 07:28:35.565493 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/47b2aa31-1cd7-4682-a873-6bb35d1a7d6c-logs\") pod \"nova-metadata-0\" (UID: \"47b2aa31-1cd7-4682-a873-6bb35d1a7d6c\") " pod="openstack/nova-metadata-0" Nov 29 07:28:35 crc kubenswrapper[4731]: I1129 07:28:35.565635 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d3a4fc9-88ad-4fb7-ae26-8cc720a3138e-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6d3a4fc9-88ad-4fb7-ae26-8cc720a3138e\") " pod="openstack/nova-cell1-novncproxy-0" Nov 29 07:28:35 crc kubenswrapper[4731]: I1129 07:28:35.565665 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47b2aa31-1cd7-4682-a873-6bb35d1a7d6c-config-data\") pod \"nova-metadata-0\" (UID: \"47b2aa31-1cd7-4682-a873-6bb35d1a7d6c\") " pod="openstack/nova-metadata-0" Nov 29 07:28:35 crc 
kubenswrapper[4731]: I1129 07:28:35.565699 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d3a4fc9-88ad-4fb7-ae26-8cc720a3138e-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6d3a4fc9-88ad-4fb7-ae26-8cc720a3138e\") " pod="openstack/nova-cell1-novncproxy-0" Nov 29 07:28:35 crc kubenswrapper[4731]: I1129 07:28:35.565719 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47b2aa31-1cd7-4682-a873-6bb35d1a7d6c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"47b2aa31-1cd7-4682-a873-6bb35d1a7d6c\") " pod="openstack/nova-metadata-0" Nov 29 07:28:35 crc kubenswrapper[4731]: I1129 07:28:35.565741 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xd6tq\" (UniqueName: \"kubernetes.io/projected/6d3a4fc9-88ad-4fb7-ae26-8cc720a3138e-kube-api-access-xd6tq\") pod \"nova-cell1-novncproxy-0\" (UID: \"6d3a4fc9-88ad-4fb7-ae26-8cc720a3138e\") " pod="openstack/nova-cell1-novncproxy-0" Nov 29 07:28:35 crc kubenswrapper[4731]: I1129 07:28:35.565793 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jh6sw\" (UniqueName: \"kubernetes.io/projected/47b2aa31-1cd7-4682-a873-6bb35d1a7d6c-kube-api-access-jh6sw\") pod \"nova-metadata-0\" (UID: \"47b2aa31-1cd7-4682-a873-6bb35d1a7d6c\") " pod="openstack/nova-metadata-0" Nov 29 07:28:35 crc kubenswrapper[4731]: I1129 07:28:35.580050 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d3a4fc9-88ad-4fb7-ae26-8cc720a3138e-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6d3a4fc9-88ad-4fb7-ae26-8cc720a3138e\") " pod="openstack/nova-cell1-novncproxy-0" Nov 29 07:28:35 crc kubenswrapper[4731]: I1129 07:28:35.591150 4731 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d3a4fc9-88ad-4fb7-ae26-8cc720a3138e-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6d3a4fc9-88ad-4fb7-ae26-8cc720a3138e\") " pod="openstack/nova-cell1-novncproxy-0" Nov 29 07:28:35 crc kubenswrapper[4731]: I1129 07:28:35.625711 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xd6tq\" (UniqueName: \"kubernetes.io/projected/6d3a4fc9-88ad-4fb7-ae26-8cc720a3138e-kube-api-access-xd6tq\") pod \"nova-cell1-novncproxy-0\" (UID: \"6d3a4fc9-88ad-4fb7-ae26-8cc720a3138e\") " pod="openstack/nova-cell1-novncproxy-0" Nov 29 07:28:35 crc kubenswrapper[4731]: I1129 07:28:35.645501 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 29 07:28:35 crc kubenswrapper[4731]: I1129 07:28:35.649404 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 29 07:28:35 crc kubenswrapper[4731]: I1129 07:28:35.672098 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 29 07:28:35 crc kubenswrapper[4731]: I1129 07:28:35.672375 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmb7v\" (UniqueName: \"kubernetes.io/projected/658805c9-72b6-4313-b0d6-0aff821ff88d-kube-api-access-kmb7v\") pod \"nova-api-0\" (UID: \"658805c9-72b6-4313-b0d6-0aff821ff88d\") " pod="openstack/nova-api-0" Nov 29 07:28:35 crc kubenswrapper[4731]: I1129 07:28:35.672501 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jh6sw\" (UniqueName: \"kubernetes.io/projected/47b2aa31-1cd7-4682-a873-6bb35d1a7d6c-kube-api-access-jh6sw\") pod \"nova-metadata-0\" (UID: \"47b2aa31-1cd7-4682-a873-6bb35d1a7d6c\") " pod="openstack/nova-metadata-0" Nov 29 07:28:35 crc kubenswrapper[4731]: I1129 07:28:35.672685 4731 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/658805c9-72b6-4313-b0d6-0aff821ff88d-logs\") pod \"nova-api-0\" (UID: \"658805c9-72b6-4313-b0d6-0aff821ff88d\") " pod="openstack/nova-api-0" Nov 29 07:28:35 crc kubenswrapper[4731]: I1129 07:28:35.673191 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/47b2aa31-1cd7-4682-a873-6bb35d1a7d6c-logs\") pod \"nova-metadata-0\" (UID: \"47b2aa31-1cd7-4682-a873-6bb35d1a7d6c\") " pod="openstack/nova-metadata-0" Nov 29 07:28:35 crc kubenswrapper[4731]: I1129 07:28:35.673279 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/658805c9-72b6-4313-b0d6-0aff821ff88d-config-data\") pod \"nova-api-0\" (UID: \"658805c9-72b6-4313-b0d6-0aff821ff88d\") " pod="openstack/nova-api-0" Nov 29 07:28:35 crc kubenswrapper[4731]: I1129 07:28:35.673400 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/658805c9-72b6-4313-b0d6-0aff821ff88d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"658805c9-72b6-4313-b0d6-0aff821ff88d\") " pod="openstack/nova-api-0" Nov 29 07:28:35 crc kubenswrapper[4731]: I1129 07:28:35.673557 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47b2aa31-1cd7-4682-a873-6bb35d1a7d6c-config-data\") pod \"nova-metadata-0\" (UID: \"47b2aa31-1cd7-4682-a873-6bb35d1a7d6c\") " pod="openstack/nova-metadata-0" Nov 29 07:28:35 crc kubenswrapper[4731]: I1129 07:28:35.673695 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47b2aa31-1cd7-4682-a873-6bb35d1a7d6c-combined-ca-bundle\") pod 
\"nova-metadata-0\" (UID: \"47b2aa31-1cd7-4682-a873-6bb35d1a7d6c\") " pod="openstack/nova-metadata-0" Nov 29 07:28:35 crc kubenswrapper[4731]: I1129 07:28:35.675225 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/47b2aa31-1cd7-4682-a873-6bb35d1a7d6c-logs\") pod \"nova-metadata-0\" (UID: \"47b2aa31-1cd7-4682-a873-6bb35d1a7d6c\") " pod="openstack/nova-metadata-0" Nov 29 07:28:35 crc kubenswrapper[4731]: I1129 07:28:35.691494 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47b2aa31-1cd7-4682-a873-6bb35d1a7d6c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"47b2aa31-1cd7-4682-a873-6bb35d1a7d6c\") " pod="openstack/nova-metadata-0" Nov 29 07:28:35 crc kubenswrapper[4731]: I1129 07:28:35.696168 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47b2aa31-1cd7-4682-a873-6bb35d1a7d6c-config-data\") pod \"nova-metadata-0\" (UID: \"47b2aa31-1cd7-4682-a873-6bb35d1a7d6c\") " pod="openstack/nova-metadata-0" Nov 29 07:28:35 crc kubenswrapper[4731]: I1129 07:28:35.698545 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 29 07:28:35 crc kubenswrapper[4731]: I1129 07:28:35.705752 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jh6sw\" (UniqueName: \"kubernetes.io/projected/47b2aa31-1cd7-4682-a873-6bb35d1a7d6c-kube-api-access-jh6sw\") pod \"nova-metadata-0\" (UID: \"47b2aa31-1cd7-4682-a873-6bb35d1a7d6c\") " pod="openstack/nova-metadata-0" Nov 29 07:28:35 crc kubenswrapper[4731]: I1129 07:28:35.709796 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 29 07:28:35 crc kubenswrapper[4731]: I1129 07:28:35.712073 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-ttpdp"] Nov 29 07:28:35 crc kubenswrapper[4731]: I1129 07:28:35.715311 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-ttpdp" Nov 29 07:28:35 crc kubenswrapper[4731]: I1129 07:28:35.730396 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-ttpdp"] Nov 29 07:28:35 crc kubenswrapper[4731]: I1129 07:28:35.745413 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 29 07:28:35 crc kubenswrapper[4731]: I1129 07:28:35.776924 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/658805c9-72b6-4313-b0d6-0aff821ff88d-config-data\") pod \"nova-api-0\" (UID: \"658805c9-72b6-4313-b0d6-0aff821ff88d\") " pod="openstack/nova-api-0" Nov 29 07:28:35 crc kubenswrapper[4731]: I1129 07:28:35.777295 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d118e0e2-213b-451a-9de7-0e3af1d1bc1a-config\") pod \"dnsmasq-dns-bccf8f775-ttpdp\" (UID: \"d118e0e2-213b-451a-9de7-0e3af1d1bc1a\") " pod="openstack/dnsmasq-dns-bccf8f775-ttpdp" Nov 29 07:28:35 crc kubenswrapper[4731]: I1129 07:28:35.777320 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/658805c9-72b6-4313-b0d6-0aff821ff88d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"658805c9-72b6-4313-b0d6-0aff821ff88d\") " pod="openstack/nova-api-0" Nov 29 07:28:35 crc kubenswrapper[4731]: I1129 07:28:35.777352 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d118e0e2-213b-451a-9de7-0e3af1d1bc1a-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-ttpdp\" (UID: \"d118e0e2-213b-451a-9de7-0e3af1d1bc1a\") " pod="openstack/dnsmasq-dns-bccf8f775-ttpdp" Nov 29 07:28:35 crc kubenswrapper[4731]: I1129 07:28:35.777376 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d118e0e2-213b-451a-9de7-0e3af1d1bc1a-dns-svc\") pod \"dnsmasq-dns-bccf8f775-ttpdp\" (UID: \"d118e0e2-213b-451a-9de7-0e3af1d1bc1a\") " pod="openstack/dnsmasq-dns-bccf8f775-ttpdp" Nov 29 07:28:35 crc kubenswrapper[4731]: I1129 07:28:35.777445 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d118e0e2-213b-451a-9de7-0e3af1d1bc1a-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-ttpdp\" (UID: \"d118e0e2-213b-451a-9de7-0e3af1d1bc1a\") " pod="openstack/dnsmasq-dns-bccf8f775-ttpdp" Nov 29 07:28:35 crc kubenswrapper[4731]: I1129 07:28:35.777487 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d118e0e2-213b-451a-9de7-0e3af1d1bc1a-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-ttpdp\" (UID: \"d118e0e2-213b-451a-9de7-0e3af1d1bc1a\") " pod="openstack/dnsmasq-dns-bccf8f775-ttpdp" Nov 29 07:28:35 crc kubenswrapper[4731]: I1129 07:28:35.777509 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmb7v\" (UniqueName: \"kubernetes.io/projected/658805c9-72b6-4313-b0d6-0aff821ff88d-kube-api-access-kmb7v\") pod \"nova-api-0\" (UID: \"658805c9-72b6-4313-b0d6-0aff821ff88d\") " pod="openstack/nova-api-0" Nov 29 07:28:35 crc kubenswrapper[4731]: I1129 07:28:35.777545 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/06725712-c188-4d76-809b-f3ef6e1ba32f-config-data\") pod \"nova-scheduler-0\" (UID: \"06725712-c188-4d76-809b-f3ef6e1ba32f\") " pod="openstack/nova-scheduler-0" Nov 29 07:28:35 crc kubenswrapper[4731]: I1129 07:28:35.777669 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/658805c9-72b6-4313-b0d6-0aff821ff88d-logs\") pod \"nova-api-0\" (UID: \"658805c9-72b6-4313-b0d6-0aff821ff88d\") " pod="openstack/nova-api-0" Nov 29 07:28:35 crc kubenswrapper[4731]: I1129 07:28:35.777757 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4x5r\" (UniqueName: \"kubernetes.io/projected/d118e0e2-213b-451a-9de7-0e3af1d1bc1a-kube-api-access-c4x5r\") pod \"dnsmasq-dns-bccf8f775-ttpdp\" (UID: \"d118e0e2-213b-451a-9de7-0e3af1d1bc1a\") " pod="openstack/dnsmasq-dns-bccf8f775-ttpdp" Nov 29 07:28:35 crc kubenswrapper[4731]: I1129 07:28:35.777787 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06725712-c188-4d76-809b-f3ef6e1ba32f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"06725712-c188-4d76-809b-f3ef6e1ba32f\") " pod="openstack/nova-scheduler-0" Nov 29 07:28:35 crc kubenswrapper[4731]: I1129 07:28:35.777813 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jnmt\" (UniqueName: \"kubernetes.io/projected/06725712-c188-4d76-809b-f3ef6e1ba32f-kube-api-access-5jnmt\") pod \"nova-scheduler-0\" (UID: \"06725712-c188-4d76-809b-f3ef6e1ba32f\") " pod="openstack/nova-scheduler-0" Nov 29 07:28:35 crc kubenswrapper[4731]: I1129 07:28:35.778716 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/658805c9-72b6-4313-b0d6-0aff821ff88d-logs\") pod \"nova-api-0\" 
(UID: \"658805c9-72b6-4313-b0d6-0aff821ff88d\") " pod="openstack/nova-api-0" Nov 29 07:28:35 crc kubenswrapper[4731]: I1129 07:28:35.786331 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/658805c9-72b6-4313-b0d6-0aff821ff88d-config-data\") pod \"nova-api-0\" (UID: \"658805c9-72b6-4313-b0d6-0aff821ff88d\") " pod="openstack/nova-api-0" Nov 29 07:28:35 crc kubenswrapper[4731]: I1129 07:28:35.796973 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/658805c9-72b6-4313-b0d6-0aff821ff88d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"658805c9-72b6-4313-b0d6-0aff821ff88d\") " pod="openstack/nova-api-0" Nov 29 07:28:35 crc kubenswrapper[4731]: I1129 07:28:35.802239 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmb7v\" (UniqueName: \"kubernetes.io/projected/658805c9-72b6-4313-b0d6-0aff821ff88d-kube-api-access-kmb7v\") pod \"nova-api-0\" (UID: \"658805c9-72b6-4313-b0d6-0aff821ff88d\") " pod="openstack/nova-api-0" Nov 29 07:28:35 crc kubenswrapper[4731]: I1129 07:28:35.880109 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d118e0e2-213b-451a-9de7-0e3af1d1bc1a-config\") pod \"dnsmasq-dns-bccf8f775-ttpdp\" (UID: \"d118e0e2-213b-451a-9de7-0e3af1d1bc1a\") " pod="openstack/dnsmasq-dns-bccf8f775-ttpdp" Nov 29 07:28:35 crc kubenswrapper[4731]: I1129 07:28:35.880206 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d118e0e2-213b-451a-9de7-0e3af1d1bc1a-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-ttpdp\" (UID: \"d118e0e2-213b-451a-9de7-0e3af1d1bc1a\") " pod="openstack/dnsmasq-dns-bccf8f775-ttpdp" Nov 29 07:28:35 crc kubenswrapper[4731]: I1129 07:28:35.880295 4731 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d118e0e2-213b-451a-9de7-0e3af1d1bc1a-dns-svc\") pod \"dnsmasq-dns-bccf8f775-ttpdp\" (UID: \"d118e0e2-213b-451a-9de7-0e3af1d1bc1a\") " pod="openstack/dnsmasq-dns-bccf8f775-ttpdp" Nov 29 07:28:35 crc kubenswrapper[4731]: I1129 07:28:35.880360 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d118e0e2-213b-451a-9de7-0e3af1d1bc1a-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-ttpdp\" (UID: \"d118e0e2-213b-451a-9de7-0e3af1d1bc1a\") " pod="openstack/dnsmasq-dns-bccf8f775-ttpdp" Nov 29 07:28:35 crc kubenswrapper[4731]: I1129 07:28:35.880431 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d118e0e2-213b-451a-9de7-0e3af1d1bc1a-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-ttpdp\" (UID: \"d118e0e2-213b-451a-9de7-0e3af1d1bc1a\") " pod="openstack/dnsmasq-dns-bccf8f775-ttpdp" Nov 29 07:28:35 crc kubenswrapper[4731]: I1129 07:28:35.880522 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06725712-c188-4d76-809b-f3ef6e1ba32f-config-data\") pod \"nova-scheduler-0\" (UID: \"06725712-c188-4d76-809b-f3ef6e1ba32f\") " pod="openstack/nova-scheduler-0" Nov 29 07:28:35 crc kubenswrapper[4731]: I1129 07:28:35.880747 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4x5r\" (UniqueName: \"kubernetes.io/projected/d118e0e2-213b-451a-9de7-0e3af1d1bc1a-kube-api-access-c4x5r\") pod \"dnsmasq-dns-bccf8f775-ttpdp\" (UID: \"d118e0e2-213b-451a-9de7-0e3af1d1bc1a\") " pod="openstack/dnsmasq-dns-bccf8f775-ttpdp" Nov 29 07:28:35 crc kubenswrapper[4731]: I1129 07:28:35.880790 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/06725712-c188-4d76-809b-f3ef6e1ba32f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"06725712-c188-4d76-809b-f3ef6e1ba32f\") " pod="openstack/nova-scheduler-0" Nov 29 07:28:35 crc kubenswrapper[4731]: I1129 07:28:35.880842 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jnmt\" (UniqueName: \"kubernetes.io/projected/06725712-c188-4d76-809b-f3ef6e1ba32f-kube-api-access-5jnmt\") pod \"nova-scheduler-0\" (UID: \"06725712-c188-4d76-809b-f3ef6e1ba32f\") " pod="openstack/nova-scheduler-0" Nov 29 07:28:35 crc kubenswrapper[4731]: I1129 07:28:35.882283 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d118e0e2-213b-451a-9de7-0e3af1d1bc1a-config\") pod \"dnsmasq-dns-bccf8f775-ttpdp\" (UID: \"d118e0e2-213b-451a-9de7-0e3af1d1bc1a\") " pod="openstack/dnsmasq-dns-bccf8f775-ttpdp" Nov 29 07:28:35 crc kubenswrapper[4731]: I1129 07:28:35.882366 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d118e0e2-213b-451a-9de7-0e3af1d1bc1a-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-ttpdp\" (UID: \"d118e0e2-213b-451a-9de7-0e3af1d1bc1a\") " pod="openstack/dnsmasq-dns-bccf8f775-ttpdp" Nov 29 07:28:35 crc kubenswrapper[4731]: I1129 07:28:35.883117 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d118e0e2-213b-451a-9de7-0e3af1d1bc1a-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-ttpdp\" (UID: \"d118e0e2-213b-451a-9de7-0e3af1d1bc1a\") " pod="openstack/dnsmasq-dns-bccf8f775-ttpdp" Nov 29 07:28:35 crc kubenswrapper[4731]: I1129 07:28:35.884463 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d118e0e2-213b-451a-9de7-0e3af1d1bc1a-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-ttpdp\" (UID: 
\"d118e0e2-213b-451a-9de7-0e3af1d1bc1a\") " pod="openstack/dnsmasq-dns-bccf8f775-ttpdp" Nov 29 07:28:35 crc kubenswrapper[4731]: I1129 07:28:35.885170 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d118e0e2-213b-451a-9de7-0e3af1d1bc1a-dns-svc\") pod \"dnsmasq-dns-bccf8f775-ttpdp\" (UID: \"d118e0e2-213b-451a-9de7-0e3af1d1bc1a\") " pod="openstack/dnsmasq-dns-bccf8f775-ttpdp" Nov 29 07:28:35 crc kubenswrapper[4731]: I1129 07:28:35.897773 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06725712-c188-4d76-809b-f3ef6e1ba32f-config-data\") pod \"nova-scheduler-0\" (UID: \"06725712-c188-4d76-809b-f3ef6e1ba32f\") " pod="openstack/nova-scheduler-0" Nov 29 07:28:35 crc kubenswrapper[4731]: I1129 07:28:35.901045 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4x5r\" (UniqueName: \"kubernetes.io/projected/d118e0e2-213b-451a-9de7-0e3af1d1bc1a-kube-api-access-c4x5r\") pod \"dnsmasq-dns-bccf8f775-ttpdp\" (UID: \"d118e0e2-213b-451a-9de7-0e3af1d1bc1a\") " pod="openstack/dnsmasq-dns-bccf8f775-ttpdp" Nov 29 07:28:35 crc kubenswrapper[4731]: I1129 07:28:35.902272 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06725712-c188-4d76-809b-f3ef6e1ba32f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"06725712-c188-4d76-809b-f3ef6e1ba32f\") " pod="openstack/nova-scheduler-0" Nov 29 07:28:35 crc kubenswrapper[4731]: I1129 07:28:35.906123 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jnmt\" (UniqueName: \"kubernetes.io/projected/06725712-c188-4d76-809b-f3ef6e1ba32f-kube-api-access-5jnmt\") pod \"nova-scheduler-0\" (UID: \"06725712-c188-4d76-809b-f3ef6e1ba32f\") " pod="openstack/nova-scheduler-0" Nov 29 07:28:36 crc kubenswrapper[4731]: I1129 07:28:36.065883 4731 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 29 07:28:36 crc kubenswrapper[4731]: I1129 07:28:36.080538 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 29 07:28:36 crc kubenswrapper[4731]: I1129 07:28:36.098350 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-ttpdp" Nov 29 07:28:36 crc kubenswrapper[4731]: I1129 07:28:36.323723 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-v6lwg"] Nov 29 07:28:36 crc kubenswrapper[4731]: I1129 07:28:36.632272 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 29 07:28:36 crc kubenswrapper[4731]: I1129 07:28:36.682098 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-f6jkj"] Nov 29 07:28:36 crc kubenswrapper[4731]: I1129 07:28:36.684230 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-f6jkj" Nov 29 07:28:36 crc kubenswrapper[4731]: I1129 07:28:36.689095 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Nov 29 07:28:36 crc kubenswrapper[4731]: I1129 07:28:36.689320 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 29 07:28:36 crc kubenswrapper[4731]: I1129 07:28:36.704427 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-f6jkj"] Nov 29 07:28:36 crc kubenswrapper[4731]: I1129 07:28:36.808313 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 29 07:28:36 crc kubenswrapper[4731]: I1129 07:28:36.824119 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqgbw\" (UniqueName: \"kubernetes.io/projected/737e38bd-78bb-41ef-acce-f65a427d5bd3-kube-api-access-dqgbw\") pod \"nova-cell1-conductor-db-sync-f6jkj\" (UID: \"737e38bd-78bb-41ef-acce-f65a427d5bd3\") " pod="openstack/nova-cell1-conductor-db-sync-f6jkj" Nov 29 07:28:36 crc kubenswrapper[4731]: I1129 07:28:36.824561 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/737e38bd-78bb-41ef-acce-f65a427d5bd3-config-data\") pod \"nova-cell1-conductor-db-sync-f6jkj\" (UID: \"737e38bd-78bb-41ef-acce-f65a427d5bd3\") " pod="openstack/nova-cell1-conductor-db-sync-f6jkj" Nov 29 07:28:36 crc kubenswrapper[4731]: I1129 07:28:36.824734 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/737e38bd-78bb-41ef-acce-f65a427d5bd3-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-f6jkj\" (UID: \"737e38bd-78bb-41ef-acce-f65a427d5bd3\") " 
pod="openstack/nova-cell1-conductor-db-sync-f6jkj" Nov 29 07:28:36 crc kubenswrapper[4731]: I1129 07:28:36.824885 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/737e38bd-78bb-41ef-acce-f65a427d5bd3-scripts\") pod \"nova-cell1-conductor-db-sync-f6jkj\" (UID: \"737e38bd-78bb-41ef-acce-f65a427d5bd3\") " pod="openstack/nova-cell1-conductor-db-sync-f6jkj" Nov 29 07:28:36 crc kubenswrapper[4731]: I1129 07:28:36.926956 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqgbw\" (UniqueName: \"kubernetes.io/projected/737e38bd-78bb-41ef-acce-f65a427d5bd3-kube-api-access-dqgbw\") pod \"nova-cell1-conductor-db-sync-f6jkj\" (UID: \"737e38bd-78bb-41ef-acce-f65a427d5bd3\") " pod="openstack/nova-cell1-conductor-db-sync-f6jkj" Nov 29 07:28:36 crc kubenswrapper[4731]: I1129 07:28:36.927123 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/737e38bd-78bb-41ef-acce-f65a427d5bd3-config-data\") pod \"nova-cell1-conductor-db-sync-f6jkj\" (UID: \"737e38bd-78bb-41ef-acce-f65a427d5bd3\") " pod="openstack/nova-cell1-conductor-db-sync-f6jkj" Nov 29 07:28:36 crc kubenswrapper[4731]: I1129 07:28:36.927206 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/737e38bd-78bb-41ef-acce-f65a427d5bd3-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-f6jkj\" (UID: \"737e38bd-78bb-41ef-acce-f65a427d5bd3\") " pod="openstack/nova-cell1-conductor-db-sync-f6jkj" Nov 29 07:28:36 crc kubenswrapper[4731]: I1129 07:28:36.927290 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/737e38bd-78bb-41ef-acce-f65a427d5bd3-scripts\") pod \"nova-cell1-conductor-db-sync-f6jkj\" (UID: \"737e38bd-78bb-41ef-acce-f65a427d5bd3\") " 
pod="openstack/nova-cell1-conductor-db-sync-f6jkj" Nov 29 07:28:36 crc kubenswrapper[4731]: I1129 07:28:36.952501 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/737e38bd-78bb-41ef-acce-f65a427d5bd3-scripts\") pod \"nova-cell1-conductor-db-sync-f6jkj\" (UID: \"737e38bd-78bb-41ef-acce-f65a427d5bd3\") " pod="openstack/nova-cell1-conductor-db-sync-f6jkj" Nov 29 07:28:36 crc kubenswrapper[4731]: I1129 07:28:36.953421 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/737e38bd-78bb-41ef-acce-f65a427d5bd3-config-data\") pod \"nova-cell1-conductor-db-sync-f6jkj\" (UID: \"737e38bd-78bb-41ef-acce-f65a427d5bd3\") " pod="openstack/nova-cell1-conductor-db-sync-f6jkj" Nov 29 07:28:36 crc kubenswrapper[4731]: I1129 07:28:36.953628 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/737e38bd-78bb-41ef-acce-f65a427d5bd3-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-f6jkj\" (UID: \"737e38bd-78bb-41ef-acce-f65a427d5bd3\") " pod="openstack/nova-cell1-conductor-db-sync-f6jkj" Nov 29 07:28:36 crc kubenswrapper[4731]: I1129 07:28:36.965659 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqgbw\" (UniqueName: \"kubernetes.io/projected/737e38bd-78bb-41ef-acce-f65a427d5bd3-kube-api-access-dqgbw\") pod \"nova-cell1-conductor-db-sync-f6jkj\" (UID: \"737e38bd-78bb-41ef-acce-f65a427d5bd3\") " pod="openstack/nova-cell1-conductor-db-sync-f6jkj" Nov 29 07:28:37 crc kubenswrapper[4731]: I1129 07:28:37.022099 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 29 07:28:37 crc kubenswrapper[4731]: I1129 07:28:37.037422 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-f6jkj" Nov 29 07:28:37 crc kubenswrapper[4731]: I1129 07:28:37.037989 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 29 07:28:37 crc kubenswrapper[4731]: I1129 07:28:37.058750 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-ttpdp"] Nov 29 07:28:37 crc kubenswrapper[4731]: I1129 07:28:37.337455 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-v6lwg" event={"ID":"abd5f3ab-575e-44b6-aa39-c3b5c44d85b8","Type":"ContainerStarted","Data":"c20e25c36d08973cfe616db90263e03602a800b953e23d7d34ea60864b79220d"} Nov 29 07:28:37 crc kubenswrapper[4731]: I1129 07:28:37.337917 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-v6lwg" event={"ID":"abd5f3ab-575e-44b6-aa39-c3b5c44d85b8","Type":"ContainerStarted","Data":"e45fb8e4c1c492db07ac00a7b3f5a7c07a88541533387d1d56951c391e09539a"} Nov 29 07:28:37 crc kubenswrapper[4731]: I1129 07:28:37.344804 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6d3a4fc9-88ad-4fb7-ae26-8cc720a3138e","Type":"ContainerStarted","Data":"39a68ed66a45064b42886d873296dab53e7695ee29fb0ebf180888957cebcc11"} Nov 29 07:28:37 crc kubenswrapper[4731]: I1129 07:28:37.349128 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"658805c9-72b6-4313-b0d6-0aff821ff88d","Type":"ContainerStarted","Data":"a189d762a0ee86ca7b661b8bf695c66ed21fc168bdc882379bdfd9d51767e0f7"} Nov 29 07:28:37 crc kubenswrapper[4731]: I1129 07:28:37.352079 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"06725712-c188-4d76-809b-f3ef6e1ba32f","Type":"ContainerStarted","Data":"f9fb02e483b3e23ef79b7fdd2af332f7441ea4a79b3d3ebdcb3d8c1847d0148f"} Nov 29 07:28:37 crc kubenswrapper[4731]: I1129 07:28:37.364786 
4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"47b2aa31-1cd7-4682-a873-6bb35d1a7d6c","Type":"ContainerStarted","Data":"c327a8b83e43d8ebf07f878cec48b83802f8fe040c8a4e0045a230b2c1a29912"} Nov 29 07:28:37 crc kubenswrapper[4731]: I1129 07:28:37.371232 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-v6lwg" podStartSLOduration=2.37119833 podStartE2EDuration="2.37119833s" podCreationTimestamp="2025-11-29 07:28:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:28:37.364845548 +0000 UTC m=+1356.255206671" watchObservedRunningTime="2025-11-29 07:28:37.37119833 +0000 UTC m=+1356.261559453" Nov 29 07:28:37 crc kubenswrapper[4731]: I1129 07:28:37.377028 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-ttpdp" event={"ID":"d118e0e2-213b-451a-9de7-0e3af1d1bc1a","Type":"ContainerStarted","Data":"17d5805bcbe609f0d40b5e1a09246b6369a84f78bc90037eec5020b50aaf2068"} Nov 29 07:28:37 crc kubenswrapper[4731]: I1129 07:28:37.694525 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-f6jkj"] Nov 29 07:28:37 crc kubenswrapper[4731]: W1129 07:28:37.742278 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod737e38bd_78bb_41ef_acce_f65a427d5bd3.slice/crio-b652fff462504d37a8edddece85ca24fb7ac5bdb335b1c9f4a0edeb8dd95794d WatchSource:0}: Error finding container b652fff462504d37a8edddece85ca24fb7ac5bdb335b1c9f4a0edeb8dd95794d: Status 404 returned error can't find the container with id b652fff462504d37a8edddece85ca24fb7ac5bdb335b1c9f4a0edeb8dd95794d Nov 29 07:28:38 crc kubenswrapper[4731]: I1129 07:28:38.406199 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell1-conductor-db-sync-f6jkj" event={"ID":"737e38bd-78bb-41ef-acce-f65a427d5bd3","Type":"ContainerStarted","Data":"ffb1b05858464ce19de4cf58d7c628a91cf5c4e1cf012ad715006a4a03dd8fde"} Nov 29 07:28:38 crc kubenswrapper[4731]: I1129 07:28:38.406645 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-f6jkj" event={"ID":"737e38bd-78bb-41ef-acce-f65a427d5bd3","Type":"ContainerStarted","Data":"b652fff462504d37a8edddece85ca24fb7ac5bdb335b1c9f4a0edeb8dd95794d"} Nov 29 07:28:38 crc kubenswrapper[4731]: I1129 07:28:38.416432 4731 generic.go:334] "Generic (PLEG): container finished" podID="d118e0e2-213b-451a-9de7-0e3af1d1bc1a" containerID="7c4850579a2b51d41c122bf876eed5de47b2fd2404bfd62ff31a45e878998c2c" exitCode=0 Nov 29 07:28:38 crc kubenswrapper[4731]: I1129 07:28:38.418157 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-ttpdp" event={"ID":"d118e0e2-213b-451a-9de7-0e3af1d1bc1a","Type":"ContainerDied","Data":"7c4850579a2b51d41c122bf876eed5de47b2fd2404bfd62ff31a45e878998c2c"} Nov 29 07:28:38 crc kubenswrapper[4731]: I1129 07:28:38.455339 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-f6jkj" podStartSLOduration=2.455309978 podStartE2EDuration="2.455309978s" podCreationTimestamp="2025-11-29 07:28:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:28:38.43066558 +0000 UTC m=+1357.321026683" watchObservedRunningTime="2025-11-29 07:28:38.455309978 +0000 UTC m=+1357.345671081" Nov 29 07:28:39 crc kubenswrapper[4731]: I1129 07:28:39.053890 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 29 07:28:39 crc kubenswrapper[4731]: I1129 07:28:39.067436 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 29 07:28:42 crc 
kubenswrapper[4731]: I1129 07:28:42.482959 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6d3a4fc9-88ad-4fb7-ae26-8cc720a3138e","Type":"ContainerStarted","Data":"93e93197f8c2c9d087bd29e8d1e6486a70819eb6eba645d09f67a8a051492f75"} Nov 29 07:28:42 crc kubenswrapper[4731]: I1129 07:28:42.483354 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="6d3a4fc9-88ad-4fb7-ae26-8cc720a3138e" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://93e93197f8c2c9d087bd29e8d1e6486a70819eb6eba645d09f67a8a051492f75" gracePeriod=30 Nov 29 07:28:42 crc kubenswrapper[4731]: I1129 07:28:42.489333 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"658805c9-72b6-4313-b0d6-0aff821ff88d","Type":"ContainerStarted","Data":"5c264731f5cf46f13962a57f07cc031f116eb12e030e8df85ba33ca3a778cd28"} Nov 29 07:28:42 crc kubenswrapper[4731]: I1129 07:28:42.489423 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"658805c9-72b6-4313-b0d6-0aff821ff88d","Type":"ContainerStarted","Data":"b46291618cdcb0abdd8065c13d818708b0273aaba72c818dfadd7baf39ddbb17"} Nov 29 07:28:42 crc kubenswrapper[4731]: I1129 07:28:42.494014 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"06725712-c188-4d76-809b-f3ef6e1ba32f","Type":"ContainerStarted","Data":"bef65603843ce1608eca89ef0e01614468f8947009e8acc57409db60c4b0ee29"} Nov 29 07:28:42 crc kubenswrapper[4731]: I1129 07:28:42.498047 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"47b2aa31-1cd7-4682-a873-6bb35d1a7d6c","Type":"ContainerStarted","Data":"83ffdfd97ef20bb3f8ebbcc1b501064a2c406a72ac1669c92be49ea127b6f37b"} Nov 29 07:28:42 crc kubenswrapper[4731]: I1129 07:28:42.498090 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-metadata-0" event={"ID":"47b2aa31-1cd7-4682-a873-6bb35d1a7d6c","Type":"ContainerStarted","Data":"5238519b0008a1b9a53e6a27238611a1aaeb182607d4ef0c6a5107b68d2b92e8"} Nov 29 07:28:42 crc kubenswrapper[4731]: I1129 07:28:42.498263 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="47b2aa31-1cd7-4682-a873-6bb35d1a7d6c" containerName="nova-metadata-log" containerID="cri-o://5238519b0008a1b9a53e6a27238611a1aaeb182607d4ef0c6a5107b68d2b92e8" gracePeriod=30 Nov 29 07:28:42 crc kubenswrapper[4731]: I1129 07:28:42.498659 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="47b2aa31-1cd7-4682-a873-6bb35d1a7d6c" containerName="nova-metadata-metadata" containerID="cri-o://83ffdfd97ef20bb3f8ebbcc1b501064a2c406a72ac1669c92be49ea127b6f37b" gracePeriod=30 Nov 29 07:28:42 crc kubenswrapper[4731]: I1129 07:28:42.502623 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.710822582 podStartE2EDuration="7.502607111s" podCreationTimestamp="2025-11-29 07:28:35 +0000 UTC" firstStartedPulling="2025-11-29 07:28:36.809813955 +0000 UTC m=+1355.700175058" lastFinishedPulling="2025-11-29 07:28:40.601598484 +0000 UTC m=+1359.491959587" observedRunningTime="2025-11-29 07:28:42.501310534 +0000 UTC m=+1361.391671637" watchObservedRunningTime="2025-11-29 07:28:42.502607111 +0000 UTC m=+1361.392968224" Nov 29 07:28:42 crc kubenswrapper[4731]: I1129 07:28:42.507150 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-ttpdp" event={"ID":"d118e0e2-213b-451a-9de7-0e3af1d1bc1a","Type":"ContainerStarted","Data":"3aca0a39151d985e6a4766b40fc87d9962833f53c34f211489d17aeef3dd42bf"} Nov 29 07:28:42 crc kubenswrapper[4731]: I1129 07:28:42.507620 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/dnsmasq-dns-bccf8f775-ttpdp" Nov 29 07:28:42 crc kubenswrapper[4731]: I1129 07:28:42.533000 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=4.010735021 podStartE2EDuration="7.532968918s" podCreationTimestamp="2025-11-29 07:28:35 +0000 UTC" firstStartedPulling="2025-11-29 07:28:37.079503751 +0000 UTC m=+1355.969864854" lastFinishedPulling="2025-11-29 07:28:40.601737648 +0000 UTC m=+1359.492098751" observedRunningTime="2025-11-29 07:28:42.52182113 +0000 UTC m=+1361.412182233" watchObservedRunningTime="2025-11-29 07:28:42.532968918 +0000 UTC m=+1361.423330021" Nov 29 07:28:42 crc kubenswrapper[4731]: I1129 07:28:42.552793 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.614253465 podStartE2EDuration="7.552767824s" podCreationTimestamp="2025-11-29 07:28:35 +0000 UTC" firstStartedPulling="2025-11-29 07:28:36.673216647 +0000 UTC m=+1355.563577750" lastFinishedPulling="2025-11-29 07:28:40.611731016 +0000 UTC m=+1359.502092109" observedRunningTime="2025-11-29 07:28:42.54702929 +0000 UTC m=+1361.437390383" watchObservedRunningTime="2025-11-29 07:28:42.552767824 +0000 UTC m=+1361.443128927" Nov 29 07:28:42 crc kubenswrapper[4731]: I1129 07:28:42.571785 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.976560231 podStartE2EDuration="7.571758907s" podCreationTimestamp="2025-11-29 07:28:35 +0000 UTC" firstStartedPulling="2025-11-29 07:28:37.006973905 +0000 UTC m=+1355.897335008" lastFinishedPulling="2025-11-29 07:28:40.602172581 +0000 UTC m=+1359.492533684" observedRunningTime="2025-11-29 07:28:42.570502301 +0000 UTC m=+1361.460863404" watchObservedRunningTime="2025-11-29 07:28:42.571758907 +0000 UTC m=+1361.462120010" Nov 29 07:28:42 crc kubenswrapper[4731]: I1129 07:28:42.603244 4731 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/dnsmasq-dns-bccf8f775-ttpdp" podStartSLOduration=7.603213186 podStartE2EDuration="7.603213186s" podCreationTimestamp="2025-11-29 07:28:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:28:42.595153885 +0000 UTC m=+1361.485514988" watchObservedRunningTime="2025-11-29 07:28:42.603213186 +0000 UTC m=+1361.493574289" Nov 29 07:28:43 crc kubenswrapper[4731]: I1129 07:28:43.537763 4731 generic.go:334] "Generic (PLEG): container finished" podID="47b2aa31-1cd7-4682-a873-6bb35d1a7d6c" containerID="83ffdfd97ef20bb3f8ebbcc1b501064a2c406a72ac1669c92be49ea127b6f37b" exitCode=0 Nov 29 07:28:43 crc kubenswrapper[4731]: I1129 07:28:43.538226 4731 generic.go:334] "Generic (PLEG): container finished" podID="47b2aa31-1cd7-4682-a873-6bb35d1a7d6c" containerID="5238519b0008a1b9a53e6a27238611a1aaeb182607d4ef0c6a5107b68d2b92e8" exitCode=143 Nov 29 07:28:43 crc kubenswrapper[4731]: I1129 07:28:43.537875 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"47b2aa31-1cd7-4682-a873-6bb35d1a7d6c","Type":"ContainerDied","Data":"83ffdfd97ef20bb3f8ebbcc1b501064a2c406a72ac1669c92be49ea127b6f37b"} Nov 29 07:28:43 crc kubenswrapper[4731]: I1129 07:28:43.540201 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"47b2aa31-1cd7-4682-a873-6bb35d1a7d6c","Type":"ContainerDied","Data":"5238519b0008a1b9a53e6a27238611a1aaeb182607d4ef0c6a5107b68d2b92e8"} Nov 29 07:28:43 crc kubenswrapper[4731]: I1129 07:28:43.696136 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 29 07:28:43 crc kubenswrapper[4731]: I1129 07:28:43.866441 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jh6sw\" (UniqueName: \"kubernetes.io/projected/47b2aa31-1cd7-4682-a873-6bb35d1a7d6c-kube-api-access-jh6sw\") pod \"47b2aa31-1cd7-4682-a873-6bb35d1a7d6c\" (UID: \"47b2aa31-1cd7-4682-a873-6bb35d1a7d6c\") " Nov 29 07:28:43 crc kubenswrapper[4731]: I1129 07:28:43.866633 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47b2aa31-1cd7-4682-a873-6bb35d1a7d6c-config-data\") pod \"47b2aa31-1cd7-4682-a873-6bb35d1a7d6c\" (UID: \"47b2aa31-1cd7-4682-a873-6bb35d1a7d6c\") " Nov 29 07:28:43 crc kubenswrapper[4731]: I1129 07:28:43.866756 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/47b2aa31-1cd7-4682-a873-6bb35d1a7d6c-logs\") pod \"47b2aa31-1cd7-4682-a873-6bb35d1a7d6c\" (UID: \"47b2aa31-1cd7-4682-a873-6bb35d1a7d6c\") " Nov 29 07:28:43 crc kubenswrapper[4731]: I1129 07:28:43.866992 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47b2aa31-1cd7-4682-a873-6bb35d1a7d6c-combined-ca-bundle\") pod \"47b2aa31-1cd7-4682-a873-6bb35d1a7d6c\" (UID: \"47b2aa31-1cd7-4682-a873-6bb35d1a7d6c\") " Nov 29 07:28:43 crc kubenswrapper[4731]: I1129 07:28:43.867353 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47b2aa31-1cd7-4682-a873-6bb35d1a7d6c-logs" (OuterVolumeSpecName: "logs") pod "47b2aa31-1cd7-4682-a873-6bb35d1a7d6c" (UID: "47b2aa31-1cd7-4682-a873-6bb35d1a7d6c"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:28:43 crc kubenswrapper[4731]: I1129 07:28:43.868138 4731 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/47b2aa31-1cd7-4682-a873-6bb35d1a7d6c-logs\") on node \"crc\" DevicePath \"\"" Nov 29 07:28:43 crc kubenswrapper[4731]: I1129 07:28:43.880857 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47b2aa31-1cd7-4682-a873-6bb35d1a7d6c-kube-api-access-jh6sw" (OuterVolumeSpecName: "kube-api-access-jh6sw") pod "47b2aa31-1cd7-4682-a873-6bb35d1a7d6c" (UID: "47b2aa31-1cd7-4682-a873-6bb35d1a7d6c"). InnerVolumeSpecName "kube-api-access-jh6sw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:28:43 crc kubenswrapper[4731]: I1129 07:28:43.900104 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47b2aa31-1cd7-4682-a873-6bb35d1a7d6c-config-data" (OuterVolumeSpecName: "config-data") pod "47b2aa31-1cd7-4682-a873-6bb35d1a7d6c" (UID: "47b2aa31-1cd7-4682-a873-6bb35d1a7d6c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:28:43 crc kubenswrapper[4731]: I1129 07:28:43.904763 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47b2aa31-1cd7-4682-a873-6bb35d1a7d6c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "47b2aa31-1cd7-4682-a873-6bb35d1a7d6c" (UID: "47b2aa31-1cd7-4682-a873-6bb35d1a7d6c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:28:43 crc kubenswrapper[4731]: I1129 07:28:43.970373 4731 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47b2aa31-1cd7-4682-a873-6bb35d1a7d6c-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:28:43 crc kubenswrapper[4731]: I1129 07:28:43.970414 4731 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47b2aa31-1cd7-4682-a873-6bb35d1a7d6c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:28:43 crc kubenswrapper[4731]: I1129 07:28:43.970428 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jh6sw\" (UniqueName: \"kubernetes.io/projected/47b2aa31-1cd7-4682-a873-6bb35d1a7d6c-kube-api-access-jh6sw\") on node \"crc\" DevicePath \"\"" Nov 29 07:28:44 crc kubenswrapper[4731]: I1129 07:28:44.555544 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"47b2aa31-1cd7-4682-a873-6bb35d1a7d6c","Type":"ContainerDied","Data":"c327a8b83e43d8ebf07f878cec48b83802f8fe040c8a4e0045a230b2c1a29912"} Nov 29 07:28:44 crc kubenswrapper[4731]: I1129 07:28:44.555658 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 29 07:28:44 crc kubenswrapper[4731]: I1129 07:28:44.556234 4731 scope.go:117] "RemoveContainer" containerID="83ffdfd97ef20bb3f8ebbcc1b501064a2c406a72ac1669c92be49ea127b6f37b" Nov 29 07:28:44 crc kubenswrapper[4731]: I1129 07:28:44.597968 4731 scope.go:117] "RemoveContainer" containerID="5238519b0008a1b9a53e6a27238611a1aaeb182607d4ef0c6a5107b68d2b92e8" Nov 29 07:28:44 crc kubenswrapper[4731]: I1129 07:28:44.610363 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 29 07:28:44 crc kubenswrapper[4731]: I1129 07:28:44.624095 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 29 07:28:44 crc kubenswrapper[4731]: I1129 07:28:44.637717 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 29 07:28:44 crc kubenswrapper[4731]: E1129 07:28:44.638230 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47b2aa31-1cd7-4682-a873-6bb35d1a7d6c" containerName="nova-metadata-log" Nov 29 07:28:44 crc kubenswrapper[4731]: I1129 07:28:44.638247 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="47b2aa31-1cd7-4682-a873-6bb35d1a7d6c" containerName="nova-metadata-log" Nov 29 07:28:44 crc kubenswrapper[4731]: E1129 07:28:44.638309 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47b2aa31-1cd7-4682-a873-6bb35d1a7d6c" containerName="nova-metadata-metadata" Nov 29 07:28:44 crc kubenswrapper[4731]: I1129 07:28:44.638315 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="47b2aa31-1cd7-4682-a873-6bb35d1a7d6c" containerName="nova-metadata-metadata" Nov 29 07:28:44 crc kubenswrapper[4731]: I1129 07:28:44.638527 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="47b2aa31-1cd7-4682-a873-6bb35d1a7d6c" containerName="nova-metadata-metadata" Nov 29 07:28:44 crc kubenswrapper[4731]: I1129 07:28:44.638549 4731 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="47b2aa31-1cd7-4682-a873-6bb35d1a7d6c" containerName="nova-metadata-log" Nov 29 07:28:44 crc kubenswrapper[4731]: I1129 07:28:44.642978 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 29 07:28:44 crc kubenswrapper[4731]: I1129 07:28:44.646111 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 29 07:28:44 crc kubenswrapper[4731]: I1129 07:28:44.651980 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 29 07:28:44 crc kubenswrapper[4731]: I1129 07:28:44.664509 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 29 07:28:44 crc kubenswrapper[4731]: I1129 07:28:44.793774 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kphtm\" (UniqueName: \"kubernetes.io/projected/537f4a14-bb1f-49e8-b260-95304eabd96a-kube-api-access-kphtm\") pod \"nova-metadata-0\" (UID: \"537f4a14-bb1f-49e8-b260-95304eabd96a\") " pod="openstack/nova-metadata-0" Nov 29 07:28:44 crc kubenswrapper[4731]: I1129 07:28:44.793858 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/537f4a14-bb1f-49e8-b260-95304eabd96a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"537f4a14-bb1f-49e8-b260-95304eabd96a\") " pod="openstack/nova-metadata-0" Nov 29 07:28:44 crc kubenswrapper[4731]: I1129 07:28:44.793922 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/537f4a14-bb1f-49e8-b260-95304eabd96a-config-data\") pod \"nova-metadata-0\" (UID: \"537f4a14-bb1f-49e8-b260-95304eabd96a\") " pod="openstack/nova-metadata-0" Nov 29 07:28:44 crc kubenswrapper[4731]: I1129 07:28:44.793972 4731 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/537f4a14-bb1f-49e8-b260-95304eabd96a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"537f4a14-bb1f-49e8-b260-95304eabd96a\") " pod="openstack/nova-metadata-0" Nov 29 07:28:44 crc kubenswrapper[4731]: I1129 07:28:44.794062 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/537f4a14-bb1f-49e8-b260-95304eabd96a-logs\") pod \"nova-metadata-0\" (UID: \"537f4a14-bb1f-49e8-b260-95304eabd96a\") " pod="openstack/nova-metadata-0" Nov 29 07:28:44 crc kubenswrapper[4731]: I1129 07:28:44.896231 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/537f4a14-bb1f-49e8-b260-95304eabd96a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"537f4a14-bb1f-49e8-b260-95304eabd96a\") " pod="openstack/nova-metadata-0" Nov 29 07:28:44 crc kubenswrapper[4731]: I1129 07:28:44.896335 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/537f4a14-bb1f-49e8-b260-95304eabd96a-logs\") pod \"nova-metadata-0\" (UID: \"537f4a14-bb1f-49e8-b260-95304eabd96a\") " pod="openstack/nova-metadata-0" Nov 29 07:28:44 crc kubenswrapper[4731]: I1129 07:28:44.896412 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kphtm\" (UniqueName: \"kubernetes.io/projected/537f4a14-bb1f-49e8-b260-95304eabd96a-kube-api-access-kphtm\") pod \"nova-metadata-0\" (UID: \"537f4a14-bb1f-49e8-b260-95304eabd96a\") " pod="openstack/nova-metadata-0" Nov 29 07:28:44 crc kubenswrapper[4731]: I1129 07:28:44.896444 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/537f4a14-bb1f-49e8-b260-95304eabd96a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"537f4a14-bb1f-49e8-b260-95304eabd96a\") " pod="openstack/nova-metadata-0" Nov 29 07:28:44 crc kubenswrapper[4731]: I1129 07:28:44.896486 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/537f4a14-bb1f-49e8-b260-95304eabd96a-config-data\") pod \"nova-metadata-0\" (UID: \"537f4a14-bb1f-49e8-b260-95304eabd96a\") " pod="openstack/nova-metadata-0" Nov 29 07:28:44 crc kubenswrapper[4731]: I1129 07:28:44.897752 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/537f4a14-bb1f-49e8-b260-95304eabd96a-logs\") pod \"nova-metadata-0\" (UID: \"537f4a14-bb1f-49e8-b260-95304eabd96a\") " pod="openstack/nova-metadata-0" Nov 29 07:28:44 crc kubenswrapper[4731]: I1129 07:28:44.904526 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/537f4a14-bb1f-49e8-b260-95304eabd96a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"537f4a14-bb1f-49e8-b260-95304eabd96a\") " pod="openstack/nova-metadata-0" Nov 29 07:28:44 crc kubenswrapper[4731]: I1129 07:28:44.904580 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/537f4a14-bb1f-49e8-b260-95304eabd96a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"537f4a14-bb1f-49e8-b260-95304eabd96a\") " pod="openstack/nova-metadata-0" Nov 29 07:28:44 crc kubenswrapper[4731]: I1129 07:28:44.905816 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/537f4a14-bb1f-49e8-b260-95304eabd96a-config-data\") pod \"nova-metadata-0\" (UID: \"537f4a14-bb1f-49e8-b260-95304eabd96a\") " pod="openstack/nova-metadata-0" Nov 29 07:28:44 crc kubenswrapper[4731]: I1129 
07:28:44.915056 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kphtm\" (UniqueName: \"kubernetes.io/projected/537f4a14-bb1f-49e8-b260-95304eabd96a-kube-api-access-kphtm\") pod \"nova-metadata-0\" (UID: \"537f4a14-bb1f-49e8-b260-95304eabd96a\") " pod="openstack/nova-metadata-0" Nov 29 07:28:44 crc kubenswrapper[4731]: I1129 07:28:44.966988 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 29 07:28:45 crc kubenswrapper[4731]: I1129 07:28:45.456374 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 29 07:28:45 crc kubenswrapper[4731]: I1129 07:28:45.569519 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"537f4a14-bb1f-49e8-b260-95304eabd96a","Type":"ContainerStarted","Data":"2975398182e316d455557de7b99e193e3cbaf074f88464da466f8d7787764ec0"} Nov 29 07:28:45 crc kubenswrapper[4731]: I1129 07:28:45.710777 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Nov 29 07:28:45 crc kubenswrapper[4731]: I1129 07:28:45.820092 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47b2aa31-1cd7-4682-a873-6bb35d1a7d6c" path="/var/lib/kubelet/pods/47b2aa31-1cd7-4682-a873-6bb35d1a7d6c/volumes" Nov 29 07:28:46 crc kubenswrapper[4731]: I1129 07:28:46.066205 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 29 07:28:46 crc kubenswrapper[4731]: I1129 07:28:46.066462 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 29 07:28:46 crc kubenswrapper[4731]: I1129 07:28:46.081584 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 29 07:28:46 crc kubenswrapper[4731]: I1129 07:28:46.081692 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openstack/nova-scheduler-0" Nov 29 07:28:46 crc kubenswrapper[4731]: I1129 07:28:46.101587 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-bccf8f775-ttpdp" Nov 29 07:28:46 crc kubenswrapper[4731]: I1129 07:28:46.124996 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 29 07:28:46 crc kubenswrapper[4731]: I1129 07:28:46.207813 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-ljnvk"] Nov 29 07:28:46 crc kubenswrapper[4731]: I1129 07:28:46.208674 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6578955fd5-ljnvk" podUID="9eee44fb-eee4-4aa9-9a6f-680039d29c74" containerName="dnsmasq-dns" containerID="cri-o://6d0cb6056364a1288593f2cbeece05032788ee6a74c0eb18185296a1f26d2934" gracePeriod=10 Nov 29 07:28:46 crc kubenswrapper[4731]: I1129 07:28:46.587776 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"537f4a14-bb1f-49e8-b260-95304eabd96a","Type":"ContainerStarted","Data":"5a9e1503fb570a55169a59974353d61e688b2ab2b8b4f2f379846367d5ac3e8a"} Nov 29 07:28:46 crc kubenswrapper[4731]: I1129 07:28:46.587892 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"537f4a14-bb1f-49e8-b260-95304eabd96a","Type":"ContainerStarted","Data":"0cabeeba6346e927edd1afd3b022720d665b038c2a4435ece50f8c94a046d2e0"} Nov 29 07:28:46 crc kubenswrapper[4731]: I1129 07:28:46.596211 4731 generic.go:334] "Generic (PLEG): container finished" podID="9eee44fb-eee4-4aa9-9a6f-680039d29c74" containerID="6d0cb6056364a1288593f2cbeece05032788ee6a74c0eb18185296a1f26d2934" exitCode=0 Nov 29 07:28:46 crc kubenswrapper[4731]: I1129 07:28:46.596300 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-ljnvk" 
event={"ID":"9eee44fb-eee4-4aa9-9a6f-680039d29c74","Type":"ContainerDied","Data":"6d0cb6056364a1288593f2cbeece05032788ee6a74c0eb18185296a1f26d2934"} Nov 29 07:28:46 crc kubenswrapper[4731]: I1129 07:28:46.599082 4731 generic.go:334] "Generic (PLEG): container finished" podID="abd5f3ab-575e-44b6-aa39-c3b5c44d85b8" containerID="c20e25c36d08973cfe616db90263e03602a800b953e23d7d34ea60864b79220d" exitCode=0 Nov 29 07:28:46 crc kubenswrapper[4731]: I1129 07:28:46.599153 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-v6lwg" event={"ID":"abd5f3ab-575e-44b6-aa39-c3b5c44d85b8","Type":"ContainerDied","Data":"c20e25c36d08973cfe616db90263e03602a800b953e23d7d34ea60864b79220d"} Nov 29 07:28:46 crc kubenswrapper[4731]: I1129 07:28:46.655514 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.655466669 podStartE2EDuration="2.655466669s" podCreationTimestamp="2025-11-29 07:28:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:28:46.621162709 +0000 UTC m=+1365.511523812" watchObservedRunningTime="2025-11-29 07:28:46.655466669 +0000 UTC m=+1365.545827772" Nov 29 07:28:46 crc kubenswrapper[4731]: I1129 07:28:46.661350 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 29 07:28:46 crc kubenswrapper[4731]: I1129 07:28:46.834516 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-ljnvk" Nov 29 07:28:46 crc kubenswrapper[4731]: I1129 07:28:46.952551 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9eee44fb-eee4-4aa9-9a6f-680039d29c74-dns-swift-storage-0\") pod \"9eee44fb-eee4-4aa9-9a6f-680039d29c74\" (UID: \"9eee44fb-eee4-4aa9-9a6f-680039d29c74\") " Nov 29 07:28:46 crc kubenswrapper[4731]: I1129 07:28:46.952632 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9eee44fb-eee4-4aa9-9a6f-680039d29c74-dns-svc\") pod \"9eee44fb-eee4-4aa9-9a6f-680039d29c74\" (UID: \"9eee44fb-eee4-4aa9-9a6f-680039d29c74\") " Nov 29 07:28:46 crc kubenswrapper[4731]: I1129 07:28:46.952870 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kplw7\" (UniqueName: \"kubernetes.io/projected/9eee44fb-eee4-4aa9-9a6f-680039d29c74-kube-api-access-kplw7\") pod \"9eee44fb-eee4-4aa9-9a6f-680039d29c74\" (UID: \"9eee44fb-eee4-4aa9-9a6f-680039d29c74\") " Nov 29 07:28:46 crc kubenswrapper[4731]: I1129 07:28:46.952903 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9eee44fb-eee4-4aa9-9a6f-680039d29c74-ovsdbserver-nb\") pod \"9eee44fb-eee4-4aa9-9a6f-680039d29c74\" (UID: \"9eee44fb-eee4-4aa9-9a6f-680039d29c74\") " Nov 29 07:28:46 crc kubenswrapper[4731]: I1129 07:28:46.952995 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9eee44fb-eee4-4aa9-9a6f-680039d29c74-ovsdbserver-sb\") pod \"9eee44fb-eee4-4aa9-9a6f-680039d29c74\" (UID: \"9eee44fb-eee4-4aa9-9a6f-680039d29c74\") " Nov 29 07:28:46 crc kubenswrapper[4731]: I1129 07:28:46.953072 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/9eee44fb-eee4-4aa9-9a6f-680039d29c74-config\") pod \"9eee44fb-eee4-4aa9-9a6f-680039d29c74\" (UID: \"9eee44fb-eee4-4aa9-9a6f-680039d29c74\") " Nov 29 07:28:46 crc kubenswrapper[4731]: I1129 07:28:46.967474 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9eee44fb-eee4-4aa9-9a6f-680039d29c74-kube-api-access-kplw7" (OuterVolumeSpecName: "kube-api-access-kplw7") pod "9eee44fb-eee4-4aa9-9a6f-680039d29c74" (UID: "9eee44fb-eee4-4aa9-9a6f-680039d29c74"). InnerVolumeSpecName "kube-api-access-kplw7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:28:47 crc kubenswrapper[4731]: I1129 07:28:47.025155 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9eee44fb-eee4-4aa9-9a6f-680039d29c74-config" (OuterVolumeSpecName: "config") pod "9eee44fb-eee4-4aa9-9a6f-680039d29c74" (UID: "9eee44fb-eee4-4aa9-9a6f-680039d29c74"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:28:47 crc kubenswrapper[4731]: I1129 07:28:47.061233 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kplw7\" (UniqueName: \"kubernetes.io/projected/9eee44fb-eee4-4aa9-9a6f-680039d29c74-kube-api-access-kplw7\") on node \"crc\" DevicePath \"\"" Nov 29 07:28:47 crc kubenswrapper[4731]: I1129 07:28:47.061275 4731 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9eee44fb-eee4-4aa9-9a6f-680039d29c74-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:28:47 crc kubenswrapper[4731]: I1129 07:28:47.064259 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9eee44fb-eee4-4aa9-9a6f-680039d29c74-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9eee44fb-eee4-4aa9-9a6f-680039d29c74" (UID: "9eee44fb-eee4-4aa9-9a6f-680039d29c74"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:28:47 crc kubenswrapper[4731]: I1129 07:28:47.067080 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9eee44fb-eee4-4aa9-9a6f-680039d29c74-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9eee44fb-eee4-4aa9-9a6f-680039d29c74" (UID: "9eee44fb-eee4-4aa9-9a6f-680039d29c74"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:28:47 crc kubenswrapper[4731]: I1129 07:28:47.081424 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9eee44fb-eee4-4aa9-9a6f-680039d29c74-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "9eee44fb-eee4-4aa9-9a6f-680039d29c74" (UID: "9eee44fb-eee4-4aa9-9a6f-680039d29c74"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:28:47 crc kubenswrapper[4731]: I1129 07:28:47.090288 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9eee44fb-eee4-4aa9-9a6f-680039d29c74-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9eee44fb-eee4-4aa9-9a6f-680039d29c74" (UID: "9eee44fb-eee4-4aa9-9a6f-680039d29c74"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:28:47 crc kubenswrapper[4731]: I1129 07:28:47.149166 4731 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="658805c9-72b6-4313-b0d6-0aff821ff88d" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.188:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 29 07:28:47 crc kubenswrapper[4731]: I1129 07:28:47.149417 4731 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="658805c9-72b6-4313-b0d6-0aff821ff88d" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.188:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 29 07:28:47 crc kubenswrapper[4731]: I1129 07:28:47.163652 4731 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9eee44fb-eee4-4aa9-9a6f-680039d29c74-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 29 07:28:47 crc kubenswrapper[4731]: I1129 07:28:47.164054 4731 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9eee44fb-eee4-4aa9-9a6f-680039d29c74-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 29 07:28:47 crc kubenswrapper[4731]: I1129 07:28:47.164136 4731 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9eee44fb-eee4-4aa9-9a6f-680039d29c74-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 29 07:28:47 crc kubenswrapper[4731]: I1129 07:28:47.164208 4731 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9eee44fb-eee4-4aa9-9a6f-680039d29c74-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 29 07:28:47 crc kubenswrapper[4731]: I1129 07:28:47.612954 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-6578955fd5-ljnvk" event={"ID":"9eee44fb-eee4-4aa9-9a6f-680039d29c74","Type":"ContainerDied","Data":"6e1a8f2431f90e0c58899e95c99e81110014270be8eb27e0c575600813eab8ba"} Nov 29 07:28:47 crc kubenswrapper[4731]: I1129 07:28:47.613244 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-ljnvk" Nov 29 07:28:47 crc kubenswrapper[4731]: I1129 07:28:47.614900 4731 scope.go:117] "RemoveContainer" containerID="6d0cb6056364a1288593f2cbeece05032788ee6a74c0eb18185296a1f26d2934" Nov 29 07:28:47 crc kubenswrapper[4731]: I1129 07:28:47.652352 4731 scope.go:117] "RemoveContainer" containerID="597bbb99bcf87b0ca429cd09e8416acfd698685284d7efcecd488aeb6c696509" Nov 29 07:28:47 crc kubenswrapper[4731]: I1129 07:28:47.673775 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-ljnvk"] Nov 29 07:28:47 crc kubenswrapper[4731]: I1129 07:28:47.702868 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-ljnvk"] Nov 29 07:28:47 crc kubenswrapper[4731]: I1129 07:28:47.827808 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9eee44fb-eee4-4aa9-9a6f-680039d29c74" path="/var/lib/kubelet/pods/9eee44fb-eee4-4aa9-9a6f-680039d29c74/volumes" Nov 29 07:28:48 crc kubenswrapper[4731]: I1129 07:28:48.022362 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-v6lwg" Nov 29 07:28:48 crc kubenswrapper[4731]: I1129 07:28:48.191838 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abd5f3ab-575e-44b6-aa39-c3b5c44d85b8-config-data\") pod \"abd5f3ab-575e-44b6-aa39-c3b5c44d85b8\" (UID: \"abd5f3ab-575e-44b6-aa39-c3b5c44d85b8\") " Nov 29 07:28:48 crc kubenswrapper[4731]: I1129 07:28:48.192252 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/abd5f3ab-575e-44b6-aa39-c3b5c44d85b8-scripts\") pod \"abd5f3ab-575e-44b6-aa39-c3b5c44d85b8\" (UID: \"abd5f3ab-575e-44b6-aa39-c3b5c44d85b8\") " Nov 29 07:28:48 crc kubenswrapper[4731]: I1129 07:28:48.192533 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abd5f3ab-575e-44b6-aa39-c3b5c44d85b8-combined-ca-bundle\") pod \"abd5f3ab-575e-44b6-aa39-c3b5c44d85b8\" (UID: \"abd5f3ab-575e-44b6-aa39-c3b5c44d85b8\") " Nov 29 07:28:48 crc kubenswrapper[4731]: I1129 07:28:48.192625 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t6xx9\" (UniqueName: \"kubernetes.io/projected/abd5f3ab-575e-44b6-aa39-c3b5c44d85b8-kube-api-access-t6xx9\") pod \"abd5f3ab-575e-44b6-aa39-c3b5c44d85b8\" (UID: \"abd5f3ab-575e-44b6-aa39-c3b5c44d85b8\") " Nov 29 07:28:48 crc kubenswrapper[4731]: I1129 07:28:48.198621 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abd5f3ab-575e-44b6-aa39-c3b5c44d85b8-scripts" (OuterVolumeSpecName: "scripts") pod "abd5f3ab-575e-44b6-aa39-c3b5c44d85b8" (UID: "abd5f3ab-575e-44b6-aa39-c3b5c44d85b8"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:28:48 crc kubenswrapper[4731]: I1129 07:28:48.218902 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abd5f3ab-575e-44b6-aa39-c3b5c44d85b8-kube-api-access-t6xx9" (OuterVolumeSpecName: "kube-api-access-t6xx9") pod "abd5f3ab-575e-44b6-aa39-c3b5c44d85b8" (UID: "abd5f3ab-575e-44b6-aa39-c3b5c44d85b8"). InnerVolumeSpecName "kube-api-access-t6xx9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:28:48 crc kubenswrapper[4731]: I1129 07:28:48.234163 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abd5f3ab-575e-44b6-aa39-c3b5c44d85b8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "abd5f3ab-575e-44b6-aa39-c3b5c44d85b8" (UID: "abd5f3ab-575e-44b6-aa39-c3b5c44d85b8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:28:48 crc kubenswrapper[4731]: I1129 07:28:48.268690 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abd5f3ab-575e-44b6-aa39-c3b5c44d85b8-config-data" (OuterVolumeSpecName: "config-data") pod "abd5f3ab-575e-44b6-aa39-c3b5c44d85b8" (UID: "abd5f3ab-575e-44b6-aa39-c3b5c44d85b8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:28:48 crc kubenswrapper[4731]: I1129 07:28:48.295260 4731 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abd5f3ab-575e-44b6-aa39-c3b5c44d85b8-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:28:48 crc kubenswrapper[4731]: I1129 07:28:48.295299 4731 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/abd5f3ab-575e-44b6-aa39-c3b5c44d85b8-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:28:48 crc kubenswrapper[4731]: I1129 07:28:48.295312 4731 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abd5f3ab-575e-44b6-aa39-c3b5c44d85b8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:28:48 crc kubenswrapper[4731]: I1129 07:28:48.295325 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t6xx9\" (UniqueName: \"kubernetes.io/projected/abd5f3ab-575e-44b6-aa39-c3b5c44d85b8-kube-api-access-t6xx9\") on node \"crc\" DevicePath \"\"" Nov 29 07:28:48 crc kubenswrapper[4731]: I1129 07:28:48.628418 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-v6lwg" event={"ID":"abd5f3ab-575e-44b6-aa39-c3b5c44d85b8","Type":"ContainerDied","Data":"e45fb8e4c1c492db07ac00a7b3f5a7c07a88541533387d1d56951c391e09539a"} Nov 29 07:28:48 crc kubenswrapper[4731]: I1129 07:28:48.628480 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e45fb8e4c1c492db07ac00a7b3f5a7c07a88541533387d1d56951c391e09539a" Nov 29 07:28:48 crc kubenswrapper[4731]: I1129 07:28:48.628552 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-v6lwg" Nov 29 07:28:48 crc kubenswrapper[4731]: I1129 07:28:48.899417 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 29 07:28:48 crc kubenswrapper[4731]: I1129 07:28:48.899843 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="658805c9-72b6-4313-b0d6-0aff821ff88d" containerName="nova-api-log" containerID="cri-o://b46291618cdcb0abdd8065c13d818708b0273aaba72c818dfadd7baf39ddbb17" gracePeriod=30 Nov 29 07:28:48 crc kubenswrapper[4731]: I1129 07:28:48.900734 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="658805c9-72b6-4313-b0d6-0aff821ff88d" containerName="nova-api-api" containerID="cri-o://5c264731f5cf46f13962a57f07cc031f116eb12e030e8df85ba33ca3a778cd28" gracePeriod=30 Nov 29 07:28:48 crc kubenswrapper[4731]: I1129 07:28:48.967632 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 29 07:28:48 crc kubenswrapper[4731]: I1129 07:28:48.968845 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="06725712-c188-4d76-809b-f3ef6e1ba32f" containerName="nova-scheduler-scheduler" containerID="cri-o://bef65603843ce1608eca89ef0e01614468f8947009e8acc57409db60c4b0ee29" gracePeriod=30 Nov 29 07:28:48 crc kubenswrapper[4731]: I1129 07:28:48.990613 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 29 07:28:48 crc kubenswrapper[4731]: I1129 07:28:48.990968 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="537f4a14-bb1f-49e8-b260-95304eabd96a" containerName="nova-metadata-log" containerID="cri-o://0cabeeba6346e927edd1afd3b022720d665b038c2a4435ece50f8c94a046d2e0" gracePeriod=30 Nov 29 07:28:48 crc kubenswrapper[4731]: I1129 07:28:48.991621 4731 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="537f4a14-bb1f-49e8-b260-95304eabd96a" containerName="nova-metadata-metadata" containerID="cri-o://5a9e1503fb570a55169a59974353d61e688b2ab2b8b4f2f379846367d5ac3e8a" gracePeriod=30 Nov 29 07:28:49 crc kubenswrapper[4731]: I1129 07:28:49.658885 4731 generic.go:334] "Generic (PLEG): container finished" podID="658805c9-72b6-4313-b0d6-0aff821ff88d" containerID="b46291618cdcb0abdd8065c13d818708b0273aaba72c818dfadd7baf39ddbb17" exitCode=143 Nov 29 07:28:49 crc kubenswrapper[4731]: I1129 07:28:49.658954 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"658805c9-72b6-4313-b0d6-0aff821ff88d","Type":"ContainerDied","Data":"b46291618cdcb0abdd8065c13d818708b0273aaba72c818dfadd7baf39ddbb17"} Nov 29 07:28:49 crc kubenswrapper[4731]: I1129 07:28:49.673766 4731 generic.go:334] "Generic (PLEG): container finished" podID="537f4a14-bb1f-49e8-b260-95304eabd96a" containerID="5a9e1503fb570a55169a59974353d61e688b2ab2b8b4f2f379846367d5ac3e8a" exitCode=0 Nov 29 07:28:49 crc kubenswrapper[4731]: I1129 07:28:49.673824 4731 generic.go:334] "Generic (PLEG): container finished" podID="537f4a14-bb1f-49e8-b260-95304eabd96a" containerID="0cabeeba6346e927edd1afd3b022720d665b038c2a4435ece50f8c94a046d2e0" exitCode=143 Nov 29 07:28:49 crc kubenswrapper[4731]: I1129 07:28:49.673860 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"537f4a14-bb1f-49e8-b260-95304eabd96a","Type":"ContainerDied","Data":"5a9e1503fb570a55169a59974353d61e688b2ab2b8b4f2f379846367d5ac3e8a"} Nov 29 07:28:49 crc kubenswrapper[4731]: I1129 07:28:49.673952 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"537f4a14-bb1f-49e8-b260-95304eabd96a","Type":"ContainerDied","Data":"0cabeeba6346e927edd1afd3b022720d665b038c2a4435ece50f8c94a046d2e0"} Nov 29 07:28:49 crc kubenswrapper[4731]: I1129 
07:28:49.945205 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 29 07:28:49 crc kubenswrapper[4731]: I1129 07:28:49.962444 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/537f4a14-bb1f-49e8-b260-95304eabd96a-logs\") pod \"537f4a14-bb1f-49e8-b260-95304eabd96a\" (UID: \"537f4a14-bb1f-49e8-b260-95304eabd96a\") " Nov 29 07:28:49 crc kubenswrapper[4731]: I1129 07:28:49.962596 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/537f4a14-bb1f-49e8-b260-95304eabd96a-config-data\") pod \"537f4a14-bb1f-49e8-b260-95304eabd96a\" (UID: \"537f4a14-bb1f-49e8-b260-95304eabd96a\") " Nov 29 07:28:49 crc kubenswrapper[4731]: I1129 07:28:49.962627 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kphtm\" (UniqueName: \"kubernetes.io/projected/537f4a14-bb1f-49e8-b260-95304eabd96a-kube-api-access-kphtm\") pod \"537f4a14-bb1f-49e8-b260-95304eabd96a\" (UID: \"537f4a14-bb1f-49e8-b260-95304eabd96a\") " Nov 29 07:28:49 crc kubenswrapper[4731]: I1129 07:28:49.962862 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/537f4a14-bb1f-49e8-b260-95304eabd96a-nova-metadata-tls-certs\") pod \"537f4a14-bb1f-49e8-b260-95304eabd96a\" (UID: \"537f4a14-bb1f-49e8-b260-95304eabd96a\") " Nov 29 07:28:49 crc kubenswrapper[4731]: I1129 07:28:49.962896 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/537f4a14-bb1f-49e8-b260-95304eabd96a-combined-ca-bundle\") pod \"537f4a14-bb1f-49e8-b260-95304eabd96a\" (UID: \"537f4a14-bb1f-49e8-b260-95304eabd96a\") " Nov 29 07:28:49 crc kubenswrapper[4731]: I1129 07:28:49.962896 4731 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/537f4a14-bb1f-49e8-b260-95304eabd96a-logs" (OuterVolumeSpecName: "logs") pod "537f4a14-bb1f-49e8-b260-95304eabd96a" (UID: "537f4a14-bb1f-49e8-b260-95304eabd96a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:28:49 crc kubenswrapper[4731]: I1129 07:28:49.963467 4731 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/537f4a14-bb1f-49e8-b260-95304eabd96a-logs\") on node \"crc\" DevicePath \"\"" Nov 29 07:28:49 crc kubenswrapper[4731]: I1129 07:28:49.991285 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/537f4a14-bb1f-49e8-b260-95304eabd96a-kube-api-access-kphtm" (OuterVolumeSpecName: "kube-api-access-kphtm") pod "537f4a14-bb1f-49e8-b260-95304eabd96a" (UID: "537f4a14-bb1f-49e8-b260-95304eabd96a"). InnerVolumeSpecName "kube-api-access-kphtm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:28:50 crc kubenswrapper[4731]: I1129 07:28:50.008743 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/537f4a14-bb1f-49e8-b260-95304eabd96a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "537f4a14-bb1f-49e8-b260-95304eabd96a" (UID: "537f4a14-bb1f-49e8-b260-95304eabd96a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:28:50 crc kubenswrapper[4731]: I1129 07:28:50.023211 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/537f4a14-bb1f-49e8-b260-95304eabd96a-config-data" (OuterVolumeSpecName: "config-data") pod "537f4a14-bb1f-49e8-b260-95304eabd96a" (UID: "537f4a14-bb1f-49e8-b260-95304eabd96a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:28:50 crc kubenswrapper[4731]: I1129 07:28:50.069354 4731 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/537f4a14-bb1f-49e8-b260-95304eabd96a-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:28:50 crc kubenswrapper[4731]: I1129 07:28:50.069391 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kphtm\" (UniqueName: \"kubernetes.io/projected/537f4a14-bb1f-49e8-b260-95304eabd96a-kube-api-access-kphtm\") on node \"crc\" DevicePath \"\"" Nov 29 07:28:50 crc kubenswrapper[4731]: I1129 07:28:50.069403 4731 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/537f4a14-bb1f-49e8-b260-95304eabd96a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:28:50 crc kubenswrapper[4731]: I1129 07:28:50.077891 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/537f4a14-bb1f-49e8-b260-95304eabd96a-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "537f4a14-bb1f-49e8-b260-95304eabd96a" (UID: "537f4a14-bb1f-49e8-b260-95304eabd96a"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:28:50 crc kubenswrapper[4731]: I1129 07:28:50.172203 4731 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/537f4a14-bb1f-49e8-b260-95304eabd96a-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 29 07:28:50 crc kubenswrapper[4731]: I1129 07:28:50.704325 4731 generic.go:334] "Generic (PLEG): container finished" podID="06725712-c188-4d76-809b-f3ef6e1ba32f" containerID="bef65603843ce1608eca89ef0e01614468f8947009e8acc57409db60c4b0ee29" exitCode=0 Nov 29 07:28:50 crc kubenswrapper[4731]: I1129 07:28:50.704483 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"06725712-c188-4d76-809b-f3ef6e1ba32f","Type":"ContainerDied","Data":"bef65603843ce1608eca89ef0e01614468f8947009e8acc57409db60c4b0ee29"} Nov 29 07:28:50 crc kubenswrapper[4731]: I1129 07:28:50.704524 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"06725712-c188-4d76-809b-f3ef6e1ba32f","Type":"ContainerDied","Data":"f9fb02e483b3e23ef79b7fdd2af332f7441ea4a79b3d3ebdcb3d8c1847d0148f"} Nov 29 07:28:50 crc kubenswrapper[4731]: I1129 07:28:50.704538 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f9fb02e483b3e23ef79b7fdd2af332f7441ea4a79b3d3ebdcb3d8c1847d0148f" Nov 29 07:28:50 crc kubenswrapper[4731]: I1129 07:28:50.713437 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"537f4a14-bb1f-49e8-b260-95304eabd96a","Type":"ContainerDied","Data":"2975398182e316d455557de7b99e193e3cbaf074f88464da466f8d7787764ec0"} Nov 29 07:28:50 crc kubenswrapper[4731]: I1129 07:28:50.713504 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 29 07:28:50 crc kubenswrapper[4731]: I1129 07:28:50.713513 4731 scope.go:117] "RemoveContainer" containerID="5a9e1503fb570a55169a59974353d61e688b2ab2b8b4f2f379846367d5ac3e8a" Nov 29 07:28:50 crc kubenswrapper[4731]: I1129 07:28:50.792335 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 29 07:28:50 crc kubenswrapper[4731]: I1129 07:28:50.793072 4731 scope.go:117] "RemoveContainer" containerID="0cabeeba6346e927edd1afd3b022720d665b038c2a4435ece50f8c94a046d2e0" Nov 29 07:28:50 crc kubenswrapper[4731]: I1129 07:28:50.815958 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 29 07:28:50 crc kubenswrapper[4731]: I1129 07:28:50.828130 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 29 07:28:50 crc kubenswrapper[4731]: I1129 07:28:50.840643 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 29 07:28:50 crc kubenswrapper[4731]: E1129 07:28:50.841250 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9eee44fb-eee4-4aa9-9a6f-680039d29c74" containerName="dnsmasq-dns" Nov 29 07:28:50 crc kubenswrapper[4731]: I1129 07:28:50.841272 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="9eee44fb-eee4-4aa9-9a6f-680039d29c74" containerName="dnsmasq-dns" Nov 29 07:28:50 crc kubenswrapper[4731]: E1129 07:28:50.841287 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abd5f3ab-575e-44b6-aa39-c3b5c44d85b8" containerName="nova-manage" Nov 29 07:28:50 crc kubenswrapper[4731]: I1129 07:28:50.841295 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="abd5f3ab-575e-44b6-aa39-c3b5c44d85b8" containerName="nova-manage" Nov 29 07:28:50 crc kubenswrapper[4731]: E1129 07:28:50.841307 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06725712-c188-4d76-809b-f3ef6e1ba32f" 
containerName="nova-scheduler-scheduler" Nov 29 07:28:50 crc kubenswrapper[4731]: I1129 07:28:50.841314 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="06725712-c188-4d76-809b-f3ef6e1ba32f" containerName="nova-scheduler-scheduler" Nov 29 07:28:50 crc kubenswrapper[4731]: E1129 07:28:50.841332 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9eee44fb-eee4-4aa9-9a6f-680039d29c74" containerName="init" Nov 29 07:28:50 crc kubenswrapper[4731]: I1129 07:28:50.841338 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="9eee44fb-eee4-4aa9-9a6f-680039d29c74" containerName="init" Nov 29 07:28:50 crc kubenswrapper[4731]: E1129 07:28:50.841348 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="537f4a14-bb1f-49e8-b260-95304eabd96a" containerName="nova-metadata-metadata" Nov 29 07:28:50 crc kubenswrapper[4731]: I1129 07:28:50.841355 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="537f4a14-bb1f-49e8-b260-95304eabd96a" containerName="nova-metadata-metadata" Nov 29 07:28:50 crc kubenswrapper[4731]: E1129 07:28:50.841374 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="537f4a14-bb1f-49e8-b260-95304eabd96a" containerName="nova-metadata-log" Nov 29 07:28:50 crc kubenswrapper[4731]: I1129 07:28:50.841381 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="537f4a14-bb1f-49e8-b260-95304eabd96a" containerName="nova-metadata-log" Nov 29 07:28:50 crc kubenswrapper[4731]: I1129 07:28:50.841612 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="9eee44fb-eee4-4aa9-9a6f-680039d29c74" containerName="dnsmasq-dns" Nov 29 07:28:50 crc kubenswrapper[4731]: I1129 07:28:50.841646 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="06725712-c188-4d76-809b-f3ef6e1ba32f" containerName="nova-scheduler-scheduler" Nov 29 07:28:50 crc kubenswrapper[4731]: I1129 07:28:50.841668 4731 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="abd5f3ab-575e-44b6-aa39-c3b5c44d85b8" containerName="nova-manage" Nov 29 07:28:50 crc kubenswrapper[4731]: I1129 07:28:50.841680 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="537f4a14-bb1f-49e8-b260-95304eabd96a" containerName="nova-metadata-log" Nov 29 07:28:50 crc kubenswrapper[4731]: I1129 07:28:50.841691 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="537f4a14-bb1f-49e8-b260-95304eabd96a" containerName="nova-metadata-metadata" Nov 29 07:28:50 crc kubenswrapper[4731]: I1129 07:28:50.842982 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 29 07:28:50 crc kubenswrapper[4731]: I1129 07:28:50.846526 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 29 07:28:50 crc kubenswrapper[4731]: I1129 07:28:50.846865 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 29 07:28:50 crc kubenswrapper[4731]: I1129 07:28:50.866181 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 29 07:28:50 crc kubenswrapper[4731]: I1129 07:28:50.991100 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5jnmt\" (UniqueName: \"kubernetes.io/projected/06725712-c188-4d76-809b-f3ef6e1ba32f-kube-api-access-5jnmt\") pod \"06725712-c188-4d76-809b-f3ef6e1ba32f\" (UID: \"06725712-c188-4d76-809b-f3ef6e1ba32f\") " Nov 29 07:28:50 crc kubenswrapper[4731]: I1129 07:28:50.992215 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06725712-c188-4d76-809b-f3ef6e1ba32f-config-data\") pod \"06725712-c188-4d76-809b-f3ef6e1ba32f\" (UID: \"06725712-c188-4d76-809b-f3ef6e1ba32f\") " Nov 29 07:28:50 crc kubenswrapper[4731]: I1129 07:28:50.992479 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06725712-c188-4d76-809b-f3ef6e1ba32f-combined-ca-bundle\") pod \"06725712-c188-4d76-809b-f3ef6e1ba32f\" (UID: \"06725712-c188-4d76-809b-f3ef6e1ba32f\") " Nov 29 07:28:50 crc kubenswrapper[4731]: I1129 07:28:50.992991 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae93ab78-49fb-45cc-b10e-901326d1b1aa-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ae93ab78-49fb-45cc-b10e-901326d1b1aa\") " pod="openstack/nova-metadata-0" Nov 29 07:28:50 crc kubenswrapper[4731]: I1129 07:28:50.993139 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae93ab78-49fb-45cc-b10e-901326d1b1aa-logs\") pod \"nova-metadata-0\" (UID: \"ae93ab78-49fb-45cc-b10e-901326d1b1aa\") " pod="openstack/nova-metadata-0" Nov 29 07:28:50 crc kubenswrapper[4731]: I1129 07:28:50.993946 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae93ab78-49fb-45cc-b10e-901326d1b1aa-config-data\") pod \"nova-metadata-0\" (UID: \"ae93ab78-49fb-45cc-b10e-901326d1b1aa\") " pod="openstack/nova-metadata-0" Nov 29 07:28:50 crc kubenswrapper[4731]: I1129 07:28:50.994120 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fj77s\" (UniqueName: \"kubernetes.io/projected/ae93ab78-49fb-45cc-b10e-901326d1b1aa-kube-api-access-fj77s\") pod \"nova-metadata-0\" (UID: \"ae93ab78-49fb-45cc-b10e-901326d1b1aa\") " pod="openstack/nova-metadata-0" Nov 29 07:28:50 crc kubenswrapper[4731]: I1129 07:28:50.994316 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ae93ab78-49fb-45cc-b10e-901326d1b1aa-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ae93ab78-49fb-45cc-b10e-901326d1b1aa\") " pod="openstack/nova-metadata-0" Nov 29 07:28:50 crc kubenswrapper[4731]: I1129 07:28:50.996245 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06725712-c188-4d76-809b-f3ef6e1ba32f-kube-api-access-5jnmt" (OuterVolumeSpecName: "kube-api-access-5jnmt") pod "06725712-c188-4d76-809b-f3ef6e1ba32f" (UID: "06725712-c188-4d76-809b-f3ef6e1ba32f"). InnerVolumeSpecName "kube-api-access-5jnmt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:28:51 crc kubenswrapper[4731]: I1129 07:28:51.026826 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06725712-c188-4d76-809b-f3ef6e1ba32f-config-data" (OuterVolumeSpecName: "config-data") pod "06725712-c188-4d76-809b-f3ef6e1ba32f" (UID: "06725712-c188-4d76-809b-f3ef6e1ba32f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:28:51 crc kubenswrapper[4731]: I1129 07:28:51.030687 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06725712-c188-4d76-809b-f3ef6e1ba32f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "06725712-c188-4d76-809b-f3ef6e1ba32f" (UID: "06725712-c188-4d76-809b-f3ef6e1ba32f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:28:51 crc kubenswrapper[4731]: I1129 07:28:51.096134 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fj77s\" (UniqueName: \"kubernetes.io/projected/ae93ab78-49fb-45cc-b10e-901326d1b1aa-kube-api-access-fj77s\") pod \"nova-metadata-0\" (UID: \"ae93ab78-49fb-45cc-b10e-901326d1b1aa\") " pod="openstack/nova-metadata-0" Nov 29 07:28:51 crc kubenswrapper[4731]: I1129 07:28:51.096235 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae93ab78-49fb-45cc-b10e-901326d1b1aa-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ae93ab78-49fb-45cc-b10e-901326d1b1aa\") " pod="openstack/nova-metadata-0" Nov 29 07:28:51 crc kubenswrapper[4731]: I1129 07:28:51.096320 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae93ab78-49fb-45cc-b10e-901326d1b1aa-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ae93ab78-49fb-45cc-b10e-901326d1b1aa\") " pod="openstack/nova-metadata-0" Nov 29 07:28:51 crc kubenswrapper[4731]: I1129 07:28:51.096380 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae93ab78-49fb-45cc-b10e-901326d1b1aa-logs\") pod \"nova-metadata-0\" (UID: \"ae93ab78-49fb-45cc-b10e-901326d1b1aa\") " pod="openstack/nova-metadata-0" Nov 29 07:28:51 crc kubenswrapper[4731]: I1129 07:28:51.096413 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae93ab78-49fb-45cc-b10e-901326d1b1aa-config-data\") pod \"nova-metadata-0\" (UID: \"ae93ab78-49fb-45cc-b10e-901326d1b1aa\") " pod="openstack/nova-metadata-0" Nov 29 07:28:51 crc kubenswrapper[4731]: I1129 07:28:51.096491 4731 reconciler_common.go:293] "Volume detached for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/06725712-c188-4d76-809b-f3ef6e1ba32f-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:28:51 crc kubenswrapper[4731]: I1129 07:28:51.096509 4731 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06725712-c188-4d76-809b-f3ef6e1ba32f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:28:51 crc kubenswrapper[4731]: I1129 07:28:51.096523 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5jnmt\" (UniqueName: \"kubernetes.io/projected/06725712-c188-4d76-809b-f3ef6e1ba32f-kube-api-access-5jnmt\") on node \"crc\" DevicePath \"\"" Nov 29 07:28:51 crc kubenswrapper[4731]: I1129 07:28:51.097943 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae93ab78-49fb-45cc-b10e-901326d1b1aa-logs\") pod \"nova-metadata-0\" (UID: \"ae93ab78-49fb-45cc-b10e-901326d1b1aa\") " pod="openstack/nova-metadata-0" Nov 29 07:28:51 crc kubenswrapper[4731]: I1129 07:28:51.102123 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae93ab78-49fb-45cc-b10e-901326d1b1aa-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ae93ab78-49fb-45cc-b10e-901326d1b1aa\") " pod="openstack/nova-metadata-0" Nov 29 07:28:51 crc kubenswrapper[4731]: I1129 07:28:51.102704 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae93ab78-49fb-45cc-b10e-901326d1b1aa-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ae93ab78-49fb-45cc-b10e-901326d1b1aa\") " pod="openstack/nova-metadata-0" Nov 29 07:28:51 crc kubenswrapper[4731]: I1129 07:28:51.103413 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae93ab78-49fb-45cc-b10e-901326d1b1aa-config-data\") 
pod \"nova-metadata-0\" (UID: \"ae93ab78-49fb-45cc-b10e-901326d1b1aa\") " pod="openstack/nova-metadata-0" Nov 29 07:28:51 crc kubenswrapper[4731]: I1129 07:28:51.119712 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fj77s\" (UniqueName: \"kubernetes.io/projected/ae93ab78-49fb-45cc-b10e-901326d1b1aa-kube-api-access-fj77s\") pod \"nova-metadata-0\" (UID: \"ae93ab78-49fb-45cc-b10e-901326d1b1aa\") " pod="openstack/nova-metadata-0" Nov 29 07:28:51 crc kubenswrapper[4731]: I1129 07:28:51.189343 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 29 07:28:51 crc kubenswrapper[4731]: I1129 07:28:51.687068 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 29 07:28:51 crc kubenswrapper[4731]: W1129 07:28:51.690819 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podae93ab78_49fb_45cc_b10e_901326d1b1aa.slice/crio-a3fdea9a9fb3f7d5baeb65e7f90375d078ee0d0d626feb2a8a62624db1b70b00 WatchSource:0}: Error finding container a3fdea9a9fb3f7d5baeb65e7f90375d078ee0d0d626feb2a8a62624db1b70b00: Status 404 returned error can't find the container with id a3fdea9a9fb3f7d5baeb65e7f90375d078ee0d0d626feb2a8a62624db1b70b00 Nov 29 07:28:51 crc kubenswrapper[4731]: I1129 07:28:51.727095 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ae93ab78-49fb-45cc-b10e-901326d1b1aa","Type":"ContainerStarted","Data":"a3fdea9a9fb3f7d5baeb65e7f90375d078ee0d0d626feb2a8a62624db1b70b00"} Nov 29 07:28:51 crc kubenswrapper[4731]: I1129 07:28:51.733428 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 29 07:28:51 crc kubenswrapper[4731]: I1129 07:28:51.790804 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 29 07:28:51 crc kubenswrapper[4731]: I1129 07:28:51.821499 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="537f4a14-bb1f-49e8-b260-95304eabd96a" path="/var/lib/kubelet/pods/537f4a14-bb1f-49e8-b260-95304eabd96a/volumes" Nov 29 07:28:51 crc kubenswrapper[4731]: I1129 07:28:51.822394 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 29 07:28:51 crc kubenswrapper[4731]: I1129 07:28:51.825007 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 29 07:28:51 crc kubenswrapper[4731]: I1129 07:28:51.826531 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 29 07:28:51 crc kubenswrapper[4731]: I1129 07:28:51.834337 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 29 07:28:51 crc kubenswrapper[4731]: I1129 07:28:51.869388 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 29 07:28:52 crc kubenswrapper[4731]: I1129 07:28:52.030538 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7653d906-63f2-4fce-85ad-84a98160485f-config-data\") pod \"nova-scheduler-0\" (UID: \"7653d906-63f2-4fce-85ad-84a98160485f\") " pod="openstack/nova-scheduler-0" Nov 29 07:28:52 crc kubenswrapper[4731]: I1129 07:28:52.030756 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7653d906-63f2-4fce-85ad-84a98160485f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7653d906-63f2-4fce-85ad-84a98160485f\") " 
pod="openstack/nova-scheduler-0" Nov 29 07:28:52 crc kubenswrapper[4731]: I1129 07:28:52.030825 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmwh9\" (UniqueName: \"kubernetes.io/projected/7653d906-63f2-4fce-85ad-84a98160485f-kube-api-access-jmwh9\") pod \"nova-scheduler-0\" (UID: \"7653d906-63f2-4fce-85ad-84a98160485f\") " pod="openstack/nova-scheduler-0" Nov 29 07:28:52 crc kubenswrapper[4731]: I1129 07:28:52.133339 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmwh9\" (UniqueName: \"kubernetes.io/projected/7653d906-63f2-4fce-85ad-84a98160485f-kube-api-access-jmwh9\") pod \"nova-scheduler-0\" (UID: \"7653d906-63f2-4fce-85ad-84a98160485f\") " pod="openstack/nova-scheduler-0" Nov 29 07:28:52 crc kubenswrapper[4731]: I1129 07:28:52.133796 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7653d906-63f2-4fce-85ad-84a98160485f-config-data\") pod \"nova-scheduler-0\" (UID: \"7653d906-63f2-4fce-85ad-84a98160485f\") " pod="openstack/nova-scheduler-0" Nov 29 07:28:52 crc kubenswrapper[4731]: I1129 07:28:52.134006 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7653d906-63f2-4fce-85ad-84a98160485f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7653d906-63f2-4fce-85ad-84a98160485f\") " pod="openstack/nova-scheduler-0" Nov 29 07:28:52 crc kubenswrapper[4731]: I1129 07:28:52.140674 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7653d906-63f2-4fce-85ad-84a98160485f-config-data\") pod \"nova-scheduler-0\" (UID: \"7653d906-63f2-4fce-85ad-84a98160485f\") " pod="openstack/nova-scheduler-0" Nov 29 07:28:52 crc kubenswrapper[4731]: I1129 07:28:52.140674 4731 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7653d906-63f2-4fce-85ad-84a98160485f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7653d906-63f2-4fce-85ad-84a98160485f\") " pod="openstack/nova-scheduler-0" Nov 29 07:28:52 crc kubenswrapper[4731]: I1129 07:28:52.153694 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmwh9\" (UniqueName: \"kubernetes.io/projected/7653d906-63f2-4fce-85ad-84a98160485f-kube-api-access-jmwh9\") pod \"nova-scheduler-0\" (UID: \"7653d906-63f2-4fce-85ad-84a98160485f\") " pod="openstack/nova-scheduler-0" Nov 29 07:28:52 crc kubenswrapper[4731]: I1129 07:28:52.160873 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 29 07:28:52 crc kubenswrapper[4731]: I1129 07:28:52.711348 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 29 07:28:52 crc kubenswrapper[4731]: W1129 07:28:52.713781 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7653d906_63f2_4fce_85ad_84a98160485f.slice/crio-00f607b04e404e9bca7ef1cab503c7438955141f1513885662c5d064498449de WatchSource:0}: Error finding container 00f607b04e404e9bca7ef1cab503c7438955141f1513885662c5d064498449de: Status 404 returned error can't find the container with id 00f607b04e404e9bca7ef1cab503c7438955141f1513885662c5d064498449de Nov 29 07:28:52 crc kubenswrapper[4731]: I1129 07:28:52.751304 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ae93ab78-49fb-45cc-b10e-901326d1b1aa","Type":"ContainerStarted","Data":"b3aab4aba0dfb928ea4971b59a6ee1855ee80620ce5d1ae8a6d4c73ee04e659f"} Nov 29 07:28:52 crc kubenswrapper[4731]: I1129 07:28:52.752895 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"7653d906-63f2-4fce-85ad-84a98160485f","Type":"ContainerStarted","Data":"00f607b04e404e9bca7ef1cab503c7438955141f1513885662c5d064498449de"} Nov 29 07:28:53 crc kubenswrapper[4731]: I1129 07:28:53.774418 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ae93ab78-49fb-45cc-b10e-901326d1b1aa","Type":"ContainerStarted","Data":"1b2097e4711cb6116fac0eb41fe9d23052f08d826052a70be38e3f8bf42ccfc0"} Nov 29 07:28:53 crc kubenswrapper[4731]: I1129 07:28:53.820339 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06725712-c188-4d76-809b-f3ef6e1ba32f" path="/var/lib/kubelet/pods/06725712-c188-4d76-809b-f3ef6e1ba32f/volumes" Nov 29 07:28:54 crc kubenswrapper[4731]: I1129 07:28:54.798829 4731 generic.go:334] "Generic (PLEG): container finished" podID="658805c9-72b6-4313-b0d6-0aff821ff88d" containerID="5c264731f5cf46f13962a57f07cc031f116eb12e030e8df85ba33ca3a778cd28" exitCode=0 Nov 29 07:28:54 crc kubenswrapper[4731]: I1129 07:28:54.799273 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"658805c9-72b6-4313-b0d6-0aff821ff88d","Type":"ContainerDied","Data":"5c264731f5cf46f13962a57f07cc031f116eb12e030e8df85ba33ca3a778cd28"} Nov 29 07:28:54 crc kubenswrapper[4731]: I1129 07:28:54.804446 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7653d906-63f2-4fce-85ad-84a98160485f","Type":"ContainerStarted","Data":"74262672446b74fb0676151e3db6bee7df4c64535bef56848023ff8e7b057711"} Nov 29 07:28:54 crc kubenswrapper[4731]: I1129 07:28:54.831649 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=4.831617862 podStartE2EDuration="4.831617862s" podCreationTimestamp="2025-11-29 07:28:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:28:54.824419496 
+0000 UTC m=+1373.714780599" watchObservedRunningTime="2025-11-29 07:28:54.831617862 +0000 UTC m=+1373.721978965" Nov 29 07:28:54 crc kubenswrapper[4731]: I1129 07:28:54.856467 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.856438881 podStartE2EDuration="3.856438881s" podCreationTimestamp="2025-11-29 07:28:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:28:54.846317382 +0000 UTC m=+1373.736678485" watchObservedRunningTime="2025-11-29 07:28:54.856438881 +0000 UTC m=+1373.746799984" Nov 29 07:28:55 crc kubenswrapper[4731]: I1129 07:28:55.007043 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 29 07:28:55 crc kubenswrapper[4731]: I1129 07:28:55.107344 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmb7v\" (UniqueName: \"kubernetes.io/projected/658805c9-72b6-4313-b0d6-0aff821ff88d-kube-api-access-kmb7v\") pod \"658805c9-72b6-4313-b0d6-0aff821ff88d\" (UID: \"658805c9-72b6-4313-b0d6-0aff821ff88d\") " Nov 29 07:28:55 crc kubenswrapper[4731]: I1129 07:28:55.107423 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/658805c9-72b6-4313-b0d6-0aff821ff88d-logs\") pod \"658805c9-72b6-4313-b0d6-0aff821ff88d\" (UID: \"658805c9-72b6-4313-b0d6-0aff821ff88d\") " Nov 29 07:28:55 crc kubenswrapper[4731]: I1129 07:28:55.107468 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/658805c9-72b6-4313-b0d6-0aff821ff88d-config-data\") pod \"658805c9-72b6-4313-b0d6-0aff821ff88d\" (UID: \"658805c9-72b6-4313-b0d6-0aff821ff88d\") " Nov 29 07:28:55 crc kubenswrapper[4731]: I1129 07:28:55.107846 4731 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/658805c9-72b6-4313-b0d6-0aff821ff88d-combined-ca-bundle\") pod \"658805c9-72b6-4313-b0d6-0aff821ff88d\" (UID: \"658805c9-72b6-4313-b0d6-0aff821ff88d\") " Nov 29 07:28:55 crc kubenswrapper[4731]: I1129 07:28:55.108068 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/658805c9-72b6-4313-b0d6-0aff821ff88d-logs" (OuterVolumeSpecName: "logs") pod "658805c9-72b6-4313-b0d6-0aff821ff88d" (UID: "658805c9-72b6-4313-b0d6-0aff821ff88d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:28:55 crc kubenswrapper[4731]: I1129 07:28:55.108779 4731 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/658805c9-72b6-4313-b0d6-0aff821ff88d-logs\") on node \"crc\" DevicePath \"\"" Nov 29 07:28:55 crc kubenswrapper[4731]: I1129 07:28:55.117062 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/658805c9-72b6-4313-b0d6-0aff821ff88d-kube-api-access-kmb7v" (OuterVolumeSpecName: "kube-api-access-kmb7v") pod "658805c9-72b6-4313-b0d6-0aff821ff88d" (UID: "658805c9-72b6-4313-b0d6-0aff821ff88d"). InnerVolumeSpecName "kube-api-access-kmb7v". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:28:55 crc kubenswrapper[4731]: I1129 07:28:55.139539 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/658805c9-72b6-4313-b0d6-0aff821ff88d-config-data" (OuterVolumeSpecName: "config-data") pod "658805c9-72b6-4313-b0d6-0aff821ff88d" (UID: "658805c9-72b6-4313-b0d6-0aff821ff88d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:28:55 crc kubenswrapper[4731]: I1129 07:28:55.144466 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/658805c9-72b6-4313-b0d6-0aff821ff88d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "658805c9-72b6-4313-b0d6-0aff821ff88d" (UID: "658805c9-72b6-4313-b0d6-0aff821ff88d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:28:55 crc kubenswrapper[4731]: I1129 07:28:55.210909 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kmb7v\" (UniqueName: \"kubernetes.io/projected/658805c9-72b6-4313-b0d6-0aff821ff88d-kube-api-access-kmb7v\") on node \"crc\" DevicePath \"\"" Nov 29 07:28:55 crc kubenswrapper[4731]: I1129 07:28:55.210964 4731 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/658805c9-72b6-4313-b0d6-0aff821ff88d-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:28:55 crc kubenswrapper[4731]: I1129 07:28:55.210979 4731 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/658805c9-72b6-4313-b0d6-0aff821ff88d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:28:55 crc kubenswrapper[4731]: I1129 07:28:55.815172 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 29 07:28:55 crc kubenswrapper[4731]: I1129 07:28:55.837007 4731 generic.go:334] "Generic (PLEG): container finished" podID="737e38bd-78bb-41ef-acce-f65a427d5bd3" containerID="ffb1b05858464ce19de4cf58d7c628a91cf5c4e1cf012ad715006a4a03dd8fde" exitCode=0 Nov 29 07:28:55 crc kubenswrapper[4731]: I1129 07:28:55.855433 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"658805c9-72b6-4313-b0d6-0aff821ff88d","Type":"ContainerDied","Data":"a189d762a0ee86ca7b661b8bf695c66ed21fc168bdc882379bdfd9d51767e0f7"} Nov 29 07:28:55 crc kubenswrapper[4731]: I1129 07:28:55.855848 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-f6jkj" event={"ID":"737e38bd-78bb-41ef-acce-f65a427d5bd3","Type":"ContainerDied","Data":"ffb1b05858464ce19de4cf58d7c628a91cf5c4e1cf012ad715006a4a03dd8fde"} Nov 29 07:28:55 crc kubenswrapper[4731]: I1129 07:28:55.857117 4731 scope.go:117] "RemoveContainer" containerID="5c264731f5cf46f13962a57f07cc031f116eb12e030e8df85ba33ca3a778cd28" Nov 29 07:28:55 crc kubenswrapper[4731]: I1129 07:28:55.902977 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 29 07:28:55 crc kubenswrapper[4731]: I1129 07:28:55.917727 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 29 07:28:55 crc kubenswrapper[4731]: I1129 07:28:55.918758 4731 scope.go:117] "RemoveContainer" containerID="b46291618cdcb0abdd8065c13d818708b0273aaba72c818dfadd7baf39ddbb17" Nov 29 07:28:55 crc kubenswrapper[4731]: I1129 07:28:55.932002 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 29 07:28:55 crc kubenswrapper[4731]: E1129 07:28:55.932517 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="658805c9-72b6-4313-b0d6-0aff821ff88d" containerName="nova-api-log" Nov 29 07:28:55 crc kubenswrapper[4731]: I1129 07:28:55.932546 4731 
state_mem.go:107] "Deleted CPUSet assignment" podUID="658805c9-72b6-4313-b0d6-0aff821ff88d" containerName="nova-api-log" Nov 29 07:28:55 crc kubenswrapper[4731]: E1129 07:28:55.932660 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="658805c9-72b6-4313-b0d6-0aff821ff88d" containerName="nova-api-api" Nov 29 07:28:55 crc kubenswrapper[4731]: I1129 07:28:55.932672 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="658805c9-72b6-4313-b0d6-0aff821ff88d" containerName="nova-api-api" Nov 29 07:28:55 crc kubenswrapper[4731]: I1129 07:28:55.933031 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="658805c9-72b6-4313-b0d6-0aff821ff88d" containerName="nova-api-api" Nov 29 07:28:55 crc kubenswrapper[4731]: I1129 07:28:55.933055 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="658805c9-72b6-4313-b0d6-0aff821ff88d" containerName="nova-api-log" Nov 29 07:28:55 crc kubenswrapper[4731]: I1129 07:28:55.935503 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 29 07:28:55 crc kubenswrapper[4731]: I1129 07:28:55.939135 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 29 07:28:55 crc kubenswrapper[4731]: I1129 07:28:55.962100 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 29 07:28:56 crc kubenswrapper[4731]: I1129 07:28:56.032410 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e345272-7ce7-4a49-bac2-e85d0f9025cb-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8e345272-7ce7-4a49-bac2-e85d0f9025cb\") " pod="openstack/nova-api-0" Nov 29 07:28:56 crc kubenswrapper[4731]: I1129 07:28:56.032498 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xntdz\" (UniqueName: \"kubernetes.io/projected/8e345272-7ce7-4a49-bac2-e85d0f9025cb-kube-api-access-xntdz\") pod \"nova-api-0\" (UID: \"8e345272-7ce7-4a49-bac2-e85d0f9025cb\") " pod="openstack/nova-api-0" Nov 29 07:28:56 crc kubenswrapper[4731]: I1129 07:28:56.032744 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e345272-7ce7-4a49-bac2-e85d0f9025cb-config-data\") pod \"nova-api-0\" (UID: \"8e345272-7ce7-4a49-bac2-e85d0f9025cb\") " pod="openstack/nova-api-0" Nov 29 07:28:56 crc kubenswrapper[4731]: I1129 07:28:56.032789 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e345272-7ce7-4a49-bac2-e85d0f9025cb-logs\") pod \"nova-api-0\" (UID: \"8e345272-7ce7-4a49-bac2-e85d0f9025cb\") " pod="openstack/nova-api-0" Nov 29 07:28:56 crc kubenswrapper[4731]: I1129 07:28:56.134475 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/8e345272-7ce7-4a49-bac2-e85d0f9025cb-config-data\") pod \"nova-api-0\" (UID: \"8e345272-7ce7-4a49-bac2-e85d0f9025cb\") " pod="openstack/nova-api-0" Nov 29 07:28:56 crc kubenswrapper[4731]: I1129 07:28:56.134581 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e345272-7ce7-4a49-bac2-e85d0f9025cb-logs\") pod \"nova-api-0\" (UID: \"8e345272-7ce7-4a49-bac2-e85d0f9025cb\") " pod="openstack/nova-api-0" Nov 29 07:28:56 crc kubenswrapper[4731]: I1129 07:28:56.134647 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e345272-7ce7-4a49-bac2-e85d0f9025cb-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8e345272-7ce7-4a49-bac2-e85d0f9025cb\") " pod="openstack/nova-api-0" Nov 29 07:28:56 crc kubenswrapper[4731]: I1129 07:28:56.134678 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xntdz\" (UniqueName: \"kubernetes.io/projected/8e345272-7ce7-4a49-bac2-e85d0f9025cb-kube-api-access-xntdz\") pod \"nova-api-0\" (UID: \"8e345272-7ce7-4a49-bac2-e85d0f9025cb\") " pod="openstack/nova-api-0" Nov 29 07:28:56 crc kubenswrapper[4731]: I1129 07:28:56.135758 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e345272-7ce7-4a49-bac2-e85d0f9025cb-logs\") pod \"nova-api-0\" (UID: \"8e345272-7ce7-4a49-bac2-e85d0f9025cb\") " pod="openstack/nova-api-0" Nov 29 07:28:56 crc kubenswrapper[4731]: I1129 07:28:56.143951 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e345272-7ce7-4a49-bac2-e85d0f9025cb-config-data\") pod \"nova-api-0\" (UID: \"8e345272-7ce7-4a49-bac2-e85d0f9025cb\") " pod="openstack/nova-api-0" Nov 29 07:28:56 crc kubenswrapper[4731]: I1129 07:28:56.148027 4731 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e345272-7ce7-4a49-bac2-e85d0f9025cb-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8e345272-7ce7-4a49-bac2-e85d0f9025cb\") " pod="openstack/nova-api-0" Nov 29 07:28:56 crc kubenswrapper[4731]: I1129 07:28:56.158489 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xntdz\" (UniqueName: \"kubernetes.io/projected/8e345272-7ce7-4a49-bac2-e85d0f9025cb-kube-api-access-xntdz\") pod \"nova-api-0\" (UID: \"8e345272-7ce7-4a49-bac2-e85d0f9025cb\") " pod="openstack/nova-api-0" Nov 29 07:28:56 crc kubenswrapper[4731]: I1129 07:28:56.190103 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 29 07:28:56 crc kubenswrapper[4731]: I1129 07:28:56.192465 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 29 07:28:56 crc kubenswrapper[4731]: I1129 07:28:56.262842 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 29 07:28:56 crc kubenswrapper[4731]: I1129 07:28:56.746372 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 29 07:28:56 crc kubenswrapper[4731]: W1129 07:28:56.755008 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8e345272_7ce7_4a49_bac2_e85d0f9025cb.slice/crio-4d21a8257467029c5af257a9a9d00728dce4a911d45f629dfb2f9026274aa06e WatchSource:0}: Error finding container 4d21a8257467029c5af257a9a9d00728dce4a911d45f629dfb2f9026274aa06e: Status 404 returned error can't find the container with id 4d21a8257467029c5af257a9a9d00728dce4a911d45f629dfb2f9026274aa06e Nov 29 07:28:56 crc kubenswrapper[4731]: I1129 07:28:56.871493 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8e345272-7ce7-4a49-bac2-e85d0f9025cb","Type":"ContainerStarted","Data":"4d21a8257467029c5af257a9a9d00728dce4a911d45f629dfb2f9026274aa06e"} Nov 29 07:28:57 crc kubenswrapper[4731]: I1129 07:28:57.161017 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 29 07:28:57 crc kubenswrapper[4731]: I1129 07:28:57.302256 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-f6jkj" Nov 29 07:28:57 crc kubenswrapper[4731]: I1129 07:28:57.364642 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/737e38bd-78bb-41ef-acce-f65a427d5bd3-scripts\") pod \"737e38bd-78bb-41ef-acce-f65a427d5bd3\" (UID: \"737e38bd-78bb-41ef-acce-f65a427d5bd3\") " Nov 29 07:28:57 crc kubenswrapper[4731]: I1129 07:28:57.364710 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dqgbw\" (UniqueName: \"kubernetes.io/projected/737e38bd-78bb-41ef-acce-f65a427d5bd3-kube-api-access-dqgbw\") pod \"737e38bd-78bb-41ef-acce-f65a427d5bd3\" (UID: \"737e38bd-78bb-41ef-acce-f65a427d5bd3\") " Nov 29 07:28:57 crc kubenswrapper[4731]: I1129 07:28:57.364784 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/737e38bd-78bb-41ef-acce-f65a427d5bd3-config-data\") pod \"737e38bd-78bb-41ef-acce-f65a427d5bd3\" (UID: \"737e38bd-78bb-41ef-acce-f65a427d5bd3\") " Nov 29 07:28:57 crc kubenswrapper[4731]: I1129 07:28:57.365180 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/737e38bd-78bb-41ef-acce-f65a427d5bd3-combined-ca-bundle\") pod \"737e38bd-78bb-41ef-acce-f65a427d5bd3\" (UID: \"737e38bd-78bb-41ef-acce-f65a427d5bd3\") " Nov 29 07:28:57 crc kubenswrapper[4731]: I1129 07:28:57.372388 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/737e38bd-78bb-41ef-acce-f65a427d5bd3-scripts" (OuterVolumeSpecName: "scripts") pod "737e38bd-78bb-41ef-acce-f65a427d5bd3" (UID: "737e38bd-78bb-41ef-acce-f65a427d5bd3"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:28:57 crc kubenswrapper[4731]: I1129 07:28:57.379089 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/737e38bd-78bb-41ef-acce-f65a427d5bd3-kube-api-access-dqgbw" (OuterVolumeSpecName: "kube-api-access-dqgbw") pod "737e38bd-78bb-41ef-acce-f65a427d5bd3" (UID: "737e38bd-78bb-41ef-acce-f65a427d5bd3"). InnerVolumeSpecName "kube-api-access-dqgbw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:28:57 crc kubenswrapper[4731]: I1129 07:28:57.412852 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/737e38bd-78bb-41ef-acce-f65a427d5bd3-config-data" (OuterVolumeSpecName: "config-data") pod "737e38bd-78bb-41ef-acce-f65a427d5bd3" (UID: "737e38bd-78bb-41ef-acce-f65a427d5bd3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:28:57 crc kubenswrapper[4731]: I1129 07:28:57.419010 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/737e38bd-78bb-41ef-acce-f65a427d5bd3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "737e38bd-78bb-41ef-acce-f65a427d5bd3" (UID: "737e38bd-78bb-41ef-acce-f65a427d5bd3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:28:57 crc kubenswrapper[4731]: I1129 07:28:57.467727 4731 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/737e38bd-78bb-41ef-acce-f65a427d5bd3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:28:57 crc kubenswrapper[4731]: I1129 07:28:57.467769 4731 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/737e38bd-78bb-41ef-acce-f65a427d5bd3-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:28:57 crc kubenswrapper[4731]: I1129 07:28:57.467782 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dqgbw\" (UniqueName: \"kubernetes.io/projected/737e38bd-78bb-41ef-acce-f65a427d5bd3-kube-api-access-dqgbw\") on node \"crc\" DevicePath \"\"" Nov 29 07:28:57 crc kubenswrapper[4731]: I1129 07:28:57.467799 4731 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/737e38bd-78bb-41ef-acce-f65a427d5bd3-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:28:57 crc kubenswrapper[4731]: I1129 07:28:57.823503 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="658805c9-72b6-4313-b0d6-0aff821ff88d" path="/var/lib/kubelet/pods/658805c9-72b6-4313-b0d6-0aff821ff88d/volumes" Nov 29 07:28:57 crc kubenswrapper[4731]: I1129 07:28:57.890238 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-f6jkj" Nov 29 07:28:57 crc kubenswrapper[4731]: I1129 07:28:57.890228 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-f6jkj" event={"ID":"737e38bd-78bb-41ef-acce-f65a427d5bd3","Type":"ContainerDied","Data":"b652fff462504d37a8edddece85ca24fb7ac5bdb335b1c9f4a0edeb8dd95794d"} Nov 29 07:28:57 crc kubenswrapper[4731]: I1129 07:28:57.890366 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b652fff462504d37a8edddece85ca24fb7ac5bdb335b1c9f4a0edeb8dd95794d" Nov 29 07:28:57 crc kubenswrapper[4731]: I1129 07:28:57.892974 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8e345272-7ce7-4a49-bac2-e85d0f9025cb","Type":"ContainerStarted","Data":"4610bce7b7742e49dcf9520bdc49b4e3114c7c667853e3f726af99b6d7c888fd"} Nov 29 07:28:57 crc kubenswrapper[4731]: I1129 07:28:57.893015 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8e345272-7ce7-4a49-bac2-e85d0f9025cb","Type":"ContainerStarted","Data":"a067973007da29c03aba5f9032433d93d8168382deffbd3db73d958963cc63ea"} Nov 29 07:28:57 crc kubenswrapper[4731]: I1129 07:28:57.932475 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.932435807 podStartE2EDuration="2.932435807s" podCreationTimestamp="2025-11-29 07:28:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:28:57.919629121 +0000 UTC m=+1376.809990224" watchObservedRunningTime="2025-11-29 07:28:57.932435807 +0000 UTC m=+1376.822796910" Nov 29 07:28:58 crc kubenswrapper[4731]: I1129 07:28:58.016932 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 29 07:28:58 crc kubenswrapper[4731]: E1129 07:28:58.018031 4731 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="737e38bd-78bb-41ef-acce-f65a427d5bd3" containerName="nova-cell1-conductor-db-sync" Nov 29 07:28:58 crc kubenswrapper[4731]: I1129 07:28:58.018053 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="737e38bd-78bb-41ef-acce-f65a427d5bd3" containerName="nova-cell1-conductor-db-sync" Nov 29 07:28:58 crc kubenswrapper[4731]: I1129 07:28:58.018353 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="737e38bd-78bb-41ef-acce-f65a427d5bd3" containerName="nova-cell1-conductor-db-sync" Nov 29 07:28:58 crc kubenswrapper[4731]: I1129 07:28:58.020501 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 29 07:28:58 crc kubenswrapper[4731]: I1129 07:28:58.024638 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 29 07:28:58 crc kubenswrapper[4731]: I1129 07:28:58.030982 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 29 07:28:58 crc kubenswrapper[4731]: I1129 07:28:58.082242 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/162c1c6d-89a4-4eec-bfdb-dd972cd06f0e-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"162c1c6d-89a4-4eec-bfdb-dd972cd06f0e\") " pod="openstack/nova-cell1-conductor-0" Nov 29 07:28:58 crc kubenswrapper[4731]: I1129 07:28:58.082328 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/162c1c6d-89a4-4eec-bfdb-dd972cd06f0e-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"162c1c6d-89a4-4eec-bfdb-dd972cd06f0e\") " pod="openstack/nova-cell1-conductor-0" Nov 29 07:28:58 crc kubenswrapper[4731]: I1129 07:28:58.082480 4731 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jf9dp\" (UniqueName: \"kubernetes.io/projected/162c1c6d-89a4-4eec-bfdb-dd972cd06f0e-kube-api-access-jf9dp\") pod \"nova-cell1-conductor-0\" (UID: \"162c1c6d-89a4-4eec-bfdb-dd972cd06f0e\") " pod="openstack/nova-cell1-conductor-0" Nov 29 07:28:58 crc kubenswrapper[4731]: I1129 07:28:58.184483 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/162c1c6d-89a4-4eec-bfdb-dd972cd06f0e-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"162c1c6d-89a4-4eec-bfdb-dd972cd06f0e\") " pod="openstack/nova-cell1-conductor-0" Nov 29 07:28:58 crc kubenswrapper[4731]: I1129 07:28:58.184919 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jf9dp\" (UniqueName: \"kubernetes.io/projected/162c1c6d-89a4-4eec-bfdb-dd972cd06f0e-kube-api-access-jf9dp\") pod \"nova-cell1-conductor-0\" (UID: \"162c1c6d-89a4-4eec-bfdb-dd972cd06f0e\") " pod="openstack/nova-cell1-conductor-0" Nov 29 07:28:58 crc kubenswrapper[4731]: I1129 07:28:58.185093 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/162c1c6d-89a4-4eec-bfdb-dd972cd06f0e-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"162c1c6d-89a4-4eec-bfdb-dd972cd06f0e\") " pod="openstack/nova-cell1-conductor-0" Nov 29 07:28:58 crc kubenswrapper[4731]: I1129 07:28:58.190919 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/162c1c6d-89a4-4eec-bfdb-dd972cd06f0e-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"162c1c6d-89a4-4eec-bfdb-dd972cd06f0e\") " pod="openstack/nova-cell1-conductor-0" Nov 29 07:28:58 crc kubenswrapper[4731]: I1129 07:28:58.193517 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/162c1c6d-89a4-4eec-bfdb-dd972cd06f0e-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"162c1c6d-89a4-4eec-bfdb-dd972cd06f0e\") " pod="openstack/nova-cell1-conductor-0" Nov 29 07:28:58 crc kubenswrapper[4731]: I1129 07:28:58.208691 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jf9dp\" (UniqueName: \"kubernetes.io/projected/162c1c6d-89a4-4eec-bfdb-dd972cd06f0e-kube-api-access-jf9dp\") pod \"nova-cell1-conductor-0\" (UID: \"162c1c6d-89a4-4eec-bfdb-dd972cd06f0e\") " pod="openstack/nova-cell1-conductor-0" Nov 29 07:28:58 crc kubenswrapper[4731]: I1129 07:28:58.380310 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 29 07:28:58 crc kubenswrapper[4731]: I1129 07:28:58.904081 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 29 07:28:59 crc kubenswrapper[4731]: I1129 07:28:59.921934 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"162c1c6d-89a4-4eec-bfdb-dd972cd06f0e","Type":"ContainerStarted","Data":"155a54946db70578b0e142b4b5771c429d8fbb144cfd5dcf34c61ae17d1f884c"} Nov 29 07:28:59 crc kubenswrapper[4731]: I1129 07:28:59.922807 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Nov 29 07:28:59 crc kubenswrapper[4731]: I1129 07:28:59.922827 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"162c1c6d-89a4-4eec-bfdb-dd972cd06f0e","Type":"ContainerStarted","Data":"cf14966e3b90f38274b16382a75b1d8f32e6eb0a6280531b9d6b8d5647e78d49"} Nov 29 07:28:59 crc kubenswrapper[4731]: I1129 07:28:59.956066 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.956042771 podStartE2EDuration="2.956042771s" podCreationTimestamp="2025-11-29 07:28:57 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:28:59.947195328 +0000 UTC m=+1378.837556441" watchObservedRunningTime="2025-11-29 07:28:59.956042771 +0000 UTC m=+1378.846403874" Nov 29 07:29:01 crc kubenswrapper[4731]: I1129 07:29:01.190068 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 29 07:29:01 crc kubenswrapper[4731]: I1129 07:29:01.190446 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 29 07:29:02 crc kubenswrapper[4731]: I1129 07:29:02.161458 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 29 07:29:02 crc kubenswrapper[4731]: I1129 07:29:02.194934 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 29 07:29:02 crc kubenswrapper[4731]: I1129 07:29:02.200809 4731 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="ae93ab78-49fb-45cc-b10e-901326d1b1aa" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.193:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 29 07:29:02 crc kubenswrapper[4731]: I1129 07:29:02.200927 4731 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="ae93ab78-49fb-45cc-b10e-901326d1b1aa" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.193:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 29 07:29:02 crc kubenswrapper[4731]: I1129 07:29:02.998859 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 29 07:29:06 crc kubenswrapper[4731]: I1129 07:29:06.263447 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openstack/nova-api-0" Nov 29 07:29:06 crc kubenswrapper[4731]: I1129 07:29:06.263524 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 29 07:29:07 crc kubenswrapper[4731]: I1129 07:29:07.345509 4731 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="8e345272-7ce7-4a49-bac2-e85d0f9025cb" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.195:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 29 07:29:07 crc kubenswrapper[4731]: I1129 07:29:07.345933 4731 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="8e345272-7ce7-4a49-bac2-e85d0f9025cb" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.195:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 29 07:29:08 crc kubenswrapper[4731]: I1129 07:29:08.427055 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Nov 29 07:29:11 crc kubenswrapper[4731]: I1129 07:29:11.198131 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 29 07:29:11 crc kubenswrapper[4731]: I1129 07:29:11.198602 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 29 07:29:11 crc kubenswrapper[4731]: I1129 07:29:11.204731 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 29 07:29:11 crc kubenswrapper[4731]: I1129 07:29:11.205377 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 29 07:29:12 crc kubenswrapper[4731]: I1129 07:29:12.979390 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 29 07:29:13 crc kubenswrapper[4731]: I1129 07:29:13.086680 4731 generic.go:334] "Generic (PLEG): container finished" podID="6d3a4fc9-88ad-4fb7-ae26-8cc720a3138e" containerID="93e93197f8c2c9d087bd29e8d1e6486a70819eb6eba645d09f67a8a051492f75" exitCode=137 Nov 29 07:29:13 crc kubenswrapper[4731]: I1129 07:29:13.086765 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6d3a4fc9-88ad-4fb7-ae26-8cc720a3138e","Type":"ContainerDied","Data":"93e93197f8c2c9d087bd29e8d1e6486a70819eb6eba645d09f67a8a051492f75"} Nov 29 07:29:13 crc kubenswrapper[4731]: I1129 07:29:13.086835 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 29 07:29:13 crc kubenswrapper[4731]: I1129 07:29:13.086861 4731 scope.go:117] "RemoveContainer" containerID="93e93197f8c2c9d087bd29e8d1e6486a70819eb6eba645d09f67a8a051492f75" Nov 29 07:29:13 crc kubenswrapper[4731]: I1129 07:29:13.086844 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6d3a4fc9-88ad-4fb7-ae26-8cc720a3138e","Type":"ContainerDied","Data":"39a68ed66a45064b42886d873296dab53e7695ee29fb0ebf180888957cebcc11"} Nov 29 07:29:13 crc kubenswrapper[4731]: I1129 07:29:13.098460 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d3a4fc9-88ad-4fb7-ae26-8cc720a3138e-config-data\") pod \"6d3a4fc9-88ad-4fb7-ae26-8cc720a3138e\" (UID: \"6d3a4fc9-88ad-4fb7-ae26-8cc720a3138e\") " Nov 29 07:29:13 crc kubenswrapper[4731]: I1129 07:29:13.098712 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d3a4fc9-88ad-4fb7-ae26-8cc720a3138e-combined-ca-bundle\") pod \"6d3a4fc9-88ad-4fb7-ae26-8cc720a3138e\" (UID: 
\"6d3a4fc9-88ad-4fb7-ae26-8cc720a3138e\") " Nov 29 07:29:13 crc kubenswrapper[4731]: I1129 07:29:13.098806 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xd6tq\" (UniqueName: \"kubernetes.io/projected/6d3a4fc9-88ad-4fb7-ae26-8cc720a3138e-kube-api-access-xd6tq\") pod \"6d3a4fc9-88ad-4fb7-ae26-8cc720a3138e\" (UID: \"6d3a4fc9-88ad-4fb7-ae26-8cc720a3138e\") " Nov 29 07:29:13 crc kubenswrapper[4731]: I1129 07:29:13.108152 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d3a4fc9-88ad-4fb7-ae26-8cc720a3138e-kube-api-access-xd6tq" (OuterVolumeSpecName: "kube-api-access-xd6tq") pod "6d3a4fc9-88ad-4fb7-ae26-8cc720a3138e" (UID: "6d3a4fc9-88ad-4fb7-ae26-8cc720a3138e"). InnerVolumeSpecName "kube-api-access-xd6tq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:29:13 crc kubenswrapper[4731]: I1129 07:29:13.115433 4731 scope.go:117] "RemoveContainer" containerID="93e93197f8c2c9d087bd29e8d1e6486a70819eb6eba645d09f67a8a051492f75" Nov 29 07:29:13 crc kubenswrapper[4731]: E1129 07:29:13.116317 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93e93197f8c2c9d087bd29e8d1e6486a70819eb6eba645d09f67a8a051492f75\": container with ID starting with 93e93197f8c2c9d087bd29e8d1e6486a70819eb6eba645d09f67a8a051492f75 not found: ID does not exist" containerID="93e93197f8c2c9d087bd29e8d1e6486a70819eb6eba645d09f67a8a051492f75" Nov 29 07:29:13 crc kubenswrapper[4731]: I1129 07:29:13.116496 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93e93197f8c2c9d087bd29e8d1e6486a70819eb6eba645d09f67a8a051492f75"} err="failed to get container status \"93e93197f8c2c9d087bd29e8d1e6486a70819eb6eba645d09f67a8a051492f75\": rpc error: code = NotFound desc = could not find container \"93e93197f8c2c9d087bd29e8d1e6486a70819eb6eba645d09f67a8a051492f75\": container with 
ID starting with 93e93197f8c2c9d087bd29e8d1e6486a70819eb6eba645d09f67a8a051492f75 not found: ID does not exist" Nov 29 07:29:13 crc kubenswrapper[4731]: I1129 07:29:13.127979 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d3a4fc9-88ad-4fb7-ae26-8cc720a3138e-config-data" (OuterVolumeSpecName: "config-data") pod "6d3a4fc9-88ad-4fb7-ae26-8cc720a3138e" (UID: "6d3a4fc9-88ad-4fb7-ae26-8cc720a3138e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:29:13 crc kubenswrapper[4731]: I1129 07:29:13.134760 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d3a4fc9-88ad-4fb7-ae26-8cc720a3138e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6d3a4fc9-88ad-4fb7-ae26-8cc720a3138e" (UID: "6d3a4fc9-88ad-4fb7-ae26-8cc720a3138e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:29:13 crc kubenswrapper[4731]: I1129 07:29:13.202301 4731 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d3a4fc9-88ad-4fb7-ae26-8cc720a3138e-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:29:13 crc kubenswrapper[4731]: I1129 07:29:13.202629 4731 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d3a4fc9-88ad-4fb7-ae26-8cc720a3138e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:29:13 crc kubenswrapper[4731]: I1129 07:29:13.202765 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xd6tq\" (UniqueName: \"kubernetes.io/projected/6d3a4fc9-88ad-4fb7-ae26-8cc720a3138e-kube-api-access-xd6tq\") on node \"crc\" DevicePath \"\"" Nov 29 07:29:13 crc kubenswrapper[4731]: I1129 07:29:13.431246 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 29 07:29:13 crc 
kubenswrapper[4731]: I1129 07:29:13.453994 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 29 07:29:13 crc kubenswrapper[4731]: I1129 07:29:13.466442 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 29 07:29:13 crc kubenswrapper[4731]: E1129 07:29:13.467101 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d3a4fc9-88ad-4fb7-ae26-8cc720a3138e" containerName="nova-cell1-novncproxy-novncproxy" Nov 29 07:29:13 crc kubenswrapper[4731]: I1129 07:29:13.467128 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d3a4fc9-88ad-4fb7-ae26-8cc720a3138e" containerName="nova-cell1-novncproxy-novncproxy" Nov 29 07:29:13 crc kubenswrapper[4731]: I1129 07:29:13.467430 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d3a4fc9-88ad-4fb7-ae26-8cc720a3138e" containerName="nova-cell1-novncproxy-novncproxy" Nov 29 07:29:13 crc kubenswrapper[4731]: I1129 07:29:13.468675 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 29 07:29:13 crc kubenswrapper[4731]: I1129 07:29:13.473033 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Nov 29 07:29:13 crc kubenswrapper[4731]: I1129 07:29:13.473340 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 29 07:29:13 crc kubenswrapper[4731]: I1129 07:29:13.473330 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Nov 29 07:29:13 crc kubenswrapper[4731]: I1129 07:29:13.479662 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 29 07:29:13 crc kubenswrapper[4731]: I1129 07:29:13.627904 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cdcd6a9-b7ab-4fdd-8b5a-6ff557ef3d28-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6cdcd6a9-b7ab-4fdd-8b5a-6ff557ef3d28\") " pod="openstack/nova-cell1-novncproxy-0" Nov 29 07:29:13 crc kubenswrapper[4731]: I1129 07:29:13.628768 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/6cdcd6a9-b7ab-4fdd-8b5a-6ff557ef3d28-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6cdcd6a9-b7ab-4fdd-8b5a-6ff557ef3d28\") " pod="openstack/nova-cell1-novncproxy-0" Nov 29 07:29:13 crc kubenswrapper[4731]: I1129 07:29:13.628979 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/6cdcd6a9-b7ab-4fdd-8b5a-6ff557ef3d28-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6cdcd6a9-b7ab-4fdd-8b5a-6ff557ef3d28\") " pod="openstack/nova-cell1-novncproxy-0" Nov 29 
07:29:13 crc kubenswrapper[4731]: I1129 07:29:13.629204 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cdcd6a9-b7ab-4fdd-8b5a-6ff557ef3d28-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6cdcd6a9-b7ab-4fdd-8b5a-6ff557ef3d28\") " pod="openstack/nova-cell1-novncproxy-0" Nov 29 07:29:13 crc kubenswrapper[4731]: I1129 07:29:13.629403 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qr7mc\" (UniqueName: \"kubernetes.io/projected/6cdcd6a9-b7ab-4fdd-8b5a-6ff557ef3d28-kube-api-access-qr7mc\") pod \"nova-cell1-novncproxy-0\" (UID: \"6cdcd6a9-b7ab-4fdd-8b5a-6ff557ef3d28\") " pod="openstack/nova-cell1-novncproxy-0" Nov 29 07:29:13 crc kubenswrapper[4731]: I1129 07:29:13.731389 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qr7mc\" (UniqueName: \"kubernetes.io/projected/6cdcd6a9-b7ab-4fdd-8b5a-6ff557ef3d28-kube-api-access-qr7mc\") pod \"nova-cell1-novncproxy-0\" (UID: \"6cdcd6a9-b7ab-4fdd-8b5a-6ff557ef3d28\") " pod="openstack/nova-cell1-novncproxy-0" Nov 29 07:29:13 crc kubenswrapper[4731]: I1129 07:29:13.731489 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cdcd6a9-b7ab-4fdd-8b5a-6ff557ef3d28-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6cdcd6a9-b7ab-4fdd-8b5a-6ff557ef3d28\") " pod="openstack/nova-cell1-novncproxy-0" Nov 29 07:29:13 crc kubenswrapper[4731]: I1129 07:29:13.731590 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/6cdcd6a9-b7ab-4fdd-8b5a-6ff557ef3d28-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6cdcd6a9-b7ab-4fdd-8b5a-6ff557ef3d28\") " pod="openstack/nova-cell1-novncproxy-0" Nov 29 07:29:13 crc 
kubenswrapper[4731]: I1129 07:29:13.731650 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/6cdcd6a9-b7ab-4fdd-8b5a-6ff557ef3d28-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6cdcd6a9-b7ab-4fdd-8b5a-6ff557ef3d28\") " pod="openstack/nova-cell1-novncproxy-0" Nov 29 07:29:13 crc kubenswrapper[4731]: I1129 07:29:13.731702 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cdcd6a9-b7ab-4fdd-8b5a-6ff557ef3d28-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6cdcd6a9-b7ab-4fdd-8b5a-6ff557ef3d28\") " pod="openstack/nova-cell1-novncproxy-0" Nov 29 07:29:13 crc kubenswrapper[4731]: I1129 07:29:13.736539 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/6cdcd6a9-b7ab-4fdd-8b5a-6ff557ef3d28-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6cdcd6a9-b7ab-4fdd-8b5a-6ff557ef3d28\") " pod="openstack/nova-cell1-novncproxy-0" Nov 29 07:29:13 crc kubenswrapper[4731]: I1129 07:29:13.737852 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cdcd6a9-b7ab-4fdd-8b5a-6ff557ef3d28-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6cdcd6a9-b7ab-4fdd-8b5a-6ff557ef3d28\") " pod="openstack/nova-cell1-novncproxy-0" Nov 29 07:29:13 crc kubenswrapper[4731]: I1129 07:29:13.738330 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/6cdcd6a9-b7ab-4fdd-8b5a-6ff557ef3d28-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6cdcd6a9-b7ab-4fdd-8b5a-6ff557ef3d28\") " pod="openstack/nova-cell1-novncproxy-0" Nov 29 07:29:13 crc kubenswrapper[4731]: I1129 07:29:13.744046 4731 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cdcd6a9-b7ab-4fdd-8b5a-6ff557ef3d28-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6cdcd6a9-b7ab-4fdd-8b5a-6ff557ef3d28\") " pod="openstack/nova-cell1-novncproxy-0" Nov 29 07:29:13 crc kubenswrapper[4731]: I1129 07:29:13.751795 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qr7mc\" (UniqueName: \"kubernetes.io/projected/6cdcd6a9-b7ab-4fdd-8b5a-6ff557ef3d28-kube-api-access-qr7mc\") pod \"nova-cell1-novncproxy-0\" (UID: \"6cdcd6a9-b7ab-4fdd-8b5a-6ff557ef3d28\") " pod="openstack/nova-cell1-novncproxy-0" Nov 29 07:29:13 crc kubenswrapper[4731]: I1129 07:29:13.789496 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 29 07:29:13 crc kubenswrapper[4731]: I1129 07:29:13.823579 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d3a4fc9-88ad-4fb7-ae26-8cc720a3138e" path="/var/lib/kubelet/pods/6d3a4fc9-88ad-4fb7-ae26-8cc720a3138e/volumes" Nov 29 07:29:14 crc kubenswrapper[4731]: I1129 07:29:14.298776 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 29 07:29:15 crc kubenswrapper[4731]: I1129 07:29:15.112454 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6cdcd6a9-b7ab-4fdd-8b5a-6ff557ef3d28","Type":"ContainerStarted","Data":"97fc923ad77204b8f1d1bb87abf01a19fbde3dbe8ba5bdde2c1fe0b2c8d4238b"} Nov 29 07:29:15 crc kubenswrapper[4731]: I1129 07:29:15.112829 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6cdcd6a9-b7ab-4fdd-8b5a-6ff557ef3d28","Type":"ContainerStarted","Data":"21d709561120a6c685ae504677f8c6e9faeb08d8c0db808bdc37396ce91600a4"} Nov 29 07:29:15 crc kubenswrapper[4731]: I1129 07:29:15.136700 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.136672888 podStartE2EDuration="2.136672888s" podCreationTimestamp="2025-11-29 07:29:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:29:15.134798964 +0000 UTC m=+1394.025160067" watchObservedRunningTime="2025-11-29 07:29:15.136672888 +0000 UTC m=+1394.027033991" Nov 29 07:29:16 crc kubenswrapper[4731]: I1129 07:29:16.268786 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 29 07:29:16 crc kubenswrapper[4731]: I1129 07:29:16.269228 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 29 07:29:16 crc kubenswrapper[4731]: I1129 07:29:16.269586 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 29 07:29:16 crc kubenswrapper[4731]: I1129 07:29:16.269643 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 29 07:29:16 crc kubenswrapper[4731]: I1129 07:29:16.273596 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 29 07:29:16 crc kubenswrapper[4731]: I1129 07:29:16.276297 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 29 07:29:16 crc kubenswrapper[4731]: I1129 07:29:16.531338 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-wspqf"] Nov 29 07:29:16 crc kubenswrapper[4731]: I1129 07:29:16.533692 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-wspqf" Nov 29 07:29:16 crc kubenswrapper[4731]: I1129 07:29:16.547972 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-wspqf"] Nov 29 07:29:16 crc kubenswrapper[4731]: I1129 07:29:16.599388 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e5afc5eb-08df-4f82-b357-f1672ff71eaa-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-wspqf\" (UID: \"e5afc5eb-08df-4f82-b357-f1672ff71eaa\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-wspqf" Nov 29 07:29:16 crc kubenswrapper[4731]: I1129 07:29:16.599496 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e5afc5eb-08df-4f82-b357-f1672ff71eaa-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-wspqf\" (UID: \"e5afc5eb-08df-4f82-b357-f1672ff71eaa\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-wspqf" Nov 29 07:29:16 crc kubenswrapper[4731]: I1129 07:29:16.599733 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5afc5eb-08df-4f82-b357-f1672ff71eaa-config\") pod \"dnsmasq-dns-cd5cbd7b9-wspqf\" (UID: \"e5afc5eb-08df-4f82-b357-f1672ff71eaa\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-wspqf" Nov 29 07:29:16 crc kubenswrapper[4731]: I1129 07:29:16.599809 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dq2xw\" (UniqueName: \"kubernetes.io/projected/e5afc5eb-08df-4f82-b357-f1672ff71eaa-kube-api-access-dq2xw\") pod \"dnsmasq-dns-cd5cbd7b9-wspqf\" (UID: \"e5afc5eb-08df-4f82-b357-f1672ff71eaa\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-wspqf" Nov 29 07:29:16 crc kubenswrapper[4731]: I1129 07:29:16.599863 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e5afc5eb-08df-4f82-b357-f1672ff71eaa-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-wspqf\" (UID: \"e5afc5eb-08df-4f82-b357-f1672ff71eaa\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-wspqf" Nov 29 07:29:16 crc kubenswrapper[4731]: I1129 07:29:16.599943 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e5afc5eb-08df-4f82-b357-f1672ff71eaa-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-wspqf\" (UID: \"e5afc5eb-08df-4f82-b357-f1672ff71eaa\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-wspqf" Nov 29 07:29:16 crc kubenswrapper[4731]: I1129 07:29:16.717910 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5afc5eb-08df-4f82-b357-f1672ff71eaa-config\") pod \"dnsmasq-dns-cd5cbd7b9-wspqf\" (UID: \"e5afc5eb-08df-4f82-b357-f1672ff71eaa\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-wspqf" Nov 29 07:29:16 crc kubenswrapper[4731]: I1129 07:29:16.718006 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dq2xw\" (UniqueName: \"kubernetes.io/projected/e5afc5eb-08df-4f82-b357-f1672ff71eaa-kube-api-access-dq2xw\") pod \"dnsmasq-dns-cd5cbd7b9-wspqf\" (UID: \"e5afc5eb-08df-4f82-b357-f1672ff71eaa\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-wspqf" Nov 29 07:29:16 crc kubenswrapper[4731]: I1129 07:29:16.718062 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e5afc5eb-08df-4f82-b357-f1672ff71eaa-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-wspqf\" (UID: \"e5afc5eb-08df-4f82-b357-f1672ff71eaa\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-wspqf" Nov 29 07:29:16 crc kubenswrapper[4731]: I1129 07:29:16.718126 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" 
(UniqueName: \"kubernetes.io/configmap/e5afc5eb-08df-4f82-b357-f1672ff71eaa-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-wspqf\" (UID: \"e5afc5eb-08df-4f82-b357-f1672ff71eaa\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-wspqf" Nov 29 07:29:16 crc kubenswrapper[4731]: I1129 07:29:16.718249 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e5afc5eb-08df-4f82-b357-f1672ff71eaa-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-wspqf\" (UID: \"e5afc5eb-08df-4f82-b357-f1672ff71eaa\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-wspqf" Nov 29 07:29:16 crc kubenswrapper[4731]: I1129 07:29:16.718345 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e5afc5eb-08df-4f82-b357-f1672ff71eaa-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-wspqf\" (UID: \"e5afc5eb-08df-4f82-b357-f1672ff71eaa\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-wspqf" Nov 29 07:29:16 crc kubenswrapper[4731]: I1129 07:29:16.720216 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5afc5eb-08df-4f82-b357-f1672ff71eaa-config\") pod \"dnsmasq-dns-cd5cbd7b9-wspqf\" (UID: \"e5afc5eb-08df-4f82-b357-f1672ff71eaa\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-wspqf" Nov 29 07:29:16 crc kubenswrapper[4731]: I1129 07:29:16.720444 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e5afc5eb-08df-4f82-b357-f1672ff71eaa-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-wspqf\" (UID: \"e5afc5eb-08df-4f82-b357-f1672ff71eaa\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-wspqf" Nov 29 07:29:16 crc kubenswrapper[4731]: I1129 07:29:16.720466 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e5afc5eb-08df-4f82-b357-f1672ff71eaa-dns-svc\") pod 
\"dnsmasq-dns-cd5cbd7b9-wspqf\" (UID: \"e5afc5eb-08df-4f82-b357-f1672ff71eaa\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-wspqf" Nov 29 07:29:16 crc kubenswrapper[4731]: I1129 07:29:16.720736 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e5afc5eb-08df-4f82-b357-f1672ff71eaa-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-wspqf\" (UID: \"e5afc5eb-08df-4f82-b357-f1672ff71eaa\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-wspqf" Nov 29 07:29:16 crc kubenswrapper[4731]: I1129 07:29:16.720914 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e5afc5eb-08df-4f82-b357-f1672ff71eaa-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-wspqf\" (UID: \"e5afc5eb-08df-4f82-b357-f1672ff71eaa\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-wspqf" Nov 29 07:29:16 crc kubenswrapper[4731]: I1129 07:29:16.789378 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dq2xw\" (UniqueName: \"kubernetes.io/projected/e5afc5eb-08df-4f82-b357-f1672ff71eaa-kube-api-access-dq2xw\") pod \"dnsmasq-dns-cd5cbd7b9-wspqf\" (UID: \"e5afc5eb-08df-4f82-b357-f1672ff71eaa\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-wspqf" Nov 29 07:29:16 crc kubenswrapper[4731]: I1129 07:29:16.876745 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-wspqf" Nov 29 07:29:17 crc kubenswrapper[4731]: I1129 07:29:17.452279 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-wspqf"] Nov 29 07:29:18 crc kubenswrapper[4731]: I1129 07:29:18.147210 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-wspqf" event={"ID":"e5afc5eb-08df-4f82-b357-f1672ff71eaa","Type":"ContainerStarted","Data":"2137ecf54d25de4cb61cfdc8925e4dc72b03ee871c4aa170890aa3db752a6de7"} Nov 29 07:29:18 crc kubenswrapper[4731]: I1129 07:29:18.790707 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Nov 29 07:29:19 crc kubenswrapper[4731]: I1129 07:29:19.161688 4731 generic.go:334] "Generic (PLEG): container finished" podID="e5afc5eb-08df-4f82-b357-f1672ff71eaa" containerID="a9749adf34e60a9a5b473b8cae55bcb9c53e1e9218130e6a72abcc781a667185" exitCode=0 Nov 29 07:29:19 crc kubenswrapper[4731]: I1129 07:29:19.161753 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-wspqf" event={"ID":"e5afc5eb-08df-4f82-b357-f1672ff71eaa","Type":"ContainerDied","Data":"a9749adf34e60a9a5b473b8cae55bcb9c53e1e9218130e6a72abcc781a667185"} Nov 29 07:29:19 crc kubenswrapper[4731]: I1129 07:29:19.373985 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 29 07:29:19 crc kubenswrapper[4731]: I1129 07:29:19.374284 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="8e345272-7ce7-4a49-bac2-e85d0f9025cb" containerName="nova-api-log" containerID="cri-o://a067973007da29c03aba5f9032433d93d8168382deffbd3db73d958963cc63ea" gracePeriod=30 Nov 29 07:29:19 crc kubenswrapper[4731]: I1129 07:29:19.374420 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="8e345272-7ce7-4a49-bac2-e85d0f9025cb" 
containerName="nova-api-api" containerID="cri-o://4610bce7b7742e49dcf9520bdc49b4e3114c7c667853e3f726af99b6d7c888fd" gracePeriod=30 Nov 29 07:29:19 crc kubenswrapper[4731]: I1129 07:29:19.764455 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:29:19 crc kubenswrapper[4731]: I1129 07:29:19.765271 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="12acd036-5b27-4f6c-82f2-9564eabc1906" containerName="ceilometer-central-agent" containerID="cri-o://ba0ea076c607d6576394ee0e54dc13306f81b368177f249a8a9e4cfb251a2d68" gracePeriod=30 Nov 29 07:29:19 crc kubenswrapper[4731]: I1129 07:29:19.765357 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="12acd036-5b27-4f6c-82f2-9564eabc1906" containerName="proxy-httpd" containerID="cri-o://2841dca3c7d5a640ed6bebd2970c5cd584b8303fb1440dae7140cf3a9a23a460" gracePeriod=30 Nov 29 07:29:19 crc kubenswrapper[4731]: I1129 07:29:19.765463 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="12acd036-5b27-4f6c-82f2-9564eabc1906" containerName="ceilometer-notification-agent" containerID="cri-o://9ddf9dded0077bd3aa6215cdc48416ab5ec6e907f2fe4672d8b003628a243f2d" gracePeriod=30 Nov 29 07:29:19 crc kubenswrapper[4731]: I1129 07:29:19.765448 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="12acd036-5b27-4f6c-82f2-9564eabc1906" containerName="sg-core" containerID="cri-o://b30ca5e9c96d22ec0fcc5680995f3a27b2f92f6731bf66980e60874bea9a4ed6" gracePeriod=30 Nov 29 07:29:20 crc kubenswrapper[4731]: I1129 07:29:20.174617 4731 generic.go:334] "Generic (PLEG): container finished" podID="8e345272-7ce7-4a49-bac2-e85d0f9025cb" containerID="a067973007da29c03aba5f9032433d93d8168382deffbd3db73d958963cc63ea" exitCode=143 Nov 29 07:29:20 crc kubenswrapper[4731]: I1129 07:29:20.174684 
4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8e345272-7ce7-4a49-bac2-e85d0f9025cb","Type":"ContainerDied","Data":"a067973007da29c03aba5f9032433d93d8168382deffbd3db73d958963cc63ea"} Nov 29 07:29:20 crc kubenswrapper[4731]: I1129 07:29:20.176941 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-wspqf" event={"ID":"e5afc5eb-08df-4f82-b357-f1672ff71eaa","Type":"ContainerStarted","Data":"bb2af2da61b08093398ffb704b8165510e5fcde4d9062b23da6917f81738c2c6"} Nov 29 07:29:20 crc kubenswrapper[4731]: I1129 07:29:20.179435 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-cd5cbd7b9-wspqf" Nov 29 07:29:20 crc kubenswrapper[4731]: I1129 07:29:20.184291 4731 generic.go:334] "Generic (PLEG): container finished" podID="12acd036-5b27-4f6c-82f2-9564eabc1906" containerID="2841dca3c7d5a640ed6bebd2970c5cd584b8303fb1440dae7140cf3a9a23a460" exitCode=0 Nov 29 07:29:20 crc kubenswrapper[4731]: I1129 07:29:20.184342 4731 generic.go:334] "Generic (PLEG): container finished" podID="12acd036-5b27-4f6c-82f2-9564eabc1906" containerID="b30ca5e9c96d22ec0fcc5680995f3a27b2f92f6731bf66980e60874bea9a4ed6" exitCode=2 Nov 29 07:29:20 crc kubenswrapper[4731]: I1129 07:29:20.184376 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"12acd036-5b27-4f6c-82f2-9564eabc1906","Type":"ContainerDied","Data":"2841dca3c7d5a640ed6bebd2970c5cd584b8303fb1440dae7140cf3a9a23a460"} Nov 29 07:29:20 crc kubenswrapper[4731]: I1129 07:29:20.184419 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"12acd036-5b27-4f6c-82f2-9564eabc1906","Type":"ContainerDied","Data":"b30ca5e9c96d22ec0fcc5680995f3a27b2f92f6731bf66980e60874bea9a4ed6"} Nov 29 07:29:20 crc kubenswrapper[4731]: I1129 07:29:20.215236 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/dnsmasq-dns-cd5cbd7b9-wspqf" podStartSLOduration=4.215202727 podStartE2EDuration="4.215202727s" podCreationTimestamp="2025-11-29 07:29:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:29:20.207859807 +0000 UTC m=+1399.098220920" watchObservedRunningTime="2025-11-29 07:29:20.215202727 +0000 UTC m=+1399.105563830" Nov 29 07:29:21 crc kubenswrapper[4731]: I1129 07:29:21.216095 4731 generic.go:334] "Generic (PLEG): container finished" podID="12acd036-5b27-4f6c-82f2-9564eabc1906" containerID="ba0ea076c607d6576394ee0e54dc13306f81b368177f249a8a9e4cfb251a2d68" exitCode=0 Nov 29 07:29:21 crc kubenswrapper[4731]: I1129 07:29:21.219809 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"12acd036-5b27-4f6c-82f2-9564eabc1906","Type":"ContainerDied","Data":"ba0ea076c607d6576394ee0e54dc13306f81b368177f249a8a9e4cfb251a2d68"} Nov 29 07:29:23 crc kubenswrapper[4731]: I1129 07:29:23.029862 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 29 07:29:23 crc kubenswrapper[4731]: I1129 07:29:23.117807 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xntdz\" (UniqueName: \"kubernetes.io/projected/8e345272-7ce7-4a49-bac2-e85d0f9025cb-kube-api-access-xntdz\") pod \"8e345272-7ce7-4a49-bac2-e85d0f9025cb\" (UID: \"8e345272-7ce7-4a49-bac2-e85d0f9025cb\") " Nov 29 07:29:23 crc kubenswrapper[4731]: I1129 07:29:23.118039 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e345272-7ce7-4a49-bac2-e85d0f9025cb-config-data\") pod \"8e345272-7ce7-4a49-bac2-e85d0f9025cb\" (UID: \"8e345272-7ce7-4a49-bac2-e85d0f9025cb\") " Nov 29 07:29:23 crc kubenswrapper[4731]: I1129 07:29:23.118078 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e345272-7ce7-4a49-bac2-e85d0f9025cb-combined-ca-bundle\") pod \"8e345272-7ce7-4a49-bac2-e85d0f9025cb\" (UID: \"8e345272-7ce7-4a49-bac2-e85d0f9025cb\") " Nov 29 07:29:23 crc kubenswrapper[4731]: I1129 07:29:23.118234 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e345272-7ce7-4a49-bac2-e85d0f9025cb-logs\") pod \"8e345272-7ce7-4a49-bac2-e85d0f9025cb\" (UID: \"8e345272-7ce7-4a49-bac2-e85d0f9025cb\") " Nov 29 07:29:23 crc kubenswrapper[4731]: I1129 07:29:23.118733 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e345272-7ce7-4a49-bac2-e85d0f9025cb-logs" (OuterVolumeSpecName: "logs") pod "8e345272-7ce7-4a49-bac2-e85d0f9025cb" (UID: "8e345272-7ce7-4a49-bac2-e85d0f9025cb"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:29:23 crc kubenswrapper[4731]: I1129 07:29:23.119073 4731 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e345272-7ce7-4a49-bac2-e85d0f9025cb-logs\") on node \"crc\" DevicePath \"\"" Nov 29 07:29:23 crc kubenswrapper[4731]: I1129 07:29:23.143808 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e345272-7ce7-4a49-bac2-e85d0f9025cb-kube-api-access-xntdz" (OuterVolumeSpecName: "kube-api-access-xntdz") pod "8e345272-7ce7-4a49-bac2-e85d0f9025cb" (UID: "8e345272-7ce7-4a49-bac2-e85d0f9025cb"). InnerVolumeSpecName "kube-api-access-xntdz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:29:23 crc kubenswrapper[4731]: I1129 07:29:23.155322 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e345272-7ce7-4a49-bac2-e85d0f9025cb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8e345272-7ce7-4a49-bac2-e85d0f9025cb" (UID: "8e345272-7ce7-4a49-bac2-e85d0f9025cb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:29:23 crc kubenswrapper[4731]: I1129 07:29:23.156780 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e345272-7ce7-4a49-bac2-e85d0f9025cb-config-data" (OuterVolumeSpecName: "config-data") pod "8e345272-7ce7-4a49-bac2-e85d0f9025cb" (UID: "8e345272-7ce7-4a49-bac2-e85d0f9025cb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:29:23 crc kubenswrapper[4731]: I1129 07:29:23.221652 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xntdz\" (UniqueName: \"kubernetes.io/projected/8e345272-7ce7-4a49-bac2-e85d0f9025cb-kube-api-access-xntdz\") on node \"crc\" DevicePath \"\"" Nov 29 07:29:23 crc kubenswrapper[4731]: I1129 07:29:23.222034 4731 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e345272-7ce7-4a49-bac2-e85d0f9025cb-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:29:23 crc kubenswrapper[4731]: I1129 07:29:23.222053 4731 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e345272-7ce7-4a49-bac2-e85d0f9025cb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:29:23 crc kubenswrapper[4731]: I1129 07:29:23.245547 4731 generic.go:334] "Generic (PLEG): container finished" podID="8e345272-7ce7-4a49-bac2-e85d0f9025cb" containerID="4610bce7b7742e49dcf9520bdc49b4e3114c7c667853e3f726af99b6d7c888fd" exitCode=0 Nov 29 07:29:23 crc kubenswrapper[4731]: I1129 07:29:23.245659 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8e345272-7ce7-4a49-bac2-e85d0f9025cb","Type":"ContainerDied","Data":"4610bce7b7742e49dcf9520bdc49b4e3114c7c667853e3f726af99b6d7c888fd"} Nov 29 07:29:23 crc kubenswrapper[4731]: I1129 07:29:23.245718 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8e345272-7ce7-4a49-bac2-e85d0f9025cb","Type":"ContainerDied","Data":"4d21a8257467029c5af257a9a9d00728dce4a911d45f629dfb2f9026274aa06e"} Nov 29 07:29:23 crc kubenswrapper[4731]: I1129 07:29:23.245749 4731 scope.go:117] "RemoveContainer" containerID="4610bce7b7742e49dcf9520bdc49b4e3114c7c667853e3f726af99b6d7c888fd" Nov 29 07:29:23 crc kubenswrapper[4731]: I1129 07:29:23.245928 4731 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openstack/nova-api-0" Nov 29 07:29:23 crc kubenswrapper[4731]: I1129 07:29:23.280586 4731 scope.go:117] "RemoveContainer" containerID="a067973007da29c03aba5f9032433d93d8168382deffbd3db73d958963cc63ea" Nov 29 07:29:23 crc kubenswrapper[4731]: I1129 07:29:23.306638 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 29 07:29:23 crc kubenswrapper[4731]: I1129 07:29:23.328108 4731 scope.go:117] "RemoveContainer" containerID="4610bce7b7742e49dcf9520bdc49b4e3114c7c667853e3f726af99b6d7c888fd" Nov 29 07:29:23 crc kubenswrapper[4731]: E1129 07:29:23.329925 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4610bce7b7742e49dcf9520bdc49b4e3114c7c667853e3f726af99b6d7c888fd\": container with ID starting with 4610bce7b7742e49dcf9520bdc49b4e3114c7c667853e3f726af99b6d7c888fd not found: ID does not exist" containerID="4610bce7b7742e49dcf9520bdc49b4e3114c7c667853e3f726af99b6d7c888fd" Nov 29 07:29:23 crc kubenswrapper[4731]: I1129 07:29:23.330018 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4610bce7b7742e49dcf9520bdc49b4e3114c7c667853e3f726af99b6d7c888fd"} err="failed to get container status \"4610bce7b7742e49dcf9520bdc49b4e3114c7c667853e3f726af99b6d7c888fd\": rpc error: code = NotFound desc = could not find container \"4610bce7b7742e49dcf9520bdc49b4e3114c7c667853e3f726af99b6d7c888fd\": container with ID starting with 4610bce7b7742e49dcf9520bdc49b4e3114c7c667853e3f726af99b6d7c888fd not found: ID does not exist" Nov 29 07:29:23 crc kubenswrapper[4731]: I1129 07:29:23.330088 4731 scope.go:117] "RemoveContainer" containerID="a067973007da29c03aba5f9032433d93d8168382deffbd3db73d958963cc63ea" Nov 29 07:29:23 crc kubenswrapper[4731]: E1129 07:29:23.330767 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"a067973007da29c03aba5f9032433d93d8168382deffbd3db73d958963cc63ea\": container with ID starting with a067973007da29c03aba5f9032433d93d8168382deffbd3db73d958963cc63ea not found: ID does not exist" containerID="a067973007da29c03aba5f9032433d93d8168382deffbd3db73d958963cc63ea" Nov 29 07:29:23 crc kubenswrapper[4731]: I1129 07:29:23.330798 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a067973007da29c03aba5f9032433d93d8168382deffbd3db73d958963cc63ea"} err="failed to get container status \"a067973007da29c03aba5f9032433d93d8168382deffbd3db73d958963cc63ea\": rpc error: code = NotFound desc = could not find container \"a067973007da29c03aba5f9032433d93d8168382deffbd3db73d958963cc63ea\": container with ID starting with a067973007da29c03aba5f9032433d93d8168382deffbd3db73d958963cc63ea not found: ID does not exist" Nov 29 07:29:23 crc kubenswrapper[4731]: I1129 07:29:23.343137 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 29 07:29:23 crc kubenswrapper[4731]: I1129 07:29:23.360960 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 29 07:29:23 crc kubenswrapper[4731]: E1129 07:29:23.361694 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e345272-7ce7-4a49-bac2-e85d0f9025cb" containerName="nova-api-api" Nov 29 07:29:23 crc kubenswrapper[4731]: I1129 07:29:23.361725 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e345272-7ce7-4a49-bac2-e85d0f9025cb" containerName="nova-api-api" Nov 29 07:29:23 crc kubenswrapper[4731]: E1129 07:29:23.361808 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e345272-7ce7-4a49-bac2-e85d0f9025cb" containerName="nova-api-log" Nov 29 07:29:23 crc kubenswrapper[4731]: I1129 07:29:23.361822 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e345272-7ce7-4a49-bac2-e85d0f9025cb" containerName="nova-api-log" Nov 29 07:29:23 crc kubenswrapper[4731]: I1129 
07:29:23.362150 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e345272-7ce7-4a49-bac2-e85d0f9025cb" containerName="nova-api-log" Nov 29 07:29:23 crc kubenswrapper[4731]: I1129 07:29:23.362187 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e345272-7ce7-4a49-bac2-e85d0f9025cb" containerName="nova-api-api" Nov 29 07:29:23 crc kubenswrapper[4731]: I1129 07:29:23.363615 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 29 07:29:23 crc kubenswrapper[4731]: I1129 07:29:23.374481 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 29 07:29:23 crc kubenswrapper[4731]: I1129 07:29:23.375939 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 29 07:29:23 crc kubenswrapper[4731]: I1129 07:29:23.376249 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Nov 29 07:29:23 crc kubenswrapper[4731]: I1129 07:29:23.376461 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Nov 29 07:29:23 crc kubenswrapper[4731]: I1129 07:29:23.429718 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1239e9f1-dce0-421c-abaf-bdd016b6cc2f-config-data\") pod \"nova-api-0\" (UID: \"1239e9f1-dce0-421c-abaf-bdd016b6cc2f\") " pod="openstack/nova-api-0" Nov 29 07:29:23 crc kubenswrapper[4731]: I1129 07:29:23.429818 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nr5h8\" (UniqueName: \"kubernetes.io/projected/1239e9f1-dce0-421c-abaf-bdd016b6cc2f-kube-api-access-nr5h8\") pod \"nova-api-0\" (UID: \"1239e9f1-dce0-421c-abaf-bdd016b6cc2f\") " pod="openstack/nova-api-0" Nov 29 07:29:23 crc kubenswrapper[4731]: I1129 07:29:23.429851 4731 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1239e9f1-dce0-421c-abaf-bdd016b6cc2f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1239e9f1-dce0-421c-abaf-bdd016b6cc2f\") " pod="openstack/nova-api-0" Nov 29 07:29:23 crc kubenswrapper[4731]: I1129 07:29:23.429898 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1239e9f1-dce0-421c-abaf-bdd016b6cc2f-internal-tls-certs\") pod \"nova-api-0\" (UID: \"1239e9f1-dce0-421c-abaf-bdd016b6cc2f\") " pod="openstack/nova-api-0" Nov 29 07:29:23 crc kubenswrapper[4731]: I1129 07:29:23.430068 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1239e9f1-dce0-421c-abaf-bdd016b6cc2f-logs\") pod \"nova-api-0\" (UID: \"1239e9f1-dce0-421c-abaf-bdd016b6cc2f\") " pod="openstack/nova-api-0" Nov 29 07:29:23 crc kubenswrapper[4731]: I1129 07:29:23.430097 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1239e9f1-dce0-421c-abaf-bdd016b6cc2f-public-tls-certs\") pod \"nova-api-0\" (UID: \"1239e9f1-dce0-421c-abaf-bdd016b6cc2f\") " pod="openstack/nova-api-0" Nov 29 07:29:23 crc kubenswrapper[4731]: I1129 07:29:23.532488 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1239e9f1-dce0-421c-abaf-bdd016b6cc2f-logs\") pod \"nova-api-0\" (UID: \"1239e9f1-dce0-421c-abaf-bdd016b6cc2f\") " pod="openstack/nova-api-0" Nov 29 07:29:23 crc kubenswrapper[4731]: I1129 07:29:23.532680 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/1239e9f1-dce0-421c-abaf-bdd016b6cc2f-public-tls-certs\") pod \"nova-api-0\" (UID: \"1239e9f1-dce0-421c-abaf-bdd016b6cc2f\") " pod="openstack/nova-api-0" Nov 29 07:29:23 crc kubenswrapper[4731]: I1129 07:29:23.532784 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1239e9f1-dce0-421c-abaf-bdd016b6cc2f-config-data\") pod \"nova-api-0\" (UID: \"1239e9f1-dce0-421c-abaf-bdd016b6cc2f\") " pod="openstack/nova-api-0" Nov 29 07:29:23 crc kubenswrapper[4731]: I1129 07:29:23.532887 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nr5h8\" (UniqueName: \"kubernetes.io/projected/1239e9f1-dce0-421c-abaf-bdd016b6cc2f-kube-api-access-nr5h8\") pod \"nova-api-0\" (UID: \"1239e9f1-dce0-421c-abaf-bdd016b6cc2f\") " pod="openstack/nova-api-0" Nov 29 07:29:23 crc kubenswrapper[4731]: I1129 07:29:23.532930 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1239e9f1-dce0-421c-abaf-bdd016b6cc2f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1239e9f1-dce0-421c-abaf-bdd016b6cc2f\") " pod="openstack/nova-api-0" Nov 29 07:29:23 crc kubenswrapper[4731]: I1129 07:29:23.532973 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1239e9f1-dce0-421c-abaf-bdd016b6cc2f-internal-tls-certs\") pod \"nova-api-0\" (UID: \"1239e9f1-dce0-421c-abaf-bdd016b6cc2f\") " pod="openstack/nova-api-0" Nov 29 07:29:23 crc kubenswrapper[4731]: I1129 07:29:23.534723 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1239e9f1-dce0-421c-abaf-bdd016b6cc2f-logs\") pod \"nova-api-0\" (UID: \"1239e9f1-dce0-421c-abaf-bdd016b6cc2f\") " pod="openstack/nova-api-0" Nov 29 07:29:23 crc kubenswrapper[4731]: I1129 07:29:23.538868 4731 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1239e9f1-dce0-421c-abaf-bdd016b6cc2f-internal-tls-certs\") pod \"nova-api-0\" (UID: \"1239e9f1-dce0-421c-abaf-bdd016b6cc2f\") " pod="openstack/nova-api-0" Nov 29 07:29:23 crc kubenswrapper[4731]: I1129 07:29:23.538872 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1239e9f1-dce0-421c-abaf-bdd016b6cc2f-public-tls-certs\") pod \"nova-api-0\" (UID: \"1239e9f1-dce0-421c-abaf-bdd016b6cc2f\") " pod="openstack/nova-api-0" Nov 29 07:29:23 crc kubenswrapper[4731]: I1129 07:29:23.540355 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1239e9f1-dce0-421c-abaf-bdd016b6cc2f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1239e9f1-dce0-421c-abaf-bdd016b6cc2f\") " pod="openstack/nova-api-0" Nov 29 07:29:23 crc kubenswrapper[4731]: I1129 07:29:23.542907 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1239e9f1-dce0-421c-abaf-bdd016b6cc2f-config-data\") pod \"nova-api-0\" (UID: \"1239e9f1-dce0-421c-abaf-bdd016b6cc2f\") " pod="openstack/nova-api-0" Nov 29 07:29:23 crc kubenswrapper[4731]: I1129 07:29:23.559255 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nr5h8\" (UniqueName: \"kubernetes.io/projected/1239e9f1-dce0-421c-abaf-bdd016b6cc2f-kube-api-access-nr5h8\") pod \"nova-api-0\" (UID: \"1239e9f1-dce0-421c-abaf-bdd016b6cc2f\") " pod="openstack/nova-api-0" Nov 29 07:29:23 crc kubenswrapper[4731]: I1129 07:29:23.705974 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 29 07:29:23 crc kubenswrapper[4731]: I1129 07:29:23.790369 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Nov 29 07:29:23 crc kubenswrapper[4731]: I1129 07:29:23.824304 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e345272-7ce7-4a49-bac2-e85d0f9025cb" path="/var/lib/kubelet/pods/8e345272-7ce7-4a49-bac2-e85d0f9025cb/volumes" Nov 29 07:29:23 crc kubenswrapper[4731]: I1129 07:29:23.828747 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Nov 29 07:29:24 crc kubenswrapper[4731]: I1129 07:29:24.223455 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 29 07:29:24 crc kubenswrapper[4731]: I1129 07:29:24.260773 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1239e9f1-dce0-421c-abaf-bdd016b6cc2f","Type":"ContainerStarted","Data":"8e01a3c9c6b6fcc5c7aad5546d30970a382cf45b45327c10e15c224d258bb909"} Nov 29 07:29:24 crc kubenswrapper[4731]: I1129 07:29:24.283462 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Nov 29 07:29:24 crc kubenswrapper[4731]: I1129 07:29:24.529285 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-ngzgp"] Nov 29 07:29:24 crc kubenswrapper[4731]: I1129 07:29:24.531147 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-ngzgp" Nov 29 07:29:24 crc kubenswrapper[4731]: I1129 07:29:24.540585 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Nov 29 07:29:24 crc kubenswrapper[4731]: I1129 07:29:24.540822 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Nov 29 07:29:24 crc kubenswrapper[4731]: I1129 07:29:24.542187 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-ngzgp"] Nov 29 07:29:24 crc kubenswrapper[4731]: I1129 07:29:24.705430 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f635248-2bce-4e96-8d9f-3afd345c442b-config-data\") pod \"nova-cell1-cell-mapping-ngzgp\" (UID: \"3f635248-2bce-4e96-8d9f-3afd345c442b\") " pod="openstack/nova-cell1-cell-mapping-ngzgp" Nov 29 07:29:24 crc kubenswrapper[4731]: I1129 07:29:24.705656 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f635248-2bce-4e96-8d9f-3afd345c442b-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-ngzgp\" (UID: \"3f635248-2bce-4e96-8d9f-3afd345c442b\") " pod="openstack/nova-cell1-cell-mapping-ngzgp" Nov 29 07:29:24 crc kubenswrapper[4731]: I1129 07:29:24.705708 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkrhl\" (UniqueName: \"kubernetes.io/projected/3f635248-2bce-4e96-8d9f-3afd345c442b-kube-api-access-bkrhl\") pod \"nova-cell1-cell-mapping-ngzgp\" (UID: \"3f635248-2bce-4e96-8d9f-3afd345c442b\") " pod="openstack/nova-cell1-cell-mapping-ngzgp" Nov 29 07:29:24 crc kubenswrapper[4731]: I1129 07:29:24.706004 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/3f635248-2bce-4e96-8d9f-3afd345c442b-scripts\") pod \"nova-cell1-cell-mapping-ngzgp\" (UID: \"3f635248-2bce-4e96-8d9f-3afd345c442b\") " pod="openstack/nova-cell1-cell-mapping-ngzgp" Nov 29 07:29:24 crc kubenswrapper[4731]: I1129 07:29:24.809620 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f635248-2bce-4e96-8d9f-3afd345c442b-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-ngzgp\" (UID: \"3f635248-2bce-4e96-8d9f-3afd345c442b\") " pod="openstack/nova-cell1-cell-mapping-ngzgp" Nov 29 07:29:24 crc kubenswrapper[4731]: I1129 07:29:24.809996 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkrhl\" (UniqueName: \"kubernetes.io/projected/3f635248-2bce-4e96-8d9f-3afd345c442b-kube-api-access-bkrhl\") pod \"nova-cell1-cell-mapping-ngzgp\" (UID: \"3f635248-2bce-4e96-8d9f-3afd345c442b\") " pod="openstack/nova-cell1-cell-mapping-ngzgp" Nov 29 07:29:24 crc kubenswrapper[4731]: I1129 07:29:24.810115 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f635248-2bce-4e96-8d9f-3afd345c442b-scripts\") pod \"nova-cell1-cell-mapping-ngzgp\" (UID: \"3f635248-2bce-4e96-8d9f-3afd345c442b\") " pod="openstack/nova-cell1-cell-mapping-ngzgp" Nov 29 07:29:24 crc kubenswrapper[4731]: I1129 07:29:24.810247 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f635248-2bce-4e96-8d9f-3afd345c442b-config-data\") pod \"nova-cell1-cell-mapping-ngzgp\" (UID: \"3f635248-2bce-4e96-8d9f-3afd345c442b\") " pod="openstack/nova-cell1-cell-mapping-ngzgp" Nov 29 07:29:24 crc kubenswrapper[4731]: I1129 07:29:24.815166 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/3f635248-2bce-4e96-8d9f-3afd345c442b-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-ngzgp\" (UID: \"3f635248-2bce-4e96-8d9f-3afd345c442b\") " pod="openstack/nova-cell1-cell-mapping-ngzgp" Nov 29 07:29:24 crc kubenswrapper[4731]: I1129 07:29:24.816768 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f635248-2bce-4e96-8d9f-3afd345c442b-scripts\") pod \"nova-cell1-cell-mapping-ngzgp\" (UID: \"3f635248-2bce-4e96-8d9f-3afd345c442b\") " pod="openstack/nova-cell1-cell-mapping-ngzgp" Nov 29 07:29:24 crc kubenswrapper[4731]: I1129 07:29:24.821817 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f635248-2bce-4e96-8d9f-3afd345c442b-config-data\") pod \"nova-cell1-cell-mapping-ngzgp\" (UID: \"3f635248-2bce-4e96-8d9f-3afd345c442b\") " pod="openstack/nova-cell1-cell-mapping-ngzgp" Nov 29 07:29:24 crc kubenswrapper[4731]: I1129 07:29:24.828140 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkrhl\" (UniqueName: \"kubernetes.io/projected/3f635248-2bce-4e96-8d9f-3afd345c442b-kube-api-access-bkrhl\") pod \"nova-cell1-cell-mapping-ngzgp\" (UID: \"3f635248-2bce-4e96-8d9f-3afd345c442b\") " pod="openstack/nova-cell1-cell-mapping-ngzgp" Nov 29 07:29:24 crc kubenswrapper[4731]: I1129 07:29:24.852820 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-ngzgp" Nov 29 07:29:25 crc kubenswrapper[4731]: I1129 07:29:25.274400 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1239e9f1-dce0-421c-abaf-bdd016b6cc2f","Type":"ContainerStarted","Data":"ccd46802c3c7a393ca2e89d76259d34340d17d9ca85f7873e86d8a062728cbf5"} Nov 29 07:29:25 crc kubenswrapper[4731]: I1129 07:29:25.274932 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1239e9f1-dce0-421c-abaf-bdd016b6cc2f","Type":"ContainerStarted","Data":"b4380389b6837953ced538c553ea23b2564e73417815e6c357383dbae8c87f20"} Nov 29 07:29:25 crc kubenswrapper[4731]: I1129 07:29:25.309319 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.30928454 podStartE2EDuration="2.30928454s" podCreationTimestamp="2025-11-29 07:29:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:29:25.301252461 +0000 UTC m=+1404.191613584" watchObservedRunningTime="2025-11-29 07:29:25.30928454 +0000 UTC m=+1404.199645643" Nov 29 07:29:25 crc kubenswrapper[4731]: W1129 07:29:25.356035 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3f635248_2bce_4e96_8d9f_3afd345c442b.slice/crio-7b9538ec8c1550fe6b9d69258e708c3f8cc1c926bea77c19a6f1d4a607f11664 WatchSource:0}: Error finding container 7b9538ec8c1550fe6b9d69258e708c3f8cc1c926bea77c19a6f1d4a607f11664: Status 404 returned error can't find the container with id 7b9538ec8c1550fe6b9d69258e708c3f8cc1c926bea77c19a6f1d4a607f11664 Nov 29 07:29:25 crc kubenswrapper[4731]: I1129 07:29:25.363901 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-ngzgp"] Nov 29 07:29:26 crc kubenswrapper[4731]: I1129 07:29:26.360590 4731 
generic.go:334] "Generic (PLEG): container finished" podID="12acd036-5b27-4f6c-82f2-9564eabc1906" containerID="9ddf9dded0077bd3aa6215cdc48416ab5ec6e907f2fe4672d8b003628a243f2d" exitCode=0 Nov 29 07:29:26 crc kubenswrapper[4731]: I1129 07:29:26.360968 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"12acd036-5b27-4f6c-82f2-9564eabc1906","Type":"ContainerDied","Data":"9ddf9dded0077bd3aa6215cdc48416ab5ec6e907f2fe4672d8b003628a243f2d"} Nov 29 07:29:26 crc kubenswrapper[4731]: I1129 07:29:26.369849 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-ngzgp" event={"ID":"3f635248-2bce-4e96-8d9f-3afd345c442b","Type":"ContainerStarted","Data":"d42cf0d674eec20dc17238aae814a510f2d61257f4d9a174010608163b951451"} Nov 29 07:29:26 crc kubenswrapper[4731]: I1129 07:29:26.369922 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-ngzgp" event={"ID":"3f635248-2bce-4e96-8d9f-3afd345c442b","Type":"ContainerStarted","Data":"7b9538ec8c1550fe6b9d69258e708c3f8cc1c926bea77c19a6f1d4a607f11664"} Nov 29 07:29:26 crc kubenswrapper[4731]: I1129 07:29:26.403524 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-ngzgp" podStartSLOduration=2.403494866 podStartE2EDuration="2.403494866s" podCreationTimestamp="2025-11-29 07:29:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:29:26.394325334 +0000 UTC m=+1405.284686437" watchObservedRunningTime="2025-11-29 07:29:26.403494866 +0000 UTC m=+1405.293855969" Nov 29 07:29:26 crc kubenswrapper[4731]: I1129 07:29:26.484545 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 29 07:29:26 crc kubenswrapper[4731]: I1129 07:29:26.656693 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/12acd036-5b27-4f6c-82f2-9564eabc1906-ceilometer-tls-certs\") pod \"12acd036-5b27-4f6c-82f2-9564eabc1906\" (UID: \"12acd036-5b27-4f6c-82f2-9564eabc1906\") " Nov 29 07:29:26 crc kubenswrapper[4731]: I1129 07:29:26.657828 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/12acd036-5b27-4f6c-82f2-9564eabc1906-log-httpd\") pod \"12acd036-5b27-4f6c-82f2-9564eabc1906\" (UID: \"12acd036-5b27-4f6c-82f2-9564eabc1906\") " Nov 29 07:29:26 crc kubenswrapper[4731]: I1129 07:29:26.657963 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12acd036-5b27-4f6c-82f2-9564eabc1906-config-data\") pod \"12acd036-5b27-4f6c-82f2-9564eabc1906\" (UID: \"12acd036-5b27-4f6c-82f2-9564eabc1906\") " Nov 29 07:29:26 crc kubenswrapper[4731]: I1129 07:29:26.658028 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/12acd036-5b27-4f6c-82f2-9564eabc1906-sg-core-conf-yaml\") pod \"12acd036-5b27-4f6c-82f2-9564eabc1906\" (UID: \"12acd036-5b27-4f6c-82f2-9564eabc1906\") " Nov 29 07:29:26 crc kubenswrapper[4731]: I1129 07:29:26.658160 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12acd036-5b27-4f6c-82f2-9564eabc1906-combined-ca-bundle\") pod \"12acd036-5b27-4f6c-82f2-9564eabc1906\" (UID: \"12acd036-5b27-4f6c-82f2-9564eabc1906\") " Nov 29 07:29:26 crc kubenswrapper[4731]: I1129 07:29:26.658330 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j9vpn\" 
(UniqueName: \"kubernetes.io/projected/12acd036-5b27-4f6c-82f2-9564eabc1906-kube-api-access-j9vpn\") pod \"12acd036-5b27-4f6c-82f2-9564eabc1906\" (UID: \"12acd036-5b27-4f6c-82f2-9564eabc1906\") " Nov 29 07:29:26 crc kubenswrapper[4731]: I1129 07:29:26.658352 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12acd036-5b27-4f6c-82f2-9564eabc1906-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "12acd036-5b27-4f6c-82f2-9564eabc1906" (UID: "12acd036-5b27-4f6c-82f2-9564eabc1906"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:29:26 crc kubenswrapper[4731]: I1129 07:29:26.658418 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/12acd036-5b27-4f6c-82f2-9564eabc1906-run-httpd\") pod \"12acd036-5b27-4f6c-82f2-9564eabc1906\" (UID: \"12acd036-5b27-4f6c-82f2-9564eabc1906\") " Nov 29 07:29:26 crc kubenswrapper[4731]: I1129 07:29:26.658472 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/12acd036-5b27-4f6c-82f2-9564eabc1906-scripts\") pod \"12acd036-5b27-4f6c-82f2-9564eabc1906\" (UID: \"12acd036-5b27-4f6c-82f2-9564eabc1906\") " Nov 29 07:29:26 crc kubenswrapper[4731]: I1129 07:29:26.658842 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12acd036-5b27-4f6c-82f2-9564eabc1906-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "12acd036-5b27-4f6c-82f2-9564eabc1906" (UID: "12acd036-5b27-4f6c-82f2-9564eabc1906"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:29:26 crc kubenswrapper[4731]: I1129 07:29:26.660014 4731 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/12acd036-5b27-4f6c-82f2-9564eabc1906-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 29 07:29:26 crc kubenswrapper[4731]: I1129 07:29:26.660040 4731 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/12acd036-5b27-4f6c-82f2-9564eabc1906-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 29 07:29:26 crc kubenswrapper[4731]: I1129 07:29:26.676941 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12acd036-5b27-4f6c-82f2-9564eabc1906-scripts" (OuterVolumeSpecName: "scripts") pod "12acd036-5b27-4f6c-82f2-9564eabc1906" (UID: "12acd036-5b27-4f6c-82f2-9564eabc1906"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:29:26 crc kubenswrapper[4731]: I1129 07:29:26.677060 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12acd036-5b27-4f6c-82f2-9564eabc1906-kube-api-access-j9vpn" (OuterVolumeSpecName: "kube-api-access-j9vpn") pod "12acd036-5b27-4f6c-82f2-9564eabc1906" (UID: "12acd036-5b27-4f6c-82f2-9564eabc1906"). InnerVolumeSpecName "kube-api-access-j9vpn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:29:26 crc kubenswrapper[4731]: I1129 07:29:26.711399 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12acd036-5b27-4f6c-82f2-9564eabc1906-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "12acd036-5b27-4f6c-82f2-9564eabc1906" (UID: "12acd036-5b27-4f6c-82f2-9564eabc1906"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:29:26 crc kubenswrapper[4731]: I1129 07:29:26.755530 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12acd036-5b27-4f6c-82f2-9564eabc1906-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "12acd036-5b27-4f6c-82f2-9564eabc1906" (UID: "12acd036-5b27-4f6c-82f2-9564eabc1906"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:29:26 crc kubenswrapper[4731]: I1129 07:29:26.756556 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12acd036-5b27-4f6c-82f2-9564eabc1906-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "12acd036-5b27-4f6c-82f2-9564eabc1906" (UID: "12acd036-5b27-4f6c-82f2-9564eabc1906"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:29:26 crc kubenswrapper[4731]: I1129 07:29:26.766871 4731 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12acd036-5b27-4f6c-82f2-9564eabc1906-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:29:26 crc kubenswrapper[4731]: I1129 07:29:26.766934 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j9vpn\" (UniqueName: \"kubernetes.io/projected/12acd036-5b27-4f6c-82f2-9564eabc1906-kube-api-access-j9vpn\") on node \"crc\" DevicePath \"\"" Nov 29 07:29:26 crc kubenswrapper[4731]: I1129 07:29:26.766953 4731 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/12acd036-5b27-4f6c-82f2-9564eabc1906-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:29:26 crc kubenswrapper[4731]: I1129 07:29:26.766968 4731 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/12acd036-5b27-4f6c-82f2-9564eabc1906-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 29 07:29:26 crc kubenswrapper[4731]: I1129 07:29:26.766982 4731 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/12acd036-5b27-4f6c-82f2-9564eabc1906-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 29 07:29:26 crc kubenswrapper[4731]: I1129 07:29:26.811255 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12acd036-5b27-4f6c-82f2-9564eabc1906-config-data" (OuterVolumeSpecName: "config-data") pod "12acd036-5b27-4f6c-82f2-9564eabc1906" (UID: "12acd036-5b27-4f6c-82f2-9564eabc1906"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:29:26 crc kubenswrapper[4731]: I1129 07:29:26.870389 4731 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12acd036-5b27-4f6c-82f2-9564eabc1906-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:29:26 crc kubenswrapper[4731]: I1129 07:29:26.878865 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-cd5cbd7b9-wspqf" Nov 29 07:29:26 crc kubenswrapper[4731]: I1129 07:29:26.980063 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-ttpdp"] Nov 29 07:29:26 crc kubenswrapper[4731]: I1129 07:29:26.980466 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-bccf8f775-ttpdp" podUID="d118e0e2-213b-451a-9de7-0e3af1d1bc1a" containerName="dnsmasq-dns" containerID="cri-o://3aca0a39151d985e6a4766b40fc87d9962833f53c34f211489d17aeef3dd42bf" gracePeriod=10 Nov 29 07:29:27 crc kubenswrapper[4731]: I1129 07:29:27.390574 4731 generic.go:334] "Generic (PLEG): container finished" podID="d118e0e2-213b-451a-9de7-0e3af1d1bc1a" 
containerID="3aca0a39151d985e6a4766b40fc87d9962833f53c34f211489d17aeef3dd42bf" exitCode=0 Nov 29 07:29:27 crc kubenswrapper[4731]: I1129 07:29:27.390606 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-ttpdp" event={"ID":"d118e0e2-213b-451a-9de7-0e3af1d1bc1a","Type":"ContainerDied","Data":"3aca0a39151d985e6a4766b40fc87d9962833f53c34f211489d17aeef3dd42bf"} Nov 29 07:29:27 crc kubenswrapper[4731]: I1129 07:29:27.395279 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 29 07:29:27 crc kubenswrapper[4731]: I1129 07:29:27.397909 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"12acd036-5b27-4f6c-82f2-9564eabc1906","Type":"ContainerDied","Data":"dc61490befda3fa2d8d2eb1fae09d294267b4a3b0701c4270ebe80a72b48ea85"} Nov 29 07:29:27 crc kubenswrapper[4731]: I1129 07:29:27.397978 4731 scope.go:117] "RemoveContainer" containerID="2841dca3c7d5a640ed6bebd2970c5cd584b8303fb1440dae7140cf3a9a23a460" Nov 29 07:29:27 crc kubenswrapper[4731]: I1129 07:29:27.460754 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:29:27 crc kubenswrapper[4731]: I1129 07:29:27.464399 4731 scope.go:117] "RemoveContainer" containerID="b30ca5e9c96d22ec0fcc5680995f3a27b2f92f6731bf66980e60874bea9a4ed6" Nov 29 07:29:27 crc kubenswrapper[4731]: I1129 07:29:27.495230 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:29:27 crc kubenswrapper[4731]: I1129 07:29:27.518740 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:29:27 crc kubenswrapper[4731]: E1129 07:29:27.519684 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12acd036-5b27-4f6c-82f2-9564eabc1906" containerName="ceilometer-notification-agent" Nov 29 07:29:27 crc kubenswrapper[4731]: I1129 07:29:27.519782 4731 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="12acd036-5b27-4f6c-82f2-9564eabc1906" containerName="ceilometer-notification-agent" Nov 29 07:29:27 crc kubenswrapper[4731]: E1129 07:29:27.519882 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12acd036-5b27-4f6c-82f2-9564eabc1906" containerName="sg-core" Nov 29 07:29:27 crc kubenswrapper[4731]: I1129 07:29:27.519972 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="12acd036-5b27-4f6c-82f2-9564eabc1906" containerName="sg-core" Nov 29 07:29:27 crc kubenswrapper[4731]: E1129 07:29:27.520053 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12acd036-5b27-4f6c-82f2-9564eabc1906" containerName="ceilometer-central-agent" Nov 29 07:29:27 crc kubenswrapper[4731]: I1129 07:29:27.520125 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="12acd036-5b27-4f6c-82f2-9564eabc1906" containerName="ceilometer-central-agent" Nov 29 07:29:27 crc kubenswrapper[4731]: E1129 07:29:27.520215 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12acd036-5b27-4f6c-82f2-9564eabc1906" containerName="proxy-httpd" Nov 29 07:29:27 crc kubenswrapper[4731]: I1129 07:29:27.520280 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="12acd036-5b27-4f6c-82f2-9564eabc1906" containerName="proxy-httpd" Nov 29 07:29:27 crc kubenswrapper[4731]: I1129 07:29:27.520640 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="12acd036-5b27-4f6c-82f2-9564eabc1906" containerName="proxy-httpd" Nov 29 07:29:27 crc kubenswrapper[4731]: I1129 07:29:27.520730 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="12acd036-5b27-4f6c-82f2-9564eabc1906" containerName="ceilometer-notification-agent" Nov 29 07:29:27 crc kubenswrapper[4731]: I1129 07:29:27.520827 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="12acd036-5b27-4f6c-82f2-9564eabc1906" containerName="sg-core" Nov 29 07:29:27 crc kubenswrapper[4731]: I1129 07:29:27.520930 4731 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="12acd036-5b27-4f6c-82f2-9564eabc1906" containerName="ceilometer-central-agent" Nov 29 07:29:27 crc kubenswrapper[4731]: I1129 07:29:27.528437 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 29 07:29:27 crc kubenswrapper[4731]: I1129 07:29:27.538622 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 29 07:29:27 crc kubenswrapper[4731]: I1129 07:29:27.538879 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 29 07:29:27 crc kubenswrapper[4731]: I1129 07:29:27.539151 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 29 07:29:27 crc kubenswrapper[4731]: I1129 07:29:27.542816 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:29:27 crc kubenswrapper[4731]: I1129 07:29:27.580881 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-ttpdp" Nov 29 07:29:27 crc kubenswrapper[4731]: I1129 07:29:27.598450 4731 scope.go:117] "RemoveContainer" containerID="9ddf9dded0077bd3aa6215cdc48416ab5ec6e907f2fe4672d8b003628a243f2d" Nov 29 07:29:27 crc kubenswrapper[4731]: I1129 07:29:27.637644 4731 scope.go:117] "RemoveContainer" containerID="ba0ea076c607d6576394ee0e54dc13306f81b368177f249a8a9e4cfb251a2d68" Nov 29 07:29:27 crc kubenswrapper[4731]: I1129 07:29:27.692943 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d118e0e2-213b-451a-9de7-0e3af1d1bc1a-dns-swift-storage-0\") pod \"d118e0e2-213b-451a-9de7-0e3af1d1bc1a\" (UID: \"d118e0e2-213b-451a-9de7-0e3af1d1bc1a\") " Nov 29 07:29:27 crc kubenswrapper[4731]: I1129 07:29:27.693025 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d118e0e2-213b-451a-9de7-0e3af1d1bc1a-config\") pod \"d118e0e2-213b-451a-9de7-0e3af1d1bc1a\" (UID: \"d118e0e2-213b-451a-9de7-0e3af1d1bc1a\") " Nov 29 07:29:27 crc kubenswrapper[4731]: I1129 07:29:27.693058 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d118e0e2-213b-451a-9de7-0e3af1d1bc1a-ovsdbserver-sb\") pod \"d118e0e2-213b-451a-9de7-0e3af1d1bc1a\" (UID: \"d118e0e2-213b-451a-9de7-0e3af1d1bc1a\") " Nov 29 07:29:27 crc kubenswrapper[4731]: I1129 07:29:27.693089 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c4x5r\" (UniqueName: \"kubernetes.io/projected/d118e0e2-213b-451a-9de7-0e3af1d1bc1a-kube-api-access-c4x5r\") pod \"d118e0e2-213b-451a-9de7-0e3af1d1bc1a\" (UID: \"d118e0e2-213b-451a-9de7-0e3af1d1bc1a\") " Nov 29 07:29:27 crc kubenswrapper[4731]: I1129 07:29:27.693135 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d118e0e2-213b-451a-9de7-0e3af1d1bc1a-ovsdbserver-nb\") pod \"d118e0e2-213b-451a-9de7-0e3af1d1bc1a\" (UID: \"d118e0e2-213b-451a-9de7-0e3af1d1bc1a\") " Nov 29 07:29:27 crc kubenswrapper[4731]: I1129 07:29:27.693258 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d118e0e2-213b-451a-9de7-0e3af1d1bc1a-dns-svc\") pod \"d118e0e2-213b-451a-9de7-0e3af1d1bc1a\" (UID: \"d118e0e2-213b-451a-9de7-0e3af1d1bc1a\") " Nov 29 07:29:27 crc kubenswrapper[4731]: I1129 07:29:27.693673 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6556ead0-9306-43c4-bf74-52f688285fd5-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"6556ead0-9306-43c4-bf74-52f688285fd5\") " pod="openstack/ceilometer-0" Nov 29 07:29:27 crc kubenswrapper[4731]: I1129 07:29:27.693725 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6556ead0-9306-43c4-bf74-52f688285fd5-scripts\") pod \"ceilometer-0\" (UID: \"6556ead0-9306-43c4-bf74-52f688285fd5\") " pod="openstack/ceilometer-0" Nov 29 07:29:27 crc kubenswrapper[4731]: I1129 07:29:27.693763 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6556ead0-9306-43c4-bf74-52f688285fd5-config-data\") pod \"ceilometer-0\" (UID: \"6556ead0-9306-43c4-bf74-52f688285fd5\") " pod="openstack/ceilometer-0" Nov 29 07:29:27 crc kubenswrapper[4731]: I1129 07:29:27.693787 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6556ead0-9306-43c4-bf74-52f688285fd5-run-httpd\") pod \"ceilometer-0\" (UID: \"6556ead0-9306-43c4-bf74-52f688285fd5\") " 
pod="openstack/ceilometer-0" Nov 29 07:29:27 crc kubenswrapper[4731]: I1129 07:29:27.693817 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljclf\" (UniqueName: \"kubernetes.io/projected/6556ead0-9306-43c4-bf74-52f688285fd5-kube-api-access-ljclf\") pod \"ceilometer-0\" (UID: \"6556ead0-9306-43c4-bf74-52f688285fd5\") " pod="openstack/ceilometer-0" Nov 29 07:29:27 crc kubenswrapper[4731]: I1129 07:29:27.693840 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6556ead0-9306-43c4-bf74-52f688285fd5-log-httpd\") pod \"ceilometer-0\" (UID: \"6556ead0-9306-43c4-bf74-52f688285fd5\") " pod="openstack/ceilometer-0" Nov 29 07:29:27 crc kubenswrapper[4731]: I1129 07:29:27.693869 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6556ead0-9306-43c4-bf74-52f688285fd5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6556ead0-9306-43c4-bf74-52f688285fd5\") " pod="openstack/ceilometer-0" Nov 29 07:29:27 crc kubenswrapper[4731]: I1129 07:29:27.693887 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6556ead0-9306-43c4-bf74-52f688285fd5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6556ead0-9306-43c4-bf74-52f688285fd5\") " pod="openstack/ceilometer-0" Nov 29 07:29:27 crc kubenswrapper[4731]: I1129 07:29:27.699820 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d118e0e2-213b-451a-9de7-0e3af1d1bc1a-kube-api-access-c4x5r" (OuterVolumeSpecName: "kube-api-access-c4x5r") pod "d118e0e2-213b-451a-9de7-0e3af1d1bc1a" (UID: "d118e0e2-213b-451a-9de7-0e3af1d1bc1a"). InnerVolumeSpecName "kube-api-access-c4x5r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:29:27 crc kubenswrapper[4731]: I1129 07:29:27.758794 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d118e0e2-213b-451a-9de7-0e3af1d1bc1a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d118e0e2-213b-451a-9de7-0e3af1d1bc1a" (UID: "d118e0e2-213b-451a-9de7-0e3af1d1bc1a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:29:27 crc kubenswrapper[4731]: I1129 07:29:27.762511 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d118e0e2-213b-451a-9de7-0e3af1d1bc1a-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d118e0e2-213b-451a-9de7-0e3af1d1bc1a" (UID: "d118e0e2-213b-451a-9de7-0e3af1d1bc1a"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:29:27 crc kubenswrapper[4731]: I1129 07:29:27.767362 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d118e0e2-213b-451a-9de7-0e3af1d1bc1a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d118e0e2-213b-451a-9de7-0e3af1d1bc1a" (UID: "d118e0e2-213b-451a-9de7-0e3af1d1bc1a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:29:27 crc kubenswrapper[4731]: I1129 07:29:27.775897 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d118e0e2-213b-451a-9de7-0e3af1d1bc1a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d118e0e2-213b-451a-9de7-0e3af1d1bc1a" (UID: "d118e0e2-213b-451a-9de7-0e3af1d1bc1a"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:29:27 crc kubenswrapper[4731]: I1129 07:29:27.784380 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d118e0e2-213b-451a-9de7-0e3af1d1bc1a-config" (OuterVolumeSpecName: "config") pod "d118e0e2-213b-451a-9de7-0e3af1d1bc1a" (UID: "d118e0e2-213b-451a-9de7-0e3af1d1bc1a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:29:27 crc kubenswrapper[4731]: I1129 07:29:27.796463 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6556ead0-9306-43c4-bf74-52f688285fd5-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"6556ead0-9306-43c4-bf74-52f688285fd5\") " pod="openstack/ceilometer-0" Nov 29 07:29:27 crc kubenswrapper[4731]: I1129 07:29:27.796542 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6556ead0-9306-43c4-bf74-52f688285fd5-scripts\") pod \"ceilometer-0\" (UID: \"6556ead0-9306-43c4-bf74-52f688285fd5\") " pod="openstack/ceilometer-0" Nov 29 07:29:27 crc kubenswrapper[4731]: I1129 07:29:27.796601 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6556ead0-9306-43c4-bf74-52f688285fd5-config-data\") pod \"ceilometer-0\" (UID: \"6556ead0-9306-43c4-bf74-52f688285fd5\") " pod="openstack/ceilometer-0" Nov 29 07:29:27 crc kubenswrapper[4731]: I1129 07:29:27.796631 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6556ead0-9306-43c4-bf74-52f688285fd5-run-httpd\") pod \"ceilometer-0\" (UID: \"6556ead0-9306-43c4-bf74-52f688285fd5\") " pod="openstack/ceilometer-0" Nov 29 07:29:27 crc kubenswrapper[4731]: I1129 07:29:27.796667 4731 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-ljclf\" (UniqueName: \"kubernetes.io/projected/6556ead0-9306-43c4-bf74-52f688285fd5-kube-api-access-ljclf\") pod \"ceilometer-0\" (UID: \"6556ead0-9306-43c4-bf74-52f688285fd5\") " pod="openstack/ceilometer-0" Nov 29 07:29:27 crc kubenswrapper[4731]: I1129 07:29:27.796687 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6556ead0-9306-43c4-bf74-52f688285fd5-log-httpd\") pod \"ceilometer-0\" (UID: \"6556ead0-9306-43c4-bf74-52f688285fd5\") " pod="openstack/ceilometer-0" Nov 29 07:29:27 crc kubenswrapper[4731]: I1129 07:29:27.796715 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6556ead0-9306-43c4-bf74-52f688285fd5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6556ead0-9306-43c4-bf74-52f688285fd5\") " pod="openstack/ceilometer-0" Nov 29 07:29:27 crc kubenswrapper[4731]: I1129 07:29:27.796737 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6556ead0-9306-43c4-bf74-52f688285fd5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6556ead0-9306-43c4-bf74-52f688285fd5\") " pod="openstack/ceilometer-0" Nov 29 07:29:27 crc kubenswrapper[4731]: I1129 07:29:27.796928 4731 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d118e0e2-213b-451a-9de7-0e3af1d1bc1a-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 29 07:29:27 crc kubenswrapper[4731]: I1129 07:29:27.796941 4731 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d118e0e2-213b-451a-9de7-0e3af1d1bc1a-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 29 07:29:27 crc kubenswrapper[4731]: I1129 07:29:27.796954 4731 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/d118e0e2-213b-451a-9de7-0e3af1d1bc1a-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:29:27 crc kubenswrapper[4731]: I1129 07:29:27.796963 4731 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d118e0e2-213b-451a-9de7-0e3af1d1bc1a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 29 07:29:27 crc kubenswrapper[4731]: I1129 07:29:27.796972 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c4x5r\" (UniqueName: \"kubernetes.io/projected/d118e0e2-213b-451a-9de7-0e3af1d1bc1a-kube-api-access-c4x5r\") on node \"crc\" DevicePath \"\"" Nov 29 07:29:27 crc kubenswrapper[4731]: I1129 07:29:27.796981 4731 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d118e0e2-213b-451a-9de7-0e3af1d1bc1a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 29 07:29:27 crc kubenswrapper[4731]: I1129 07:29:27.798215 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6556ead0-9306-43c4-bf74-52f688285fd5-run-httpd\") pod \"ceilometer-0\" (UID: \"6556ead0-9306-43c4-bf74-52f688285fd5\") " pod="openstack/ceilometer-0" Nov 29 07:29:27 crc kubenswrapper[4731]: I1129 07:29:27.798483 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6556ead0-9306-43c4-bf74-52f688285fd5-log-httpd\") pod \"ceilometer-0\" (UID: \"6556ead0-9306-43c4-bf74-52f688285fd5\") " pod="openstack/ceilometer-0" Nov 29 07:29:27 crc kubenswrapper[4731]: I1129 07:29:27.805289 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6556ead0-9306-43c4-bf74-52f688285fd5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6556ead0-9306-43c4-bf74-52f688285fd5\") " pod="openstack/ceilometer-0" Nov 29 07:29:27 crc kubenswrapper[4731]: 
I1129 07:29:27.805286 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6556ead0-9306-43c4-bf74-52f688285fd5-scripts\") pod \"ceilometer-0\" (UID: \"6556ead0-9306-43c4-bf74-52f688285fd5\") " pod="openstack/ceilometer-0" Nov 29 07:29:27 crc kubenswrapper[4731]: I1129 07:29:27.805540 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6556ead0-9306-43c4-bf74-52f688285fd5-config-data\") pod \"ceilometer-0\" (UID: \"6556ead0-9306-43c4-bf74-52f688285fd5\") " pod="openstack/ceilometer-0" Nov 29 07:29:27 crc kubenswrapper[4731]: I1129 07:29:27.805680 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6556ead0-9306-43c4-bf74-52f688285fd5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6556ead0-9306-43c4-bf74-52f688285fd5\") " pod="openstack/ceilometer-0" Nov 29 07:29:27 crc kubenswrapper[4731]: I1129 07:29:27.806013 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6556ead0-9306-43c4-bf74-52f688285fd5-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"6556ead0-9306-43c4-bf74-52f688285fd5\") " pod="openstack/ceilometer-0" Nov 29 07:29:27 crc kubenswrapper[4731]: I1129 07:29:27.821939 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljclf\" (UniqueName: \"kubernetes.io/projected/6556ead0-9306-43c4-bf74-52f688285fd5-kube-api-access-ljclf\") pod \"ceilometer-0\" (UID: \"6556ead0-9306-43c4-bf74-52f688285fd5\") " pod="openstack/ceilometer-0" Nov 29 07:29:27 crc kubenswrapper[4731]: I1129 07:29:27.841945 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12acd036-5b27-4f6c-82f2-9564eabc1906" path="/var/lib/kubelet/pods/12acd036-5b27-4f6c-82f2-9564eabc1906/volumes" Nov 29 07:29:27 crc kubenswrapper[4731]: 
I1129 07:29:27.893788 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 29 07:29:28 crc kubenswrapper[4731]: I1129 07:29:28.408764 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-ttpdp" event={"ID":"d118e0e2-213b-451a-9de7-0e3af1d1bc1a","Type":"ContainerDied","Data":"17d5805bcbe609f0d40b5e1a09246b6369a84f78bc90037eec5020b50aaf2068"} Nov 29 07:29:28 crc kubenswrapper[4731]: I1129 07:29:28.410291 4731 scope.go:117] "RemoveContainer" containerID="3aca0a39151d985e6a4766b40fc87d9962833f53c34f211489d17aeef3dd42bf" Nov 29 07:29:28 crc kubenswrapper[4731]: I1129 07:29:28.411042 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-ttpdp" Nov 29 07:29:28 crc kubenswrapper[4731]: I1129 07:29:28.452090 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-ttpdp"] Nov 29 07:29:28 crc kubenswrapper[4731]: I1129 07:29:28.454247 4731 scope.go:117] "RemoveContainer" containerID="7c4850579a2b51d41c122bf876eed5de47b2fd2404bfd62ff31a45e878998c2c" Nov 29 07:29:28 crc kubenswrapper[4731]: I1129 07:29:28.464480 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-ttpdp"] Nov 29 07:29:28 crc kubenswrapper[4731]: W1129 07:29:28.526337 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6556ead0_9306_43c4_bf74_52f688285fd5.slice/crio-bb2cd71f24e8e2426c533ecb4130692bd0ef2c93c5ed3f89fed9c8fc599a5587 WatchSource:0}: Error finding container bb2cd71f24e8e2426c533ecb4130692bd0ef2c93c5ed3f89fed9c8fc599a5587: Status 404 returned error can't find the container with id bb2cd71f24e8e2426c533ecb4130692bd0ef2c93c5ed3f89fed9c8fc599a5587 Nov 29 07:29:28 crc kubenswrapper[4731]: I1129 07:29:28.526443 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] 
Nov 29 07:29:28 crc kubenswrapper[4731]: I1129 07:29:28.530607 4731 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 29 07:29:29 crc kubenswrapper[4731]: I1129 07:29:29.433145 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6556ead0-9306-43c4-bf74-52f688285fd5","Type":"ContainerStarted","Data":"bb2cd71f24e8e2426c533ecb4130692bd0ef2c93c5ed3f89fed9c8fc599a5587"} Nov 29 07:29:29 crc kubenswrapper[4731]: I1129 07:29:29.822045 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d118e0e2-213b-451a-9de7-0e3af1d1bc1a" path="/var/lib/kubelet/pods/d118e0e2-213b-451a-9de7-0e3af1d1bc1a/volumes" Nov 29 07:29:30 crc kubenswrapper[4731]: I1129 07:29:30.450251 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6556ead0-9306-43c4-bf74-52f688285fd5","Type":"ContainerStarted","Data":"8f017306ee0a9502b1366e23287d99cf91e10eafcc4e43df582c2df35d7fc004"} Nov 29 07:29:31 crc kubenswrapper[4731]: I1129 07:29:31.499106 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6556ead0-9306-43c4-bf74-52f688285fd5","Type":"ContainerStarted","Data":"38d3a196d0d103b4d589918210e5680be33d1499a15cc9cbdabc1f07f79c6170"} Nov 29 07:29:31 crc kubenswrapper[4731]: I1129 07:29:31.501137 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6556ead0-9306-43c4-bf74-52f688285fd5","Type":"ContainerStarted","Data":"5674dbddb1a362aacaf291de451b27d5e7fdc3d39b0787de446431ab0e4cb952"} Nov 29 07:29:32 crc kubenswrapper[4731]: I1129 07:29:32.513582 4731 generic.go:334] "Generic (PLEG): container finished" podID="3f635248-2bce-4e96-8d9f-3afd345c442b" containerID="d42cf0d674eec20dc17238aae814a510f2d61257f4d9a174010608163b951451" exitCode=0 Nov 29 07:29:32 crc kubenswrapper[4731]: I1129 07:29:32.513703 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell1-cell-mapping-ngzgp" event={"ID":"3f635248-2bce-4e96-8d9f-3afd345c442b","Type":"ContainerDied","Data":"d42cf0d674eec20dc17238aae814a510f2d61257f4d9a174010608163b951451"} Nov 29 07:29:33 crc kubenswrapper[4731]: I1129 07:29:33.539684 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6556ead0-9306-43c4-bf74-52f688285fd5","Type":"ContainerStarted","Data":"be86f8af9bc84af1f55857194cb006a3ed99f521b14d018522de52c99e018d62"} Nov 29 07:29:33 crc kubenswrapper[4731]: I1129 07:29:33.540202 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 29 07:29:33 crc kubenswrapper[4731]: I1129 07:29:33.567603 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.468904812 podStartE2EDuration="6.567581451s" podCreationTimestamp="2025-11-29 07:29:27 +0000 UTC" firstStartedPulling="2025-11-29 07:29:28.53028774 +0000 UTC m=+1407.420648843" lastFinishedPulling="2025-11-29 07:29:32.628964379 +0000 UTC m=+1411.519325482" observedRunningTime="2025-11-29 07:29:33.564106021 +0000 UTC m=+1412.454467124" watchObservedRunningTime="2025-11-29 07:29:33.567581451 +0000 UTC m=+1412.457942554" Nov 29 07:29:33 crc kubenswrapper[4731]: I1129 07:29:33.707673 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 29 07:29:33 crc kubenswrapper[4731]: I1129 07:29:33.707741 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 29 07:29:34 crc kubenswrapper[4731]: I1129 07:29:34.073802 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-ngzgp" Nov 29 07:29:34 crc kubenswrapper[4731]: I1129 07:29:34.187211 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f635248-2bce-4e96-8d9f-3afd345c442b-combined-ca-bundle\") pod \"3f635248-2bce-4e96-8d9f-3afd345c442b\" (UID: \"3f635248-2bce-4e96-8d9f-3afd345c442b\") " Nov 29 07:29:34 crc kubenswrapper[4731]: I1129 07:29:34.187327 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bkrhl\" (UniqueName: \"kubernetes.io/projected/3f635248-2bce-4e96-8d9f-3afd345c442b-kube-api-access-bkrhl\") pod \"3f635248-2bce-4e96-8d9f-3afd345c442b\" (UID: \"3f635248-2bce-4e96-8d9f-3afd345c442b\") " Nov 29 07:29:34 crc kubenswrapper[4731]: I1129 07:29:34.187391 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f635248-2bce-4e96-8d9f-3afd345c442b-scripts\") pod \"3f635248-2bce-4e96-8d9f-3afd345c442b\" (UID: \"3f635248-2bce-4e96-8d9f-3afd345c442b\") " Nov 29 07:29:34 crc kubenswrapper[4731]: I1129 07:29:34.187444 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f635248-2bce-4e96-8d9f-3afd345c442b-config-data\") pod \"3f635248-2bce-4e96-8d9f-3afd345c442b\" (UID: \"3f635248-2bce-4e96-8d9f-3afd345c442b\") " Nov 29 07:29:34 crc kubenswrapper[4731]: I1129 07:29:34.195409 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f635248-2bce-4e96-8d9f-3afd345c442b-scripts" (OuterVolumeSpecName: "scripts") pod "3f635248-2bce-4e96-8d9f-3afd345c442b" (UID: "3f635248-2bce-4e96-8d9f-3afd345c442b"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:29:34 crc kubenswrapper[4731]: I1129 07:29:34.196173 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f635248-2bce-4e96-8d9f-3afd345c442b-kube-api-access-bkrhl" (OuterVolumeSpecName: "kube-api-access-bkrhl") pod "3f635248-2bce-4e96-8d9f-3afd345c442b" (UID: "3f635248-2bce-4e96-8d9f-3afd345c442b"). InnerVolumeSpecName "kube-api-access-bkrhl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:29:34 crc kubenswrapper[4731]: I1129 07:29:34.220366 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f635248-2bce-4e96-8d9f-3afd345c442b-config-data" (OuterVolumeSpecName: "config-data") pod "3f635248-2bce-4e96-8d9f-3afd345c442b" (UID: "3f635248-2bce-4e96-8d9f-3afd345c442b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:29:34 crc kubenswrapper[4731]: I1129 07:29:34.230887 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f635248-2bce-4e96-8d9f-3afd345c442b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3f635248-2bce-4e96-8d9f-3afd345c442b" (UID: "3f635248-2bce-4e96-8d9f-3afd345c442b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:29:34 crc kubenswrapper[4731]: I1129 07:29:34.290330 4731 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f635248-2bce-4e96-8d9f-3afd345c442b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:29:34 crc kubenswrapper[4731]: I1129 07:29:34.290380 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bkrhl\" (UniqueName: \"kubernetes.io/projected/3f635248-2bce-4e96-8d9f-3afd345c442b-kube-api-access-bkrhl\") on node \"crc\" DevicePath \"\"" Nov 29 07:29:34 crc kubenswrapper[4731]: I1129 07:29:34.290399 4731 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f635248-2bce-4e96-8d9f-3afd345c442b-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:29:34 crc kubenswrapper[4731]: I1129 07:29:34.290413 4731 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f635248-2bce-4e96-8d9f-3afd345c442b-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:29:34 crc kubenswrapper[4731]: I1129 07:29:34.550451 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-ngzgp" event={"ID":"3f635248-2bce-4e96-8d9f-3afd345c442b","Type":"ContainerDied","Data":"7b9538ec8c1550fe6b9d69258e708c3f8cc1c926bea77c19a6f1d4a607f11664"} Nov 29 07:29:34 crc kubenswrapper[4731]: I1129 07:29:34.550511 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7b9538ec8c1550fe6b9d69258e708c3f8cc1c926bea77c19a6f1d4a607f11664" Nov 29 07:29:34 crc kubenswrapper[4731]: I1129 07:29:34.550556 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-ngzgp" Nov 29 07:29:34 crc kubenswrapper[4731]: I1129 07:29:34.733019 4731 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="1239e9f1-dce0-421c-abaf-bdd016b6cc2f" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.199:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 29 07:29:34 crc kubenswrapper[4731]: I1129 07:29:34.733085 4731 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="1239e9f1-dce0-421c-abaf-bdd016b6cc2f" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.199:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 29 07:29:34 crc kubenswrapper[4731]: I1129 07:29:34.753157 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 29 07:29:34 crc kubenswrapper[4731]: I1129 07:29:34.753506 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="1239e9f1-dce0-421c-abaf-bdd016b6cc2f" containerName="nova-api-log" containerID="cri-o://b4380389b6837953ced538c553ea23b2564e73417815e6c357383dbae8c87f20" gracePeriod=30 Nov 29 07:29:34 crc kubenswrapper[4731]: I1129 07:29:34.754179 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="1239e9f1-dce0-421c-abaf-bdd016b6cc2f" containerName="nova-api-api" containerID="cri-o://ccd46802c3c7a393ca2e89d76259d34340d17d9ca85f7873e86d8a062728cbf5" gracePeriod=30 Nov 29 07:29:34 crc kubenswrapper[4731]: I1129 07:29:34.778213 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 29 07:29:34 crc kubenswrapper[4731]: I1129 07:29:34.778767 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="7653d906-63f2-4fce-85ad-84a98160485f" 
containerName="nova-scheduler-scheduler" containerID="cri-o://74262672446b74fb0676151e3db6bee7df4c64535bef56848023ff8e7b057711" gracePeriod=30 Nov 29 07:29:34 crc kubenswrapper[4731]: I1129 07:29:34.838633 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 29 07:29:34 crc kubenswrapper[4731]: I1129 07:29:34.839235 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ae93ab78-49fb-45cc-b10e-901326d1b1aa" containerName="nova-metadata-log" containerID="cri-o://b3aab4aba0dfb928ea4971b59a6ee1855ee80620ce5d1ae8a6d4c73ee04e659f" gracePeriod=30 Nov 29 07:29:34 crc kubenswrapper[4731]: I1129 07:29:34.839626 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ae93ab78-49fb-45cc-b10e-901326d1b1aa" containerName="nova-metadata-metadata" containerID="cri-o://1b2097e4711cb6116fac0eb41fe9d23052f08d826052a70be38e3f8bf42ccfc0" gracePeriod=30 Nov 29 07:29:37 crc kubenswrapper[4731]: E1129 07:29:37.168461 4731 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="74262672446b74fb0676151e3db6bee7df4c64535bef56848023ff8e7b057711" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 29 07:29:37 crc kubenswrapper[4731]: E1129 07:29:37.173214 4731 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="74262672446b74fb0676151e3db6bee7df4c64535bef56848023ff8e7b057711" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 29 07:29:37 crc kubenswrapper[4731]: E1129 07:29:37.175743 4731 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: 
container is stopping, stdout: , stderr: , exit code -1" containerID="74262672446b74fb0676151e3db6bee7df4c64535bef56848023ff8e7b057711" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 29 07:29:37 crc kubenswrapper[4731]: E1129 07:29:37.175830 4731 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="7653d906-63f2-4fce-85ad-84a98160485f" containerName="nova-scheduler-scheduler" Nov 29 07:29:37 crc kubenswrapper[4731]: I1129 07:29:37.580922 4731 generic.go:334] "Generic (PLEG): container finished" podID="ae93ab78-49fb-45cc-b10e-901326d1b1aa" containerID="b3aab4aba0dfb928ea4971b59a6ee1855ee80620ce5d1ae8a6d4c73ee04e659f" exitCode=143 Nov 29 07:29:37 crc kubenswrapper[4731]: I1129 07:29:37.580991 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ae93ab78-49fb-45cc-b10e-901326d1b1aa","Type":"ContainerDied","Data":"b3aab4aba0dfb928ea4971b59a6ee1855ee80620ce5d1ae8a6d4c73ee04e659f"} Nov 29 07:29:37 crc kubenswrapper[4731]: I1129 07:29:37.583706 4731 generic.go:334] "Generic (PLEG): container finished" podID="1239e9f1-dce0-421c-abaf-bdd016b6cc2f" containerID="b4380389b6837953ced538c553ea23b2564e73417815e6c357383dbae8c87f20" exitCode=143 Nov 29 07:29:37 crc kubenswrapper[4731]: I1129 07:29:37.583730 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1239e9f1-dce0-421c-abaf-bdd016b6cc2f","Type":"ContainerDied","Data":"b4380389b6837953ced538c553ea23b2564e73417815e6c357383dbae8c87f20"} Nov 29 07:29:38 crc kubenswrapper[4731]: I1129 07:29:38.012901 4731 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="ae93ab78-49fb-45cc-b10e-901326d1b1aa" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.193:8775/\": read tcp 
10.217.0.2:34654->10.217.0.193:8775: read: connection reset by peer" Nov 29 07:29:38 crc kubenswrapper[4731]: I1129 07:29:38.013881 4731 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="ae93ab78-49fb-45cc-b10e-901326d1b1aa" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.193:8775/\": read tcp 10.217.0.2:34662->10.217.0.193:8775: read: connection reset by peer" Nov 29 07:29:39 crc kubenswrapper[4731]: I1129 07:29:39.632879 4731 generic.go:334] "Generic (PLEG): container finished" podID="ae93ab78-49fb-45cc-b10e-901326d1b1aa" containerID="1b2097e4711cb6116fac0eb41fe9d23052f08d826052a70be38e3f8bf42ccfc0" exitCode=0 Nov 29 07:29:39 crc kubenswrapper[4731]: I1129 07:29:39.632923 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ae93ab78-49fb-45cc-b10e-901326d1b1aa","Type":"ContainerDied","Data":"1b2097e4711cb6116fac0eb41fe9d23052f08d826052a70be38e3f8bf42ccfc0"} Nov 29 07:29:40 crc kubenswrapper[4731]: I1129 07:29:40.294988 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 29 07:29:40 crc kubenswrapper[4731]: I1129 07:29:40.354796 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae93ab78-49fb-45cc-b10e-901326d1b1aa-config-data\") pod \"ae93ab78-49fb-45cc-b10e-901326d1b1aa\" (UID: \"ae93ab78-49fb-45cc-b10e-901326d1b1aa\") " Nov 29 07:29:40 crc kubenswrapper[4731]: I1129 07:29:40.355043 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae93ab78-49fb-45cc-b10e-901326d1b1aa-nova-metadata-tls-certs\") pod \"ae93ab78-49fb-45cc-b10e-901326d1b1aa\" (UID: \"ae93ab78-49fb-45cc-b10e-901326d1b1aa\") " Nov 29 07:29:40 crc kubenswrapper[4731]: I1129 07:29:40.355095 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fj77s\" (UniqueName: \"kubernetes.io/projected/ae93ab78-49fb-45cc-b10e-901326d1b1aa-kube-api-access-fj77s\") pod \"ae93ab78-49fb-45cc-b10e-901326d1b1aa\" (UID: \"ae93ab78-49fb-45cc-b10e-901326d1b1aa\") " Nov 29 07:29:40 crc kubenswrapper[4731]: I1129 07:29:40.355143 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae93ab78-49fb-45cc-b10e-901326d1b1aa-logs\") pod \"ae93ab78-49fb-45cc-b10e-901326d1b1aa\" (UID: \"ae93ab78-49fb-45cc-b10e-901326d1b1aa\") " Nov 29 07:29:40 crc kubenswrapper[4731]: I1129 07:29:40.355274 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae93ab78-49fb-45cc-b10e-901326d1b1aa-combined-ca-bundle\") pod \"ae93ab78-49fb-45cc-b10e-901326d1b1aa\" (UID: \"ae93ab78-49fb-45cc-b10e-901326d1b1aa\") " Nov 29 07:29:40 crc kubenswrapper[4731]: I1129 07:29:40.356442 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/ae93ab78-49fb-45cc-b10e-901326d1b1aa-logs" (OuterVolumeSpecName: "logs") pod "ae93ab78-49fb-45cc-b10e-901326d1b1aa" (UID: "ae93ab78-49fb-45cc-b10e-901326d1b1aa"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:29:40 crc kubenswrapper[4731]: I1129 07:29:40.369517 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae93ab78-49fb-45cc-b10e-901326d1b1aa-kube-api-access-fj77s" (OuterVolumeSpecName: "kube-api-access-fj77s") pod "ae93ab78-49fb-45cc-b10e-901326d1b1aa" (UID: "ae93ab78-49fb-45cc-b10e-901326d1b1aa"). InnerVolumeSpecName "kube-api-access-fj77s". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:29:40 crc kubenswrapper[4731]: I1129 07:29:40.403481 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae93ab78-49fb-45cc-b10e-901326d1b1aa-config-data" (OuterVolumeSpecName: "config-data") pod "ae93ab78-49fb-45cc-b10e-901326d1b1aa" (UID: "ae93ab78-49fb-45cc-b10e-901326d1b1aa"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:29:40 crc kubenswrapper[4731]: I1129 07:29:40.420737 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae93ab78-49fb-45cc-b10e-901326d1b1aa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ae93ab78-49fb-45cc-b10e-901326d1b1aa" (UID: "ae93ab78-49fb-45cc-b10e-901326d1b1aa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:29:40 crc kubenswrapper[4731]: I1129 07:29:40.445511 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae93ab78-49fb-45cc-b10e-901326d1b1aa-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "ae93ab78-49fb-45cc-b10e-901326d1b1aa" (UID: "ae93ab78-49fb-45cc-b10e-901326d1b1aa"). 
InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:29:40 crc kubenswrapper[4731]: I1129 07:29:40.458254 4731 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae93ab78-49fb-45cc-b10e-901326d1b1aa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:29:40 crc kubenswrapper[4731]: I1129 07:29:40.458387 4731 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae93ab78-49fb-45cc-b10e-901326d1b1aa-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:29:40 crc kubenswrapper[4731]: I1129 07:29:40.458406 4731 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae93ab78-49fb-45cc-b10e-901326d1b1aa-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 29 07:29:40 crc kubenswrapper[4731]: I1129 07:29:40.458450 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fj77s\" (UniqueName: \"kubernetes.io/projected/ae93ab78-49fb-45cc-b10e-901326d1b1aa-kube-api-access-fj77s\") on node \"crc\" DevicePath \"\"" Nov 29 07:29:40 crc kubenswrapper[4731]: I1129 07:29:40.458464 4731 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae93ab78-49fb-45cc-b10e-901326d1b1aa-logs\") on node \"crc\" DevicePath \"\"" Nov 29 07:29:40 crc kubenswrapper[4731]: I1129 07:29:40.654692 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ae93ab78-49fb-45cc-b10e-901326d1b1aa","Type":"ContainerDied","Data":"a3fdea9a9fb3f7d5baeb65e7f90375d078ee0d0d626feb2a8a62624db1b70b00"} Nov 29 07:29:40 crc kubenswrapper[4731]: I1129 07:29:40.654770 4731 scope.go:117] "RemoveContainer" containerID="1b2097e4711cb6116fac0eb41fe9d23052f08d826052a70be38e3f8bf42ccfc0" Nov 29 07:29:40 crc kubenswrapper[4731]: I1129 07:29:40.654981 4731 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 29 07:29:40 crc kubenswrapper[4731]: I1129 07:29:40.669949 4731 generic.go:334] "Generic (PLEG): container finished" podID="7653d906-63f2-4fce-85ad-84a98160485f" containerID="74262672446b74fb0676151e3db6bee7df4c64535bef56848023ff8e7b057711" exitCode=0 Nov 29 07:29:40 crc kubenswrapper[4731]: I1129 07:29:40.670029 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7653d906-63f2-4fce-85ad-84a98160485f","Type":"ContainerDied","Data":"74262672446b74fb0676151e3db6bee7df4c64535bef56848023ff8e7b057711"} Nov 29 07:29:40 crc kubenswrapper[4731]: I1129 07:29:40.714354 4731 scope.go:117] "RemoveContainer" containerID="b3aab4aba0dfb928ea4971b59a6ee1855ee80620ce5d1ae8a6d4c73ee04e659f" Nov 29 07:29:40 crc kubenswrapper[4731]: I1129 07:29:40.716007 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 29 07:29:40 crc kubenswrapper[4731]: I1129 07:29:40.722234 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 29 07:29:40 crc kubenswrapper[4731]: I1129 07:29:40.748906 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 29 07:29:40 crc kubenswrapper[4731]: E1129 07:29:40.749323 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f635248-2bce-4e96-8d9f-3afd345c442b" containerName="nova-manage" Nov 29 07:29:40 crc kubenswrapper[4731]: I1129 07:29:40.749341 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f635248-2bce-4e96-8d9f-3afd345c442b" containerName="nova-manage" Nov 29 07:29:40 crc kubenswrapper[4731]: E1129 07:29:40.749361 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d118e0e2-213b-451a-9de7-0e3af1d1bc1a" containerName="dnsmasq-dns" Nov 29 07:29:40 crc kubenswrapper[4731]: I1129 07:29:40.749369 4731 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="d118e0e2-213b-451a-9de7-0e3af1d1bc1a" containerName="dnsmasq-dns" Nov 29 07:29:40 crc kubenswrapper[4731]: E1129 07:29:40.749385 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d118e0e2-213b-451a-9de7-0e3af1d1bc1a" containerName="init" Nov 29 07:29:40 crc kubenswrapper[4731]: I1129 07:29:40.749394 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="d118e0e2-213b-451a-9de7-0e3af1d1bc1a" containerName="init" Nov 29 07:29:40 crc kubenswrapper[4731]: E1129 07:29:40.749412 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae93ab78-49fb-45cc-b10e-901326d1b1aa" containerName="nova-metadata-log" Nov 29 07:29:40 crc kubenswrapper[4731]: I1129 07:29:40.749418 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae93ab78-49fb-45cc-b10e-901326d1b1aa" containerName="nova-metadata-log" Nov 29 07:29:40 crc kubenswrapper[4731]: E1129 07:29:40.749430 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae93ab78-49fb-45cc-b10e-901326d1b1aa" containerName="nova-metadata-metadata" Nov 29 07:29:40 crc kubenswrapper[4731]: I1129 07:29:40.749437 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae93ab78-49fb-45cc-b10e-901326d1b1aa" containerName="nova-metadata-metadata" Nov 29 07:29:40 crc kubenswrapper[4731]: I1129 07:29:40.749694 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f635248-2bce-4e96-8d9f-3afd345c442b" containerName="nova-manage" Nov 29 07:29:40 crc kubenswrapper[4731]: I1129 07:29:40.749730 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="d118e0e2-213b-451a-9de7-0e3af1d1bc1a" containerName="dnsmasq-dns" Nov 29 07:29:40 crc kubenswrapper[4731]: I1129 07:29:40.749749 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae93ab78-49fb-45cc-b10e-901326d1b1aa" containerName="nova-metadata-metadata" Nov 29 07:29:40 crc kubenswrapper[4731]: I1129 07:29:40.749767 4731 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="ae93ab78-49fb-45cc-b10e-901326d1b1aa" containerName="nova-metadata-log" Nov 29 07:29:40 crc kubenswrapper[4731]: I1129 07:29:40.759968 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 29 07:29:40 crc kubenswrapper[4731]: I1129 07:29:40.764193 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 29 07:29:40 crc kubenswrapper[4731]: I1129 07:29:40.766322 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 29 07:29:40 crc kubenswrapper[4731]: I1129 07:29:40.769416 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a4aa61c-0dc5-4284-aeb0-5dcc2da1b843-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"2a4aa61c-0dc5-4284-aeb0-5dcc2da1b843\") " pod="openstack/nova-metadata-0" Nov 29 07:29:40 crc kubenswrapper[4731]: I1129 07:29:40.769518 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a4aa61c-0dc5-4284-aeb0-5dcc2da1b843-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2a4aa61c-0dc5-4284-aeb0-5dcc2da1b843\") " pod="openstack/nova-metadata-0" Nov 29 07:29:40 crc kubenswrapper[4731]: I1129 07:29:40.769621 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pw8bg\" (UniqueName: \"kubernetes.io/projected/2a4aa61c-0dc5-4284-aeb0-5dcc2da1b843-kube-api-access-pw8bg\") pod \"nova-metadata-0\" (UID: \"2a4aa61c-0dc5-4284-aeb0-5dcc2da1b843\") " pod="openstack/nova-metadata-0" Nov 29 07:29:40 crc kubenswrapper[4731]: I1129 07:29:40.770185 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/2a4aa61c-0dc5-4284-aeb0-5dcc2da1b843-config-data\") pod \"nova-metadata-0\" (UID: \"2a4aa61c-0dc5-4284-aeb0-5dcc2da1b843\") " pod="openstack/nova-metadata-0" Nov 29 07:29:40 crc kubenswrapper[4731]: I1129 07:29:40.770343 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a4aa61c-0dc5-4284-aeb0-5dcc2da1b843-logs\") pod \"nova-metadata-0\" (UID: \"2a4aa61c-0dc5-4284-aeb0-5dcc2da1b843\") " pod="openstack/nova-metadata-0" Nov 29 07:29:40 crc kubenswrapper[4731]: I1129 07:29:40.780266 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 29 07:29:40 crc kubenswrapper[4731]: I1129 07:29:40.873521 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a4aa61c-0dc5-4284-aeb0-5dcc2da1b843-config-data\") pod \"nova-metadata-0\" (UID: \"2a4aa61c-0dc5-4284-aeb0-5dcc2da1b843\") " pod="openstack/nova-metadata-0" Nov 29 07:29:40 crc kubenswrapper[4731]: I1129 07:29:40.873706 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a4aa61c-0dc5-4284-aeb0-5dcc2da1b843-logs\") pod \"nova-metadata-0\" (UID: \"2a4aa61c-0dc5-4284-aeb0-5dcc2da1b843\") " pod="openstack/nova-metadata-0" Nov 29 07:29:40 crc kubenswrapper[4731]: I1129 07:29:40.874206 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a4aa61c-0dc5-4284-aeb0-5dcc2da1b843-logs\") pod \"nova-metadata-0\" (UID: \"2a4aa61c-0dc5-4284-aeb0-5dcc2da1b843\") " pod="openstack/nova-metadata-0" Nov 29 07:29:40 crc kubenswrapper[4731]: I1129 07:29:40.874695 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a4aa61c-0dc5-4284-aeb0-5dcc2da1b843-nova-metadata-tls-certs\") 
pod \"nova-metadata-0\" (UID: \"2a4aa61c-0dc5-4284-aeb0-5dcc2da1b843\") " pod="openstack/nova-metadata-0" Nov 29 07:29:40 crc kubenswrapper[4731]: I1129 07:29:40.875187 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a4aa61c-0dc5-4284-aeb0-5dcc2da1b843-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2a4aa61c-0dc5-4284-aeb0-5dcc2da1b843\") " pod="openstack/nova-metadata-0" Nov 29 07:29:40 crc kubenswrapper[4731]: I1129 07:29:40.875244 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pw8bg\" (UniqueName: \"kubernetes.io/projected/2a4aa61c-0dc5-4284-aeb0-5dcc2da1b843-kube-api-access-pw8bg\") pod \"nova-metadata-0\" (UID: \"2a4aa61c-0dc5-4284-aeb0-5dcc2da1b843\") " pod="openstack/nova-metadata-0" Nov 29 07:29:40 crc kubenswrapper[4731]: I1129 07:29:40.876472 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a4aa61c-0dc5-4284-aeb0-5dcc2da1b843-config-data\") pod \"nova-metadata-0\" (UID: \"2a4aa61c-0dc5-4284-aeb0-5dcc2da1b843\") " pod="openstack/nova-metadata-0" Nov 29 07:29:40 crc kubenswrapper[4731]: I1129 07:29:40.878736 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a4aa61c-0dc5-4284-aeb0-5dcc2da1b843-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"2a4aa61c-0dc5-4284-aeb0-5dcc2da1b843\") " pod="openstack/nova-metadata-0" Nov 29 07:29:40 crc kubenswrapper[4731]: I1129 07:29:40.880915 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a4aa61c-0dc5-4284-aeb0-5dcc2da1b843-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2a4aa61c-0dc5-4284-aeb0-5dcc2da1b843\") " pod="openstack/nova-metadata-0" Nov 29 07:29:40 crc kubenswrapper[4731]: I1129 07:29:40.895987 4731 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pw8bg\" (UniqueName: \"kubernetes.io/projected/2a4aa61c-0dc5-4284-aeb0-5dcc2da1b843-kube-api-access-pw8bg\") pod \"nova-metadata-0\" (UID: \"2a4aa61c-0dc5-4284-aeb0-5dcc2da1b843\") " pod="openstack/nova-metadata-0" Nov 29 07:29:41 crc kubenswrapper[4731]: I1129 07:29:41.822462 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 29 07:29:41 crc kubenswrapper[4731]: I1129 07:29:41.829710 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae93ab78-49fb-45cc-b10e-901326d1b1aa" path="/var/lib/kubelet/pods/ae93ab78-49fb-45cc-b10e-901326d1b1aa/volumes" Nov 29 07:29:42 crc kubenswrapper[4731]: E1129 07:29:42.162829 4731 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 74262672446b74fb0676151e3db6bee7df4c64535bef56848023ff8e7b057711 is running failed: container process not found" containerID="74262672446b74fb0676151e3db6bee7df4c64535bef56848023ff8e7b057711" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 29 07:29:42 crc kubenswrapper[4731]: E1129 07:29:42.163900 4731 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 74262672446b74fb0676151e3db6bee7df4c64535bef56848023ff8e7b057711 is running failed: container process not found" containerID="74262672446b74fb0676151e3db6bee7df4c64535bef56848023ff8e7b057711" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 29 07:29:42 crc kubenswrapper[4731]: E1129 07:29:42.164288 4731 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 74262672446b74fb0676151e3db6bee7df4c64535bef56848023ff8e7b057711 is running failed: container process not found" 
containerID="74262672446b74fb0676151e3db6bee7df4c64535bef56848023ff8e7b057711" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 29 07:29:42 crc kubenswrapper[4731]: E1129 07:29:42.164336 4731 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 74262672446b74fb0676151e3db6bee7df4c64535bef56848023ff8e7b057711 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="7653d906-63f2-4fce-85ad-84a98160485f" containerName="nova-scheduler-scheduler" Nov 29 07:29:42 crc kubenswrapper[4731]: I1129 07:29:42.195795 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 29 07:29:42 crc kubenswrapper[4731]: I1129 07:29:42.368240 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 29 07:29:42 crc kubenswrapper[4731]: I1129 07:29:42.392171 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7653d906-63f2-4fce-85ad-84a98160485f-config-data\") pod \"7653d906-63f2-4fce-85ad-84a98160485f\" (UID: \"7653d906-63f2-4fce-85ad-84a98160485f\") " Nov 29 07:29:42 crc kubenswrapper[4731]: I1129 07:29:42.392305 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7653d906-63f2-4fce-85ad-84a98160485f-combined-ca-bundle\") pod \"7653d906-63f2-4fce-85ad-84a98160485f\" (UID: \"7653d906-63f2-4fce-85ad-84a98160485f\") " Nov 29 07:29:42 crc kubenswrapper[4731]: I1129 07:29:42.392366 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jmwh9\" (UniqueName: \"kubernetes.io/projected/7653d906-63f2-4fce-85ad-84a98160485f-kube-api-access-jmwh9\") pod \"7653d906-63f2-4fce-85ad-84a98160485f\" (UID: \"7653d906-63f2-4fce-85ad-84a98160485f\") " Nov 29 07:29:42 
crc kubenswrapper[4731]: I1129 07:29:42.401712 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7653d906-63f2-4fce-85ad-84a98160485f-kube-api-access-jmwh9" (OuterVolumeSpecName: "kube-api-access-jmwh9") pod "7653d906-63f2-4fce-85ad-84a98160485f" (UID: "7653d906-63f2-4fce-85ad-84a98160485f"). InnerVolumeSpecName "kube-api-access-jmwh9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:29:42 crc kubenswrapper[4731]: I1129 07:29:42.427905 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7653d906-63f2-4fce-85ad-84a98160485f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7653d906-63f2-4fce-85ad-84a98160485f" (UID: "7653d906-63f2-4fce-85ad-84a98160485f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:29:42 crc kubenswrapper[4731]: I1129 07:29:42.434749 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7653d906-63f2-4fce-85ad-84a98160485f-config-data" (OuterVolumeSpecName: "config-data") pod "7653d906-63f2-4fce-85ad-84a98160485f" (UID: "7653d906-63f2-4fce-85ad-84a98160485f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:29:42 crc kubenswrapper[4731]: I1129 07:29:42.494673 4731 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7653d906-63f2-4fce-85ad-84a98160485f-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:29:42 crc kubenswrapper[4731]: I1129 07:29:42.495199 4731 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7653d906-63f2-4fce-85ad-84a98160485f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:29:42 crc kubenswrapper[4731]: I1129 07:29:42.495213 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jmwh9\" (UniqueName: \"kubernetes.io/projected/7653d906-63f2-4fce-85ad-84a98160485f-kube-api-access-jmwh9\") on node \"crc\" DevicePath \"\"" Nov 29 07:29:42 crc kubenswrapper[4731]: I1129 07:29:42.811796 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2a4aa61c-0dc5-4284-aeb0-5dcc2da1b843","Type":"ContainerStarted","Data":"2ebc39df647a9523099025724b3f9079c030aad04266e15d17135fb93bb67817"} Nov 29 07:29:42 crc kubenswrapper[4731]: I1129 07:29:42.816370 4731 generic.go:334] "Generic (PLEG): container finished" podID="1239e9f1-dce0-421c-abaf-bdd016b6cc2f" containerID="ccd46802c3c7a393ca2e89d76259d34340d17d9ca85f7873e86d8a062728cbf5" exitCode=0 Nov 29 07:29:42 crc kubenswrapper[4731]: I1129 07:29:42.816485 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1239e9f1-dce0-421c-abaf-bdd016b6cc2f","Type":"ContainerDied","Data":"ccd46802c3c7a393ca2e89d76259d34340d17d9ca85f7873e86d8a062728cbf5"} Nov 29 07:29:42 crc kubenswrapper[4731]: I1129 07:29:42.821434 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"7653d906-63f2-4fce-85ad-84a98160485f","Type":"ContainerDied","Data":"00f607b04e404e9bca7ef1cab503c7438955141f1513885662c5d064498449de"} Nov 29 07:29:42 crc kubenswrapper[4731]: I1129 07:29:42.821482 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 29 07:29:42 crc kubenswrapper[4731]: I1129 07:29:42.821511 4731 scope.go:117] "RemoveContainer" containerID="74262672446b74fb0676151e3db6bee7df4c64535bef56848023ff8e7b057711" Nov 29 07:29:42 crc kubenswrapper[4731]: I1129 07:29:42.906818 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 29 07:29:42 crc kubenswrapper[4731]: I1129 07:29:42.979682 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 29 07:29:42 crc kubenswrapper[4731]: I1129 07:29:42.989462 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 29 07:29:42 crc kubenswrapper[4731]: E1129 07:29:42.989921 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7653d906-63f2-4fce-85ad-84a98160485f" containerName="nova-scheduler-scheduler" Nov 29 07:29:42 crc kubenswrapper[4731]: I1129 07:29:42.989938 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="7653d906-63f2-4fce-85ad-84a98160485f" containerName="nova-scheduler-scheduler" Nov 29 07:29:42 crc kubenswrapper[4731]: I1129 07:29:42.990112 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="7653d906-63f2-4fce-85ad-84a98160485f" containerName="nova-scheduler-scheduler" Nov 29 07:29:42 crc kubenswrapper[4731]: I1129 07:29:42.990830 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 29 07:29:42 crc kubenswrapper[4731]: I1129 07:29:42.993332 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 29 07:29:43 crc kubenswrapper[4731]: I1129 07:29:43.016132 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 29 07:29:43 crc kubenswrapper[4731]: I1129 07:29:43.110951 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a94d1b1c-5dfb-429f-ae00-3082948d94d7-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a94d1b1c-5dfb-429f-ae00-3082948d94d7\") " pod="openstack/nova-scheduler-0" Nov 29 07:29:43 crc kubenswrapper[4731]: I1129 07:29:43.111584 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a94d1b1c-5dfb-429f-ae00-3082948d94d7-config-data\") pod \"nova-scheduler-0\" (UID: \"a94d1b1c-5dfb-429f-ae00-3082948d94d7\") " pod="openstack/nova-scheduler-0" Nov 29 07:29:43 crc kubenswrapper[4731]: I1129 07:29:43.111745 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpnfj\" (UniqueName: \"kubernetes.io/projected/a94d1b1c-5dfb-429f-ae00-3082948d94d7-kube-api-access-jpnfj\") pod \"nova-scheduler-0\" (UID: \"a94d1b1c-5dfb-429f-ae00-3082948d94d7\") " pod="openstack/nova-scheduler-0" Nov 29 07:29:43 crc kubenswrapper[4731]: I1129 07:29:43.212738 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a94d1b1c-5dfb-429f-ae00-3082948d94d7-config-data\") pod \"nova-scheduler-0\" (UID: \"a94d1b1c-5dfb-429f-ae00-3082948d94d7\") " pod="openstack/nova-scheduler-0" Nov 29 07:29:43 crc kubenswrapper[4731]: I1129 07:29:43.212825 4731 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-jpnfj\" (UniqueName: \"kubernetes.io/projected/a94d1b1c-5dfb-429f-ae00-3082948d94d7-kube-api-access-jpnfj\") pod \"nova-scheduler-0\" (UID: \"a94d1b1c-5dfb-429f-ae00-3082948d94d7\") " pod="openstack/nova-scheduler-0" Nov 29 07:29:43 crc kubenswrapper[4731]: I1129 07:29:43.212937 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a94d1b1c-5dfb-429f-ae00-3082948d94d7-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a94d1b1c-5dfb-429f-ae00-3082948d94d7\") " pod="openstack/nova-scheduler-0" Nov 29 07:29:43 crc kubenswrapper[4731]: I1129 07:29:43.222991 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a94d1b1c-5dfb-429f-ae00-3082948d94d7-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a94d1b1c-5dfb-429f-ae00-3082948d94d7\") " pod="openstack/nova-scheduler-0" Nov 29 07:29:43 crc kubenswrapper[4731]: I1129 07:29:43.238591 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpnfj\" (UniqueName: \"kubernetes.io/projected/a94d1b1c-5dfb-429f-ae00-3082948d94d7-kube-api-access-jpnfj\") pod \"nova-scheduler-0\" (UID: \"a94d1b1c-5dfb-429f-ae00-3082948d94d7\") " pod="openstack/nova-scheduler-0" Nov 29 07:29:43 crc kubenswrapper[4731]: I1129 07:29:43.238590 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a94d1b1c-5dfb-429f-ae00-3082948d94d7-config-data\") pod \"nova-scheduler-0\" (UID: \"a94d1b1c-5dfb-429f-ae00-3082948d94d7\") " pod="openstack/nova-scheduler-0" Nov 29 07:29:43 crc kubenswrapper[4731]: I1129 07:29:43.319489 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 29 07:29:43 crc kubenswrapper[4731]: I1129 07:29:43.497976 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 29 07:29:43 crc kubenswrapper[4731]: I1129 07:29:43.621824 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nr5h8\" (UniqueName: \"kubernetes.io/projected/1239e9f1-dce0-421c-abaf-bdd016b6cc2f-kube-api-access-nr5h8\") pod \"1239e9f1-dce0-421c-abaf-bdd016b6cc2f\" (UID: \"1239e9f1-dce0-421c-abaf-bdd016b6cc2f\") " Nov 29 07:29:43 crc kubenswrapper[4731]: I1129 07:29:43.621975 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1239e9f1-dce0-421c-abaf-bdd016b6cc2f-internal-tls-certs\") pod \"1239e9f1-dce0-421c-abaf-bdd016b6cc2f\" (UID: \"1239e9f1-dce0-421c-abaf-bdd016b6cc2f\") " Nov 29 07:29:43 crc kubenswrapper[4731]: I1129 07:29:43.622018 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1239e9f1-dce0-421c-abaf-bdd016b6cc2f-public-tls-certs\") pod \"1239e9f1-dce0-421c-abaf-bdd016b6cc2f\" (UID: \"1239e9f1-dce0-421c-abaf-bdd016b6cc2f\") " Nov 29 07:29:43 crc kubenswrapper[4731]: I1129 07:29:43.622067 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1239e9f1-dce0-421c-abaf-bdd016b6cc2f-logs\") pod \"1239e9f1-dce0-421c-abaf-bdd016b6cc2f\" (UID: \"1239e9f1-dce0-421c-abaf-bdd016b6cc2f\") " Nov 29 07:29:43 crc kubenswrapper[4731]: I1129 07:29:43.622098 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1239e9f1-dce0-421c-abaf-bdd016b6cc2f-config-data\") pod \"1239e9f1-dce0-421c-abaf-bdd016b6cc2f\" (UID: \"1239e9f1-dce0-421c-abaf-bdd016b6cc2f\") " Nov 29 
07:29:43 crc kubenswrapper[4731]: I1129 07:29:43.622148 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1239e9f1-dce0-421c-abaf-bdd016b6cc2f-combined-ca-bundle\") pod \"1239e9f1-dce0-421c-abaf-bdd016b6cc2f\" (UID: \"1239e9f1-dce0-421c-abaf-bdd016b6cc2f\") " Nov 29 07:29:43 crc kubenswrapper[4731]: I1129 07:29:43.623008 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1239e9f1-dce0-421c-abaf-bdd016b6cc2f-logs" (OuterVolumeSpecName: "logs") pod "1239e9f1-dce0-421c-abaf-bdd016b6cc2f" (UID: "1239e9f1-dce0-421c-abaf-bdd016b6cc2f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:29:43 crc kubenswrapper[4731]: I1129 07:29:43.638050 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1239e9f1-dce0-421c-abaf-bdd016b6cc2f-kube-api-access-nr5h8" (OuterVolumeSpecName: "kube-api-access-nr5h8") pod "1239e9f1-dce0-421c-abaf-bdd016b6cc2f" (UID: "1239e9f1-dce0-421c-abaf-bdd016b6cc2f"). InnerVolumeSpecName "kube-api-access-nr5h8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:29:43 crc kubenswrapper[4731]: I1129 07:29:43.668245 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1239e9f1-dce0-421c-abaf-bdd016b6cc2f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1239e9f1-dce0-421c-abaf-bdd016b6cc2f" (UID: "1239e9f1-dce0-421c-abaf-bdd016b6cc2f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:29:43 crc kubenswrapper[4731]: I1129 07:29:43.671300 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1239e9f1-dce0-421c-abaf-bdd016b6cc2f-config-data" (OuterVolumeSpecName: "config-data") pod "1239e9f1-dce0-421c-abaf-bdd016b6cc2f" (UID: "1239e9f1-dce0-421c-abaf-bdd016b6cc2f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:29:43 crc kubenswrapper[4731]: I1129 07:29:43.682281 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1239e9f1-dce0-421c-abaf-bdd016b6cc2f-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "1239e9f1-dce0-421c-abaf-bdd016b6cc2f" (UID: "1239e9f1-dce0-421c-abaf-bdd016b6cc2f"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:29:43 crc kubenswrapper[4731]: I1129 07:29:43.692719 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1239e9f1-dce0-421c-abaf-bdd016b6cc2f-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "1239e9f1-dce0-421c-abaf-bdd016b6cc2f" (UID: "1239e9f1-dce0-421c-abaf-bdd016b6cc2f"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:29:43 crc kubenswrapper[4731]: I1129 07:29:43.725923 4731 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1239e9f1-dce0-421c-abaf-bdd016b6cc2f-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 29 07:29:43 crc kubenswrapper[4731]: I1129 07:29:43.726769 4731 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1239e9f1-dce0-421c-abaf-bdd016b6cc2f-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 29 07:29:43 crc kubenswrapper[4731]: I1129 07:29:43.726788 4731 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1239e9f1-dce0-421c-abaf-bdd016b6cc2f-logs\") on node \"crc\" DevicePath \"\"" Nov 29 07:29:43 crc kubenswrapper[4731]: I1129 07:29:43.726804 4731 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1239e9f1-dce0-421c-abaf-bdd016b6cc2f-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:29:43 crc kubenswrapper[4731]: I1129 07:29:43.726817 4731 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1239e9f1-dce0-421c-abaf-bdd016b6cc2f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:29:43 crc kubenswrapper[4731]: I1129 07:29:43.726832 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nr5h8\" (UniqueName: \"kubernetes.io/projected/1239e9f1-dce0-421c-abaf-bdd016b6cc2f-kube-api-access-nr5h8\") on node \"crc\" DevicePath \"\"" Nov 29 07:29:43 crc kubenswrapper[4731]: I1129 07:29:43.821767 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7653d906-63f2-4fce-85ad-84a98160485f" path="/var/lib/kubelet/pods/7653d906-63f2-4fce-85ad-84a98160485f/volumes" Nov 29 07:29:43 crc kubenswrapper[4731]: I1129 07:29:43.839771 4731 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2a4aa61c-0dc5-4284-aeb0-5dcc2da1b843","Type":"ContainerStarted","Data":"6b99b06521810442e1dcee6b0dfaed0a7c3d74a633cd6e14abd5ed8aff868579"} Nov 29 07:29:43 crc kubenswrapper[4731]: I1129 07:29:43.839879 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2a4aa61c-0dc5-4284-aeb0-5dcc2da1b843","Type":"ContainerStarted","Data":"a50a439ab7589792a9e11daa12db64727f551dbd8ee7f851a8b390a82a94606a"} Nov 29 07:29:43 crc kubenswrapper[4731]: I1129 07:29:43.847814 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1239e9f1-dce0-421c-abaf-bdd016b6cc2f","Type":"ContainerDied","Data":"8e01a3c9c6b6fcc5c7aad5546d30970a382cf45b45327c10e15c224d258bb909"} Nov 29 07:29:43 crc kubenswrapper[4731]: I1129 07:29:43.847883 4731 scope.go:117] "RemoveContainer" containerID="ccd46802c3c7a393ca2e89d76259d34340d17d9ca85f7873e86d8a062728cbf5" Nov 29 07:29:43 crc kubenswrapper[4731]: I1129 07:29:43.847927 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 29 07:29:43 crc kubenswrapper[4731]: I1129 07:29:43.869448 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.869427986 podStartE2EDuration="3.869427986s" podCreationTimestamp="2025-11-29 07:29:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:29:43.867083339 +0000 UTC m=+1422.757444442" watchObservedRunningTime="2025-11-29 07:29:43.869427986 +0000 UTC m=+1422.759789089" Nov 29 07:29:43 crc kubenswrapper[4731]: I1129 07:29:43.912275 4731 scope.go:117] "RemoveContainer" containerID="b4380389b6837953ced538c553ea23b2564e73417815e6c357383dbae8c87f20" Nov 29 07:29:43 crc kubenswrapper[4731]: W1129 07:29:43.927842 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda94d1b1c_5dfb_429f_ae00_3082948d94d7.slice/crio-9b5660515a13c1565204baef4b9a5c361e743b33638c61c198eadc5c4ceba480 WatchSource:0}: Error finding container 9b5660515a13c1565204baef4b9a5c361e743b33638c61c198eadc5c4ceba480: Status 404 returned error can't find the container with id 9b5660515a13c1565204baef4b9a5c361e743b33638c61c198eadc5c4ceba480 Nov 29 07:29:43 crc kubenswrapper[4731]: I1129 07:29:43.942739 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 29 07:29:43 crc kubenswrapper[4731]: I1129 07:29:43.962417 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 29 07:29:43 crc kubenswrapper[4731]: I1129 07:29:43.987087 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 29 07:29:44 crc kubenswrapper[4731]: I1129 07:29:44.011130 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 29 07:29:44 crc kubenswrapper[4731]: E1129 07:29:44.011860 4731 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1239e9f1-dce0-421c-abaf-bdd016b6cc2f" containerName="nova-api-log" Nov 29 07:29:44 crc kubenswrapper[4731]: I1129 07:29:44.011896 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="1239e9f1-dce0-421c-abaf-bdd016b6cc2f" containerName="nova-api-log" Nov 29 07:29:44 crc kubenswrapper[4731]: E1129 07:29:44.011907 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1239e9f1-dce0-421c-abaf-bdd016b6cc2f" containerName="nova-api-api" Nov 29 07:29:44 crc kubenswrapper[4731]: I1129 07:29:44.011918 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="1239e9f1-dce0-421c-abaf-bdd016b6cc2f" containerName="nova-api-api" Nov 29 07:29:44 crc kubenswrapper[4731]: I1129 07:29:44.012202 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="1239e9f1-dce0-421c-abaf-bdd016b6cc2f" containerName="nova-api-log" Nov 29 07:29:44 crc kubenswrapper[4731]: I1129 07:29:44.012223 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="1239e9f1-dce0-421c-abaf-bdd016b6cc2f" containerName="nova-api-api" Nov 29 07:29:44 crc kubenswrapper[4731]: I1129 07:29:44.013915 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 29 07:29:44 crc kubenswrapper[4731]: I1129 07:29:44.017665 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 29 07:29:44 crc kubenswrapper[4731]: I1129 07:29:44.018153 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Nov 29 07:29:44 crc kubenswrapper[4731]: I1129 07:29:44.018255 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Nov 29 07:29:44 crc kubenswrapper[4731]: I1129 07:29:44.027404 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 29 07:29:44 crc kubenswrapper[4731]: I1129 07:29:44.136060 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/066ba538-6f57-4399-bf84-d4f2aa5c605b-logs\") pod \"nova-api-0\" (UID: \"066ba538-6f57-4399-bf84-d4f2aa5c605b\") " pod="openstack/nova-api-0" Nov 29 07:29:44 crc kubenswrapper[4731]: I1129 07:29:44.136317 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/066ba538-6f57-4399-bf84-d4f2aa5c605b-config-data\") pod \"nova-api-0\" (UID: \"066ba538-6f57-4399-bf84-d4f2aa5c605b\") " pod="openstack/nova-api-0" Nov 29 07:29:44 crc kubenswrapper[4731]: I1129 07:29:44.136370 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/066ba538-6f57-4399-bf84-d4f2aa5c605b-internal-tls-certs\") pod \"nova-api-0\" (UID: \"066ba538-6f57-4399-bf84-d4f2aa5c605b\") " pod="openstack/nova-api-0" Nov 29 07:29:44 crc kubenswrapper[4731]: I1129 07:29:44.136684 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhd4c\" (UniqueName: 
\"kubernetes.io/projected/066ba538-6f57-4399-bf84-d4f2aa5c605b-kube-api-access-hhd4c\") pod \"nova-api-0\" (UID: \"066ba538-6f57-4399-bf84-d4f2aa5c605b\") " pod="openstack/nova-api-0" Nov 29 07:29:44 crc kubenswrapper[4731]: I1129 07:29:44.137017 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/066ba538-6f57-4399-bf84-d4f2aa5c605b-public-tls-certs\") pod \"nova-api-0\" (UID: \"066ba538-6f57-4399-bf84-d4f2aa5c605b\") " pod="openstack/nova-api-0" Nov 29 07:29:44 crc kubenswrapper[4731]: I1129 07:29:44.137071 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/066ba538-6f57-4399-bf84-d4f2aa5c605b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"066ba538-6f57-4399-bf84-d4f2aa5c605b\") " pod="openstack/nova-api-0" Nov 29 07:29:44 crc kubenswrapper[4731]: I1129 07:29:44.239411 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/066ba538-6f57-4399-bf84-d4f2aa5c605b-config-data\") pod \"nova-api-0\" (UID: \"066ba538-6f57-4399-bf84-d4f2aa5c605b\") " pod="openstack/nova-api-0" Nov 29 07:29:44 crc kubenswrapper[4731]: I1129 07:29:44.239813 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/066ba538-6f57-4399-bf84-d4f2aa5c605b-internal-tls-certs\") pod \"nova-api-0\" (UID: \"066ba538-6f57-4399-bf84-d4f2aa5c605b\") " pod="openstack/nova-api-0" Nov 29 07:29:44 crc kubenswrapper[4731]: I1129 07:29:44.239882 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhd4c\" (UniqueName: \"kubernetes.io/projected/066ba538-6f57-4399-bf84-d4f2aa5c605b-kube-api-access-hhd4c\") pod \"nova-api-0\" (UID: \"066ba538-6f57-4399-bf84-d4f2aa5c605b\") " 
pod="openstack/nova-api-0" Nov 29 07:29:44 crc kubenswrapper[4731]: I1129 07:29:44.239956 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/066ba538-6f57-4399-bf84-d4f2aa5c605b-public-tls-certs\") pod \"nova-api-0\" (UID: \"066ba538-6f57-4399-bf84-d4f2aa5c605b\") " pod="openstack/nova-api-0" Nov 29 07:29:44 crc kubenswrapper[4731]: I1129 07:29:44.239991 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/066ba538-6f57-4399-bf84-d4f2aa5c605b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"066ba538-6f57-4399-bf84-d4f2aa5c605b\") " pod="openstack/nova-api-0" Nov 29 07:29:44 crc kubenswrapper[4731]: I1129 07:29:44.240050 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/066ba538-6f57-4399-bf84-d4f2aa5c605b-logs\") pod \"nova-api-0\" (UID: \"066ba538-6f57-4399-bf84-d4f2aa5c605b\") " pod="openstack/nova-api-0" Nov 29 07:29:44 crc kubenswrapper[4731]: I1129 07:29:44.240455 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/066ba538-6f57-4399-bf84-d4f2aa5c605b-logs\") pod \"nova-api-0\" (UID: \"066ba538-6f57-4399-bf84-d4f2aa5c605b\") " pod="openstack/nova-api-0" Nov 29 07:29:44 crc kubenswrapper[4731]: I1129 07:29:44.243938 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/066ba538-6f57-4399-bf84-d4f2aa5c605b-config-data\") pod \"nova-api-0\" (UID: \"066ba538-6f57-4399-bf84-d4f2aa5c605b\") " pod="openstack/nova-api-0" Nov 29 07:29:44 crc kubenswrapper[4731]: I1129 07:29:44.244935 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/066ba538-6f57-4399-bf84-d4f2aa5c605b-internal-tls-certs\") pod 
\"nova-api-0\" (UID: \"066ba538-6f57-4399-bf84-d4f2aa5c605b\") " pod="openstack/nova-api-0" Nov 29 07:29:44 crc kubenswrapper[4731]: I1129 07:29:44.245222 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/066ba538-6f57-4399-bf84-d4f2aa5c605b-public-tls-certs\") pod \"nova-api-0\" (UID: \"066ba538-6f57-4399-bf84-d4f2aa5c605b\") " pod="openstack/nova-api-0" Nov 29 07:29:44 crc kubenswrapper[4731]: I1129 07:29:44.246162 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/066ba538-6f57-4399-bf84-d4f2aa5c605b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"066ba538-6f57-4399-bf84-d4f2aa5c605b\") " pod="openstack/nova-api-0" Nov 29 07:29:44 crc kubenswrapper[4731]: I1129 07:29:44.264825 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhd4c\" (UniqueName: \"kubernetes.io/projected/066ba538-6f57-4399-bf84-d4f2aa5c605b-kube-api-access-hhd4c\") pod \"nova-api-0\" (UID: \"066ba538-6f57-4399-bf84-d4f2aa5c605b\") " pod="openstack/nova-api-0" Nov 29 07:29:44 crc kubenswrapper[4731]: I1129 07:29:44.449141 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 29 07:29:44 crc kubenswrapper[4731]: I1129 07:29:44.866775 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a94d1b1c-5dfb-429f-ae00-3082948d94d7","Type":"ContainerStarted","Data":"859fe4dd39bc8258fc845ba71fd94ba97be28b3ca5734a76f184b72551de62e0"} Nov 29 07:29:44 crc kubenswrapper[4731]: I1129 07:29:44.867064 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a94d1b1c-5dfb-429f-ae00-3082948d94d7","Type":"ContainerStarted","Data":"9b5660515a13c1565204baef4b9a5c361e743b33638c61c198eadc5c4ceba480"} Nov 29 07:29:44 crc kubenswrapper[4731]: I1129 07:29:44.894089 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.894055975 podStartE2EDuration="2.894055975s" podCreationTimestamp="2025-11-29 07:29:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:29:44.886197931 +0000 UTC m=+1423.776559054" watchObservedRunningTime="2025-11-29 07:29:44.894055975 +0000 UTC m=+1423.784417098" Nov 29 07:29:44 crc kubenswrapper[4731]: I1129 07:29:44.944860 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 29 07:29:45 crc kubenswrapper[4731]: I1129 07:29:45.821389 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1239e9f1-dce0-421c-abaf-bdd016b6cc2f" path="/var/lib/kubelet/pods/1239e9f1-dce0-421c-abaf-bdd016b6cc2f/volumes" Nov 29 07:29:45 crc kubenswrapper[4731]: I1129 07:29:45.882472 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"066ba538-6f57-4399-bf84-d4f2aa5c605b","Type":"ContainerStarted","Data":"a74dd0b10d64202f4920fd5e77e1870d7f42bef0d804946794a5b53489e0a647"} Nov 29 07:29:45 crc kubenswrapper[4731]: I1129 07:29:45.882535 4731 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"066ba538-6f57-4399-bf84-d4f2aa5c605b","Type":"ContainerStarted","Data":"23d8d880521063631c2e73971925515fb3bb41f78f79be4d7dc3a35dc9cda1c6"} Nov 29 07:29:45 crc kubenswrapper[4731]: I1129 07:29:45.882549 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"066ba538-6f57-4399-bf84-d4f2aa5c605b","Type":"ContainerStarted","Data":"c50abff90e3c5ce8c39fd17a51d4f11fc1d9b1426cbf94254ca6d90448ec8643"} Nov 29 07:29:45 crc kubenswrapper[4731]: I1129 07:29:45.916768 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.916743019 podStartE2EDuration="2.916743019s" podCreationTimestamp="2025-11-29 07:29:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:29:45.903081178 +0000 UTC m=+1424.793442281" watchObservedRunningTime="2025-11-29 07:29:45.916743019 +0000 UTC m=+1424.807104122" Nov 29 07:29:46 crc kubenswrapper[4731]: I1129 07:29:46.823458 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 29 07:29:46 crc kubenswrapper[4731]: I1129 07:29:46.824444 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 29 07:29:48 crc kubenswrapper[4731]: I1129 07:29:48.320622 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 29 07:29:51 crc kubenswrapper[4731]: I1129 07:29:51.823220 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 29 07:29:51 crc kubenswrapper[4731]: I1129 07:29:51.824759 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 29 07:29:52 crc kubenswrapper[4731]: I1129 07:29:52.841989 4731 prober.go:107] "Probe failed" 
probeType="Startup" pod="openstack/nova-metadata-0" podUID="2a4aa61c-0dc5-4284-aeb0-5dcc2da1b843" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.202:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 29 07:29:52 crc kubenswrapper[4731]: I1129 07:29:52.842186 4731 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="2a4aa61c-0dc5-4284-aeb0-5dcc2da1b843" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.202:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 29 07:29:53 crc kubenswrapper[4731]: I1129 07:29:53.320080 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 29 07:29:53 crc kubenswrapper[4731]: I1129 07:29:53.351189 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 29 07:29:54 crc kubenswrapper[4731]: I1129 07:29:54.007419 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 29 07:29:54 crc kubenswrapper[4731]: I1129 07:29:54.450341 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 29 07:29:54 crc kubenswrapper[4731]: I1129 07:29:54.451307 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 29 07:29:55 crc kubenswrapper[4731]: I1129 07:29:55.465917 4731 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="066ba538-6f57-4399-bf84-d4f2aa5c605b" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.204:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 29 07:29:55 crc kubenswrapper[4731]: I1129 07:29:55.465978 4731 prober.go:107] "Probe failed" probeType="Startup" 
pod="openstack/nova-api-0" podUID="066ba538-6f57-4399-bf84-d4f2aa5c605b" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.204:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 29 07:29:57 crc kubenswrapper[4731]: I1129 07:29:57.905543 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 29 07:30:00 crc kubenswrapper[4731]: I1129 07:30:00.162828 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406690-txp5p"] Nov 29 07:30:00 crc kubenswrapper[4731]: I1129 07:30:00.168670 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406690-txp5p" Nov 29 07:30:00 crc kubenswrapper[4731]: I1129 07:30:00.173925 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 29 07:30:00 crc kubenswrapper[4731]: I1129 07:30:00.173925 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 29 07:30:00 crc kubenswrapper[4731]: I1129 07:30:00.182889 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406690-txp5p"] Nov 29 07:30:00 crc kubenswrapper[4731]: I1129 07:30:00.343993 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9900c053-e3a1-43bb-a13b-5e92ba495ed8-secret-volume\") pod \"collect-profiles-29406690-txp5p\" (UID: \"9900c053-e3a1-43bb-a13b-5e92ba495ed8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406690-txp5p" Nov 29 07:30:00 crc kubenswrapper[4731]: I1129 07:30:00.344075 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-4bzwn\" (UniqueName: \"kubernetes.io/projected/9900c053-e3a1-43bb-a13b-5e92ba495ed8-kube-api-access-4bzwn\") pod \"collect-profiles-29406690-txp5p\" (UID: \"9900c053-e3a1-43bb-a13b-5e92ba495ed8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406690-txp5p" Nov 29 07:30:00 crc kubenswrapper[4731]: I1129 07:30:00.344188 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9900c053-e3a1-43bb-a13b-5e92ba495ed8-config-volume\") pod \"collect-profiles-29406690-txp5p\" (UID: \"9900c053-e3a1-43bb-a13b-5e92ba495ed8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406690-txp5p" Nov 29 07:30:00 crc kubenswrapper[4731]: I1129 07:30:00.446377 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9900c053-e3a1-43bb-a13b-5e92ba495ed8-secret-volume\") pod \"collect-profiles-29406690-txp5p\" (UID: \"9900c053-e3a1-43bb-a13b-5e92ba495ed8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406690-txp5p" Nov 29 07:30:00 crc kubenswrapper[4731]: I1129 07:30:00.446460 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bzwn\" (UniqueName: \"kubernetes.io/projected/9900c053-e3a1-43bb-a13b-5e92ba495ed8-kube-api-access-4bzwn\") pod \"collect-profiles-29406690-txp5p\" (UID: \"9900c053-e3a1-43bb-a13b-5e92ba495ed8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406690-txp5p" Nov 29 07:30:00 crc kubenswrapper[4731]: I1129 07:30:00.446548 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9900c053-e3a1-43bb-a13b-5e92ba495ed8-config-volume\") pod \"collect-profiles-29406690-txp5p\" (UID: \"9900c053-e3a1-43bb-a13b-5e92ba495ed8\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29406690-txp5p" Nov 29 07:30:00 crc kubenswrapper[4731]: I1129 07:30:00.447754 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9900c053-e3a1-43bb-a13b-5e92ba495ed8-config-volume\") pod \"collect-profiles-29406690-txp5p\" (UID: \"9900c053-e3a1-43bb-a13b-5e92ba495ed8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406690-txp5p" Nov 29 07:30:00 crc kubenswrapper[4731]: I1129 07:30:00.464768 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9900c053-e3a1-43bb-a13b-5e92ba495ed8-secret-volume\") pod \"collect-profiles-29406690-txp5p\" (UID: \"9900c053-e3a1-43bb-a13b-5e92ba495ed8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406690-txp5p" Nov 29 07:30:00 crc kubenswrapper[4731]: I1129 07:30:00.468780 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bzwn\" (UniqueName: \"kubernetes.io/projected/9900c053-e3a1-43bb-a13b-5e92ba495ed8-kube-api-access-4bzwn\") pod \"collect-profiles-29406690-txp5p\" (UID: \"9900c053-e3a1-43bb-a13b-5e92ba495ed8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406690-txp5p" Nov 29 07:30:00 crc kubenswrapper[4731]: I1129 07:30:00.508512 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406690-txp5p" Nov 29 07:30:01 crc kubenswrapper[4731]: I1129 07:30:01.030488 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406690-txp5p"] Nov 29 07:30:01 crc kubenswrapper[4731]: I1129 07:30:01.057284 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406690-txp5p" event={"ID":"9900c053-e3a1-43bb-a13b-5e92ba495ed8","Type":"ContainerStarted","Data":"c54715c2c67f86013d021105ca98aa87d05044ba64cdbf88b4a84efb8a5bc3f8"} Nov 29 07:30:01 crc kubenswrapper[4731]: I1129 07:30:01.838179 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 29 07:30:01 crc kubenswrapper[4731]: I1129 07:30:01.838696 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 29 07:30:01 crc kubenswrapper[4731]: I1129 07:30:01.849525 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 29 07:30:01 crc kubenswrapper[4731]: I1129 07:30:01.850928 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 29 07:30:02 crc kubenswrapper[4731]: I1129 07:30:02.071847 4731 generic.go:334] "Generic (PLEG): container finished" podID="9900c053-e3a1-43bb-a13b-5e92ba495ed8" containerID="d14eeb7040af8c0985747625e6f09db2b3ba2d0f9fad9a06a771c9442ded7ffe" exitCode=0 Nov 29 07:30:02 crc kubenswrapper[4731]: I1129 07:30:02.071906 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406690-txp5p" event={"ID":"9900c053-e3a1-43bb-a13b-5e92ba495ed8","Type":"ContainerDied","Data":"d14eeb7040af8c0985747625e6f09db2b3ba2d0f9fad9a06a771c9442ded7ffe"} Nov 29 07:30:03 crc kubenswrapper[4731]: I1129 07:30:03.003414 4731 
patch_prober.go:28] interesting pod/machine-config-daemon-rscr8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:30:03 crc kubenswrapper[4731]: I1129 07:30:03.003515 4731 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:30:03 crc kubenswrapper[4731]: I1129 07:30:03.425548 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406690-txp5p" Nov 29 07:30:03 crc kubenswrapper[4731]: I1129 07:30:03.622181 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9900c053-e3a1-43bb-a13b-5e92ba495ed8-config-volume\") pod \"9900c053-e3a1-43bb-a13b-5e92ba495ed8\" (UID: \"9900c053-e3a1-43bb-a13b-5e92ba495ed8\") " Nov 29 07:30:03 crc kubenswrapper[4731]: I1129 07:30:03.622456 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9900c053-e3a1-43bb-a13b-5e92ba495ed8-secret-volume\") pod \"9900c053-e3a1-43bb-a13b-5e92ba495ed8\" (UID: \"9900c053-e3a1-43bb-a13b-5e92ba495ed8\") " Nov 29 07:30:03 crc kubenswrapper[4731]: I1129 07:30:03.622946 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9900c053-e3a1-43bb-a13b-5e92ba495ed8-config-volume" (OuterVolumeSpecName: "config-volume") pod "9900c053-e3a1-43bb-a13b-5e92ba495ed8" (UID: "9900c053-e3a1-43bb-a13b-5e92ba495ed8"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:30:03 crc kubenswrapper[4731]: I1129 07:30:03.623688 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4bzwn\" (UniqueName: \"kubernetes.io/projected/9900c053-e3a1-43bb-a13b-5e92ba495ed8-kube-api-access-4bzwn\") pod \"9900c053-e3a1-43bb-a13b-5e92ba495ed8\" (UID: \"9900c053-e3a1-43bb-a13b-5e92ba495ed8\") " Nov 29 07:30:03 crc kubenswrapper[4731]: I1129 07:30:03.624318 4731 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9900c053-e3a1-43bb-a13b-5e92ba495ed8-config-volume\") on node \"crc\" DevicePath \"\"" Nov 29 07:30:03 crc kubenswrapper[4731]: I1129 07:30:03.629312 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9900c053-e3a1-43bb-a13b-5e92ba495ed8-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9900c053-e3a1-43bb-a13b-5e92ba495ed8" (UID: "9900c053-e3a1-43bb-a13b-5e92ba495ed8"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:30:03 crc kubenswrapper[4731]: I1129 07:30:03.629657 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9900c053-e3a1-43bb-a13b-5e92ba495ed8-kube-api-access-4bzwn" (OuterVolumeSpecName: "kube-api-access-4bzwn") pod "9900c053-e3a1-43bb-a13b-5e92ba495ed8" (UID: "9900c053-e3a1-43bb-a13b-5e92ba495ed8"). InnerVolumeSpecName "kube-api-access-4bzwn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:30:03 crc kubenswrapper[4731]: I1129 07:30:03.727199 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4bzwn\" (UniqueName: \"kubernetes.io/projected/9900c053-e3a1-43bb-a13b-5e92ba495ed8-kube-api-access-4bzwn\") on node \"crc\" DevicePath \"\"" Nov 29 07:30:03 crc kubenswrapper[4731]: I1129 07:30:03.727246 4731 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9900c053-e3a1-43bb-a13b-5e92ba495ed8-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 29 07:30:04 crc kubenswrapper[4731]: I1129 07:30:04.095346 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406690-txp5p" event={"ID":"9900c053-e3a1-43bb-a13b-5e92ba495ed8","Type":"ContainerDied","Data":"c54715c2c67f86013d021105ca98aa87d05044ba64cdbf88b4a84efb8a5bc3f8"} Nov 29 07:30:04 crc kubenswrapper[4731]: I1129 07:30:04.095402 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c54715c2c67f86013d021105ca98aa87d05044ba64cdbf88b4a84efb8a5bc3f8" Nov 29 07:30:04 crc kubenswrapper[4731]: I1129 07:30:04.095451 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406690-txp5p" Nov 29 07:30:04 crc kubenswrapper[4731]: I1129 07:30:04.459946 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 29 07:30:04 crc kubenswrapper[4731]: I1129 07:30:04.460917 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 29 07:30:04 crc kubenswrapper[4731]: I1129 07:30:04.467162 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 29 07:30:04 crc kubenswrapper[4731]: I1129 07:30:04.480139 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 29 07:30:05 crc kubenswrapper[4731]: I1129 07:30:05.106858 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 29 07:30:05 crc kubenswrapper[4731]: I1129 07:30:05.113531 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 29 07:30:14 crc kubenswrapper[4731]: I1129 07:30:14.090979 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 29 07:30:15 crc kubenswrapper[4731]: I1129 07:30:15.410508 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 29 07:30:19 crc kubenswrapper[4731]: I1129 07:30:19.204671 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="ff2928d9-150f-4305-a1bd-6a87ee7b40cc" containerName="rabbitmq" containerID="cri-o://7e2ba846ad51505dc6ba0bfc9ca7a0dc9ead93b752b9a87034b3d025201e802a" gracePeriod=604795 Nov 29 07:30:19 crc kubenswrapper[4731]: I1129 07:30:19.960070 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="d7971e0f-0e23-4782-9766-4841f04ac1e7" containerName="rabbitmq" 
containerID="cri-o://10f29ddabb4a1ac08fdc4c893d847b076f8ee7d953330eecd5af7042855d069e" gracePeriod=604796 Nov 29 07:30:24 crc kubenswrapper[4731]: I1129 07:30:24.675846 4731 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="ff2928d9-150f-4305-a1bd-6a87ee7b40cc" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.98:5671: connect: connection refused" Nov 29 07:30:24 crc kubenswrapper[4731]: I1129 07:30:24.765367 4731 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="d7971e0f-0e23-4782-9766-4841f04ac1e7" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.99:5671: connect: connection refused" Nov 29 07:30:25 crc kubenswrapper[4731]: I1129 07:30:25.928676 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 29 07:30:26 crc kubenswrapper[4731]: I1129 07:30:26.071788 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ff2928d9-150f-4305-a1bd-6a87ee7b40cc\" (UID: \"ff2928d9-150f-4305-a1bd-6a87ee7b40cc\") " Nov 29 07:30:26 crc kubenswrapper[4731]: I1129 07:30:26.071879 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ff2928d9-150f-4305-a1bd-6a87ee7b40cc-rabbitmq-tls\") pod \"ff2928d9-150f-4305-a1bd-6a87ee7b40cc\" (UID: \"ff2928d9-150f-4305-a1bd-6a87ee7b40cc\") " Nov 29 07:30:26 crc kubenswrapper[4731]: I1129 07:30:26.072009 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ff2928d9-150f-4305-a1bd-6a87ee7b40cc-config-data\") pod \"ff2928d9-150f-4305-a1bd-6a87ee7b40cc\" (UID: \"ff2928d9-150f-4305-a1bd-6a87ee7b40cc\") " Nov 29 07:30:26 crc kubenswrapper[4731]: I1129 07:30:26.072103 
4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ff2928d9-150f-4305-a1bd-6a87ee7b40cc-pod-info\") pod \"ff2928d9-150f-4305-a1bd-6a87ee7b40cc\" (UID: \"ff2928d9-150f-4305-a1bd-6a87ee7b40cc\") " Nov 29 07:30:26 crc kubenswrapper[4731]: I1129 07:30:26.072195 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ff2928d9-150f-4305-a1bd-6a87ee7b40cc-plugins-conf\") pod \"ff2928d9-150f-4305-a1bd-6a87ee7b40cc\" (UID: \"ff2928d9-150f-4305-a1bd-6a87ee7b40cc\") " Nov 29 07:30:26 crc kubenswrapper[4731]: I1129 07:30:26.072285 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ff2928d9-150f-4305-a1bd-6a87ee7b40cc-erlang-cookie-secret\") pod \"ff2928d9-150f-4305-a1bd-6a87ee7b40cc\" (UID: \"ff2928d9-150f-4305-a1bd-6a87ee7b40cc\") " Nov 29 07:30:26 crc kubenswrapper[4731]: I1129 07:30:26.072339 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ff2928d9-150f-4305-a1bd-6a87ee7b40cc-rabbitmq-erlang-cookie\") pod \"ff2928d9-150f-4305-a1bd-6a87ee7b40cc\" (UID: \"ff2928d9-150f-4305-a1bd-6a87ee7b40cc\") " Nov 29 07:30:26 crc kubenswrapper[4731]: I1129 07:30:26.072375 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ff2928d9-150f-4305-a1bd-6a87ee7b40cc-server-conf\") pod \"ff2928d9-150f-4305-a1bd-6a87ee7b40cc\" (UID: \"ff2928d9-150f-4305-a1bd-6a87ee7b40cc\") " Nov 29 07:30:26 crc kubenswrapper[4731]: I1129 07:30:26.072416 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d5b5\" (UniqueName: \"kubernetes.io/projected/ff2928d9-150f-4305-a1bd-6a87ee7b40cc-kube-api-access-2d5b5\") pod 
\"ff2928d9-150f-4305-a1bd-6a87ee7b40cc\" (UID: \"ff2928d9-150f-4305-a1bd-6a87ee7b40cc\") " Nov 29 07:30:26 crc kubenswrapper[4731]: I1129 07:30:26.072531 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ff2928d9-150f-4305-a1bd-6a87ee7b40cc-rabbitmq-confd\") pod \"ff2928d9-150f-4305-a1bd-6a87ee7b40cc\" (UID: \"ff2928d9-150f-4305-a1bd-6a87ee7b40cc\") " Nov 29 07:30:26 crc kubenswrapper[4731]: I1129 07:30:26.072579 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ff2928d9-150f-4305-a1bd-6a87ee7b40cc-rabbitmq-plugins\") pod \"ff2928d9-150f-4305-a1bd-6a87ee7b40cc\" (UID: \"ff2928d9-150f-4305-a1bd-6a87ee7b40cc\") " Nov 29 07:30:26 crc kubenswrapper[4731]: I1129 07:30:26.073518 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff2928d9-150f-4305-a1bd-6a87ee7b40cc-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "ff2928d9-150f-4305-a1bd-6a87ee7b40cc" (UID: "ff2928d9-150f-4305-a1bd-6a87ee7b40cc"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:30:26 crc kubenswrapper[4731]: I1129 07:30:26.073664 4731 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ff2928d9-150f-4305-a1bd-6a87ee7b40cc-plugins-conf\") on node \"crc\" DevicePath \"\"" Nov 29 07:30:26 crc kubenswrapper[4731]: I1129 07:30:26.074000 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff2928d9-150f-4305-a1bd-6a87ee7b40cc-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "ff2928d9-150f-4305-a1bd-6a87ee7b40cc" (UID: "ff2928d9-150f-4305-a1bd-6a87ee7b40cc"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:30:26 crc kubenswrapper[4731]: I1129 07:30:26.074078 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff2928d9-150f-4305-a1bd-6a87ee7b40cc-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "ff2928d9-150f-4305-a1bd-6a87ee7b40cc" (UID: "ff2928d9-150f-4305-a1bd-6a87ee7b40cc"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:30:26 crc kubenswrapper[4731]: I1129 07:30:26.082865 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "persistence") pod "ff2928d9-150f-4305-a1bd-6a87ee7b40cc" (UID: "ff2928d9-150f-4305-a1bd-6a87ee7b40cc"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 29 07:30:26 crc kubenswrapper[4731]: I1129 07:30:26.083345 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff2928d9-150f-4305-a1bd-6a87ee7b40cc-kube-api-access-2d5b5" (OuterVolumeSpecName: "kube-api-access-2d5b5") pod "ff2928d9-150f-4305-a1bd-6a87ee7b40cc" (UID: "ff2928d9-150f-4305-a1bd-6a87ee7b40cc"). InnerVolumeSpecName "kube-api-access-2d5b5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:30:26 crc kubenswrapper[4731]: I1129 07:30:26.083738 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff2928d9-150f-4305-a1bd-6a87ee7b40cc-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "ff2928d9-150f-4305-a1bd-6a87ee7b40cc" (UID: "ff2928d9-150f-4305-a1bd-6a87ee7b40cc"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:30:26 crc kubenswrapper[4731]: I1129 07:30:26.086315 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/ff2928d9-150f-4305-a1bd-6a87ee7b40cc-pod-info" (OuterVolumeSpecName: "pod-info") pod "ff2928d9-150f-4305-a1bd-6a87ee7b40cc" (UID: "ff2928d9-150f-4305-a1bd-6a87ee7b40cc"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 29 07:30:26 crc kubenswrapper[4731]: I1129 07:30:26.102524 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff2928d9-150f-4305-a1bd-6a87ee7b40cc-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "ff2928d9-150f-4305-a1bd-6a87ee7b40cc" (UID: "ff2928d9-150f-4305-a1bd-6a87ee7b40cc"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:30:26 crc kubenswrapper[4731]: I1129 07:30:26.117644 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff2928d9-150f-4305-a1bd-6a87ee7b40cc-config-data" (OuterVolumeSpecName: "config-data") pod "ff2928d9-150f-4305-a1bd-6a87ee7b40cc" (UID: "ff2928d9-150f-4305-a1bd-6a87ee7b40cc"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:30:26 crc kubenswrapper[4731]: I1129 07:30:26.178071 4731 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Nov 29 07:30:26 crc kubenswrapper[4731]: I1129 07:30:26.178122 4731 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ff2928d9-150f-4305-a1bd-6a87ee7b40cc-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Nov 29 07:30:26 crc kubenswrapper[4731]: I1129 07:30:26.178140 4731 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ff2928d9-150f-4305-a1bd-6a87ee7b40cc-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:30:26 crc kubenswrapper[4731]: I1129 07:30:26.178151 4731 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ff2928d9-150f-4305-a1bd-6a87ee7b40cc-pod-info\") on node \"crc\" DevicePath \"\"" Nov 29 07:30:26 crc kubenswrapper[4731]: I1129 07:30:26.178163 4731 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ff2928d9-150f-4305-a1bd-6a87ee7b40cc-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Nov 29 07:30:26 crc kubenswrapper[4731]: I1129 07:30:26.178176 4731 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ff2928d9-150f-4305-a1bd-6a87ee7b40cc-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Nov 29 07:30:26 crc kubenswrapper[4731]: I1129 07:30:26.178187 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d5b5\" (UniqueName: \"kubernetes.io/projected/ff2928d9-150f-4305-a1bd-6a87ee7b40cc-kube-api-access-2d5b5\") on node \"crc\" DevicePath \"\"" Nov 29 07:30:26 crc kubenswrapper[4731]: I1129 07:30:26.178197 
4731 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ff2928d9-150f-4305-a1bd-6a87ee7b40cc-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Nov 29 07:30:26 crc kubenswrapper[4731]: I1129 07:30:26.192503 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff2928d9-150f-4305-a1bd-6a87ee7b40cc-server-conf" (OuterVolumeSpecName: "server-conf") pod "ff2928d9-150f-4305-a1bd-6a87ee7b40cc" (UID: "ff2928d9-150f-4305-a1bd-6a87ee7b40cc"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:30:26 crc kubenswrapper[4731]: I1129 07:30:26.213694 4731 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Nov 29 07:30:26 crc kubenswrapper[4731]: I1129 07:30:26.246941 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff2928d9-150f-4305-a1bd-6a87ee7b40cc-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "ff2928d9-150f-4305-a1bd-6a87ee7b40cc" (UID: "ff2928d9-150f-4305-a1bd-6a87ee7b40cc"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:30:26 crc kubenswrapper[4731]: I1129 07:30:26.280179 4731 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Nov 29 07:30:26 crc kubenswrapper[4731]: I1129 07:30:26.280253 4731 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ff2928d9-150f-4305-a1bd-6a87ee7b40cc-server-conf\") on node \"crc\" DevicePath \"\"" Nov 29 07:30:26 crc kubenswrapper[4731]: I1129 07:30:26.280264 4731 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ff2928d9-150f-4305-a1bd-6a87ee7b40cc-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Nov 29 07:30:26 crc kubenswrapper[4731]: I1129 07:30:26.352824 4731 generic.go:334] "Generic (PLEG): container finished" podID="d7971e0f-0e23-4782-9766-4841f04ac1e7" containerID="10f29ddabb4a1ac08fdc4c893d847b076f8ee7d953330eecd5af7042855d069e" exitCode=0 Nov 29 07:30:26 crc kubenswrapper[4731]: I1129 07:30:26.352938 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d7971e0f-0e23-4782-9766-4841f04ac1e7","Type":"ContainerDied","Data":"10f29ddabb4a1ac08fdc4c893d847b076f8ee7d953330eecd5af7042855d069e"} Nov 29 07:30:26 crc kubenswrapper[4731]: I1129 07:30:26.369214 4731 generic.go:334] "Generic (PLEG): container finished" podID="ff2928d9-150f-4305-a1bd-6a87ee7b40cc" containerID="7e2ba846ad51505dc6ba0bfc9ca7a0dc9ead93b752b9a87034b3d025201e802a" exitCode=0 Nov 29 07:30:26 crc kubenswrapper[4731]: I1129 07:30:26.369276 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ff2928d9-150f-4305-a1bd-6a87ee7b40cc","Type":"ContainerDied","Data":"7e2ba846ad51505dc6ba0bfc9ca7a0dc9ead93b752b9a87034b3d025201e802a"} Nov 29 07:30:26 crc kubenswrapper[4731]: I1129 
07:30:26.369312 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ff2928d9-150f-4305-a1bd-6a87ee7b40cc","Type":"ContainerDied","Data":"d3f55ca44dad32ec6f1b7d50b6ecca8babe0c9408373014eb593770d9b3e6641"} Nov 29 07:30:26 crc kubenswrapper[4731]: I1129 07:30:26.369333 4731 scope.go:117] "RemoveContainer" containerID="7e2ba846ad51505dc6ba0bfc9ca7a0dc9ead93b752b9a87034b3d025201e802a" Nov 29 07:30:26 crc kubenswrapper[4731]: I1129 07:30:26.369602 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 29 07:30:26 crc kubenswrapper[4731]: I1129 07:30:26.467891 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 29 07:30:26 crc kubenswrapper[4731]: I1129 07:30:26.494643 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 29 07:30:26 crc kubenswrapper[4731]: I1129 07:30:26.510050 4731 scope.go:117] "RemoveContainer" containerID="0f1cca498c8ac89e448453e329b710b354c3bc57f22d4761166594662706c6f4" Nov 29 07:30:26 crc kubenswrapper[4731]: I1129 07:30:26.522234 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Nov 29 07:30:26 crc kubenswrapper[4731]: E1129 07:30:26.522936 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff2928d9-150f-4305-a1bd-6a87ee7b40cc" containerName="rabbitmq" Nov 29 07:30:26 crc kubenswrapper[4731]: I1129 07:30:26.522954 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff2928d9-150f-4305-a1bd-6a87ee7b40cc" containerName="rabbitmq" Nov 29 07:30:26 crc kubenswrapper[4731]: E1129 07:30:26.522973 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9900c053-e3a1-43bb-a13b-5e92ba495ed8" containerName="collect-profiles" Nov 29 07:30:26 crc kubenswrapper[4731]: I1129 07:30:26.522981 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="9900c053-e3a1-43bb-a13b-5e92ba495ed8" 
containerName="collect-profiles" Nov 29 07:30:26 crc kubenswrapper[4731]: E1129 07:30:26.523006 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff2928d9-150f-4305-a1bd-6a87ee7b40cc" containerName="setup-container" Nov 29 07:30:26 crc kubenswrapper[4731]: I1129 07:30:26.523017 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff2928d9-150f-4305-a1bd-6a87ee7b40cc" containerName="setup-container" Nov 29 07:30:26 crc kubenswrapper[4731]: I1129 07:30:26.523250 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff2928d9-150f-4305-a1bd-6a87ee7b40cc" containerName="rabbitmq" Nov 29 07:30:26 crc kubenswrapper[4731]: I1129 07:30:26.523274 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="9900c053-e3a1-43bb-a13b-5e92ba495ed8" containerName="collect-profiles" Nov 29 07:30:26 crc kubenswrapper[4731]: I1129 07:30:26.524789 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 29 07:30:26 crc kubenswrapper[4731]: I1129 07:30:26.550386 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Nov 29 07:30:26 crc kubenswrapper[4731]: I1129 07:30:26.550645 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Nov 29 07:30:26 crc kubenswrapper[4731]: I1129 07:30:26.550684 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Nov 29 07:30:26 crc kubenswrapper[4731]: I1129 07:30:26.550822 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Nov 29 07:30:26 crc kubenswrapper[4731]: I1129 07:30:26.550919 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-k5dcg" Nov 29 07:30:26 crc kubenswrapper[4731]: I1129 07:30:26.551084 4731 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"cert-rabbitmq-svc" Nov 29 07:30:26 crc kubenswrapper[4731]: I1129 07:30:26.551437 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Nov 29 07:30:26 crc kubenswrapper[4731]: I1129 07:30:26.563770 4731 scope.go:117] "RemoveContainer" containerID="7e2ba846ad51505dc6ba0bfc9ca7a0dc9ead93b752b9a87034b3d025201e802a" Nov 29 07:30:26 crc kubenswrapper[4731]: E1129 07:30:26.565639 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e2ba846ad51505dc6ba0bfc9ca7a0dc9ead93b752b9a87034b3d025201e802a\": container with ID starting with 7e2ba846ad51505dc6ba0bfc9ca7a0dc9ead93b752b9a87034b3d025201e802a not found: ID does not exist" containerID="7e2ba846ad51505dc6ba0bfc9ca7a0dc9ead93b752b9a87034b3d025201e802a" Nov 29 07:30:26 crc kubenswrapper[4731]: I1129 07:30:26.565693 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e2ba846ad51505dc6ba0bfc9ca7a0dc9ead93b752b9a87034b3d025201e802a"} err="failed to get container status \"7e2ba846ad51505dc6ba0bfc9ca7a0dc9ead93b752b9a87034b3d025201e802a\": rpc error: code = NotFound desc = could not find container \"7e2ba846ad51505dc6ba0bfc9ca7a0dc9ead93b752b9a87034b3d025201e802a\": container with ID starting with 7e2ba846ad51505dc6ba0bfc9ca7a0dc9ead93b752b9a87034b3d025201e802a not found: ID does not exist" Nov 29 07:30:26 crc kubenswrapper[4731]: I1129 07:30:26.565729 4731 scope.go:117] "RemoveContainer" containerID="0f1cca498c8ac89e448453e329b710b354c3bc57f22d4761166594662706c6f4" Nov 29 07:30:26 crc kubenswrapper[4731]: E1129 07:30:26.568509 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f1cca498c8ac89e448453e329b710b354c3bc57f22d4761166594662706c6f4\": container with ID starting with 0f1cca498c8ac89e448453e329b710b354c3bc57f22d4761166594662706c6f4 not found: ID does 
not exist" containerID="0f1cca498c8ac89e448453e329b710b354c3bc57f22d4761166594662706c6f4" Nov 29 07:30:26 crc kubenswrapper[4731]: I1129 07:30:26.568547 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f1cca498c8ac89e448453e329b710b354c3bc57f22d4761166594662706c6f4"} err="failed to get container status \"0f1cca498c8ac89e448453e329b710b354c3bc57f22d4761166594662706c6f4\": rpc error: code = NotFound desc = could not find container \"0f1cca498c8ac89e448453e329b710b354c3bc57f22d4761166594662706c6f4\": container with ID starting with 0f1cca498c8ac89e448453e329b710b354c3bc57f22d4761166594662706c6f4 not found: ID does not exist" Nov 29 07:30:26 crc kubenswrapper[4731]: I1129 07:30:26.580913 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 29 07:30:26 crc kubenswrapper[4731]: I1129 07:30:26.593511 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"96683d18-3f61-486f-bc69-5a253f2538cc\") " pod="openstack/rabbitmq-server-0" Nov 29 07:30:26 crc kubenswrapper[4731]: I1129 07:30:26.593626 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/96683d18-3f61-486f-bc69-5a253f2538cc-config-data\") pod \"rabbitmq-server-0\" (UID: \"96683d18-3f61-486f-bc69-5a253f2538cc\") " pod="openstack/rabbitmq-server-0" Nov 29 07:30:26 crc kubenswrapper[4731]: I1129 07:30:26.593644 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/96683d18-3f61-486f-bc69-5a253f2538cc-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"96683d18-3f61-486f-bc69-5a253f2538cc\") " pod="openstack/rabbitmq-server-0" Nov 29 07:30:26 crc 
kubenswrapper[4731]: I1129 07:30:26.593669 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/96683d18-3f61-486f-bc69-5a253f2538cc-pod-info\") pod \"rabbitmq-server-0\" (UID: \"96683d18-3f61-486f-bc69-5a253f2538cc\") " pod="openstack/rabbitmq-server-0" Nov 29 07:30:26 crc kubenswrapper[4731]: I1129 07:30:26.593716 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/96683d18-3f61-486f-bc69-5a253f2538cc-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"96683d18-3f61-486f-bc69-5a253f2538cc\") " pod="openstack/rabbitmq-server-0" Nov 29 07:30:26 crc kubenswrapper[4731]: I1129 07:30:26.593757 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/96683d18-3f61-486f-bc69-5a253f2538cc-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"96683d18-3f61-486f-bc69-5a253f2538cc\") " pod="openstack/rabbitmq-server-0" Nov 29 07:30:26 crc kubenswrapper[4731]: I1129 07:30:26.593774 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/96683d18-3f61-486f-bc69-5a253f2538cc-server-conf\") pod \"rabbitmq-server-0\" (UID: \"96683d18-3f61-486f-bc69-5a253f2538cc\") " pod="openstack/rabbitmq-server-0" Nov 29 07:30:26 crc kubenswrapper[4731]: I1129 07:30:26.593834 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/96683d18-3f61-486f-bc69-5a253f2538cc-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"96683d18-3f61-486f-bc69-5a253f2538cc\") " pod="openstack/rabbitmq-server-0" Nov 29 07:30:26 crc kubenswrapper[4731]: I1129 07:30:26.593893 4731 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/96683d18-3f61-486f-bc69-5a253f2538cc-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"96683d18-3f61-486f-bc69-5a253f2538cc\") " pod="openstack/rabbitmq-server-0" Nov 29 07:30:26 crc kubenswrapper[4731]: I1129 07:30:26.593922 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/96683d18-3f61-486f-bc69-5a253f2538cc-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"96683d18-3f61-486f-bc69-5a253f2538cc\") " pod="openstack/rabbitmq-server-0" Nov 29 07:30:26 crc kubenswrapper[4731]: I1129 07:30:26.593948 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8tj6\" (UniqueName: \"kubernetes.io/projected/96683d18-3f61-486f-bc69-5a253f2538cc-kube-api-access-p8tj6\") pod \"rabbitmq-server-0\" (UID: \"96683d18-3f61-486f-bc69-5a253f2538cc\") " pod="openstack/rabbitmq-server-0" Nov 29 07:30:26 crc kubenswrapper[4731]: I1129 07:30:26.705378 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/96683d18-3f61-486f-bc69-5a253f2538cc-config-data\") pod \"rabbitmq-server-0\" (UID: \"96683d18-3f61-486f-bc69-5a253f2538cc\") " pod="openstack/rabbitmq-server-0" Nov 29 07:30:26 crc kubenswrapper[4731]: I1129 07:30:26.705663 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/96683d18-3f61-486f-bc69-5a253f2538cc-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"96683d18-3f61-486f-bc69-5a253f2538cc\") " pod="openstack/rabbitmq-server-0" Nov 29 07:30:26 crc kubenswrapper[4731]: I1129 07:30:26.705690 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/96683d18-3f61-486f-bc69-5a253f2538cc-pod-info\") pod \"rabbitmq-server-0\" (UID: \"96683d18-3f61-486f-bc69-5a253f2538cc\") " pod="openstack/rabbitmq-server-0" Nov 29 07:30:26 crc kubenswrapper[4731]: I1129 07:30:26.705733 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/96683d18-3f61-486f-bc69-5a253f2538cc-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"96683d18-3f61-486f-bc69-5a253f2538cc\") " pod="openstack/rabbitmq-server-0" Nov 29 07:30:26 crc kubenswrapper[4731]: I1129 07:30:26.705776 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/96683d18-3f61-486f-bc69-5a253f2538cc-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"96683d18-3f61-486f-bc69-5a253f2538cc\") " pod="openstack/rabbitmq-server-0" Nov 29 07:30:26 crc kubenswrapper[4731]: I1129 07:30:26.705797 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/96683d18-3f61-486f-bc69-5a253f2538cc-server-conf\") pod \"rabbitmq-server-0\" (UID: \"96683d18-3f61-486f-bc69-5a253f2538cc\") " pod="openstack/rabbitmq-server-0" Nov 29 07:30:26 crc kubenswrapper[4731]: I1129 07:30:26.705830 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/96683d18-3f61-486f-bc69-5a253f2538cc-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"96683d18-3f61-486f-bc69-5a253f2538cc\") " pod="openstack/rabbitmq-server-0" Nov 29 07:30:26 crc kubenswrapper[4731]: I1129 07:30:26.705863 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/96683d18-3f61-486f-bc69-5a253f2538cc-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: 
\"96683d18-3f61-486f-bc69-5a253f2538cc\") " pod="openstack/rabbitmq-server-0" Nov 29 07:30:26 crc kubenswrapper[4731]: I1129 07:30:26.705884 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/96683d18-3f61-486f-bc69-5a253f2538cc-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"96683d18-3f61-486f-bc69-5a253f2538cc\") " pod="openstack/rabbitmq-server-0" Nov 29 07:30:26 crc kubenswrapper[4731]: I1129 07:30:26.705915 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8tj6\" (UniqueName: \"kubernetes.io/projected/96683d18-3f61-486f-bc69-5a253f2538cc-kube-api-access-p8tj6\") pod \"rabbitmq-server-0\" (UID: \"96683d18-3f61-486f-bc69-5a253f2538cc\") " pod="openstack/rabbitmq-server-0" Nov 29 07:30:26 crc kubenswrapper[4731]: I1129 07:30:26.705947 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"96683d18-3f61-486f-bc69-5a253f2538cc\") " pod="openstack/rabbitmq-server-0" Nov 29 07:30:26 crc kubenswrapper[4731]: I1129 07:30:26.706218 4731 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"96683d18-3f61-486f-bc69-5a253f2538cc\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/rabbitmq-server-0" Nov 29 07:30:26 crc kubenswrapper[4731]: I1129 07:30:26.706444 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/96683d18-3f61-486f-bc69-5a253f2538cc-config-data\") pod \"rabbitmq-server-0\" (UID: \"96683d18-3f61-486f-bc69-5a253f2538cc\") " pod="openstack/rabbitmq-server-0" Nov 29 07:30:26 crc kubenswrapper[4731]: I1129 07:30:26.707717 4731 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/96683d18-3f61-486f-bc69-5a253f2538cc-server-conf\") pod \"rabbitmq-server-0\" (UID: \"96683d18-3f61-486f-bc69-5a253f2538cc\") " pod="openstack/rabbitmq-server-0" Nov 29 07:30:26 crc kubenswrapper[4731]: I1129 07:30:26.709783 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/96683d18-3f61-486f-bc69-5a253f2538cc-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"96683d18-3f61-486f-bc69-5a253f2538cc\") " pod="openstack/rabbitmq-server-0" Nov 29 07:30:26 crc kubenswrapper[4731]: I1129 07:30:26.710432 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/96683d18-3f61-486f-bc69-5a253f2538cc-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"96683d18-3f61-486f-bc69-5a253f2538cc\") " pod="openstack/rabbitmq-server-0" Nov 29 07:30:26 crc kubenswrapper[4731]: I1129 07:30:26.710995 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/96683d18-3f61-486f-bc69-5a253f2538cc-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"96683d18-3f61-486f-bc69-5a253f2538cc\") " pod="openstack/rabbitmq-server-0" Nov 29 07:30:26 crc kubenswrapper[4731]: I1129 07:30:26.715322 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/96683d18-3f61-486f-bc69-5a253f2538cc-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"96683d18-3f61-486f-bc69-5a253f2538cc\") " pod="openstack/rabbitmq-server-0" Nov 29 07:30:26 crc kubenswrapper[4731]: I1129 07:30:26.717376 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/96683d18-3f61-486f-bc69-5a253f2538cc-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: 
\"96683d18-3f61-486f-bc69-5a253f2538cc\") " pod="openstack/rabbitmq-server-0" Nov 29 07:30:26 crc kubenswrapper[4731]: I1129 07:30:26.723788 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/96683d18-3f61-486f-bc69-5a253f2538cc-pod-info\") pod \"rabbitmq-server-0\" (UID: \"96683d18-3f61-486f-bc69-5a253f2538cc\") " pod="openstack/rabbitmq-server-0" Nov 29 07:30:26 crc kubenswrapper[4731]: I1129 07:30:26.733823 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8tj6\" (UniqueName: \"kubernetes.io/projected/96683d18-3f61-486f-bc69-5a253f2538cc-kube-api-access-p8tj6\") pod \"rabbitmq-server-0\" (UID: \"96683d18-3f61-486f-bc69-5a253f2538cc\") " pod="openstack/rabbitmq-server-0" Nov 29 07:30:26 crc kubenswrapper[4731]: I1129 07:30:26.743372 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/96683d18-3f61-486f-bc69-5a253f2538cc-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"96683d18-3f61-486f-bc69-5a253f2538cc\") " pod="openstack/rabbitmq-server-0" Nov 29 07:30:26 crc kubenswrapper[4731]: I1129 07:30:26.766228 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"96683d18-3f61-486f-bc69-5a253f2538cc\") " pod="openstack/rabbitmq-server-0" Nov 29 07:30:26 crc kubenswrapper[4731]: I1129 07:30:26.892202 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:30:26 crc kubenswrapper[4731]: I1129 07:30:26.913374 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 29 07:30:27 crc kubenswrapper[4731]: I1129 07:30:27.012974 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d7971e0f-0e23-4782-9766-4841f04ac1e7-rabbitmq-confd\") pod \"d7971e0f-0e23-4782-9766-4841f04ac1e7\" (UID: \"d7971e0f-0e23-4782-9766-4841f04ac1e7\") " Nov 29 07:30:27 crc kubenswrapper[4731]: I1129 07:30:27.013053 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d7971e0f-0e23-4782-9766-4841f04ac1e7-erlang-cookie-secret\") pod \"d7971e0f-0e23-4782-9766-4841f04ac1e7\" (UID: \"d7971e0f-0e23-4782-9766-4841f04ac1e7\") " Nov 29 07:30:27 crc kubenswrapper[4731]: I1129 07:30:27.013113 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d7971e0f-0e23-4782-9766-4841f04ac1e7-rabbitmq-erlang-cookie\") pod \"d7971e0f-0e23-4782-9766-4841f04ac1e7\" (UID: \"d7971e0f-0e23-4782-9766-4841f04ac1e7\") " Nov 29 07:30:27 crc kubenswrapper[4731]: I1129 07:30:27.013172 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d7971e0f-0e23-4782-9766-4841f04ac1e7-config-data\") pod \"d7971e0f-0e23-4782-9766-4841f04ac1e7\" (UID: \"d7971e0f-0e23-4782-9766-4841f04ac1e7\") " Nov 29 07:30:27 crc kubenswrapper[4731]: I1129 07:30:27.013227 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d7971e0f-0e23-4782-9766-4841f04ac1e7-pod-info\") pod \"d7971e0f-0e23-4782-9766-4841f04ac1e7\" (UID: \"d7971e0f-0e23-4782-9766-4841f04ac1e7\") " Nov 29 07:30:27 crc kubenswrapper[4731]: I1129 07:30:27.013317 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d7971e0f-0e23-4782-9766-4841f04ac1e7-rabbitmq-tls\") pod \"d7971e0f-0e23-4782-9766-4841f04ac1e7\" (UID: \"d7971e0f-0e23-4782-9766-4841f04ac1e7\") " Nov 29 07:30:27 crc kubenswrapper[4731]: I1129 07:30:27.015068 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7971e0f-0e23-4782-9766-4841f04ac1e7-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "d7971e0f-0e23-4782-9766-4841f04ac1e7" (UID: "d7971e0f-0e23-4782-9766-4841f04ac1e7"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:30:27 crc kubenswrapper[4731]: I1129 07:30:27.016717 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d7971e0f-0e23-4782-9766-4841f04ac1e7-server-conf\") pod \"d7971e0f-0e23-4782-9766-4841f04ac1e7\" (UID: \"d7971e0f-0e23-4782-9766-4841f04ac1e7\") " Nov 29 07:30:27 crc kubenswrapper[4731]: I1129 07:30:27.016845 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d7971e0f-0e23-4782-9766-4841f04ac1e7-rabbitmq-plugins\") pod \"d7971e0f-0e23-4782-9766-4841f04ac1e7\" (UID: \"d7971e0f-0e23-4782-9766-4841f04ac1e7\") " Nov 29 07:30:27 crc kubenswrapper[4731]: I1129 07:30:27.016911 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d7971e0f-0e23-4782-9766-4841f04ac1e7-plugins-conf\") pod \"d7971e0f-0e23-4782-9766-4841f04ac1e7\" (UID: \"d7971e0f-0e23-4782-9766-4841f04ac1e7\") " Nov 29 07:30:27 crc kubenswrapper[4731]: I1129 07:30:27.016940 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8b9r6\" (UniqueName: \"kubernetes.io/projected/d7971e0f-0e23-4782-9766-4841f04ac1e7-kube-api-access-8b9r6\") 
pod \"d7971e0f-0e23-4782-9766-4841f04ac1e7\" (UID: \"d7971e0f-0e23-4782-9766-4841f04ac1e7\") " Nov 29 07:30:27 crc kubenswrapper[4731]: I1129 07:30:27.017021 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"d7971e0f-0e23-4782-9766-4841f04ac1e7\" (UID: \"d7971e0f-0e23-4782-9766-4841f04ac1e7\") " Nov 29 07:30:27 crc kubenswrapper[4731]: I1129 07:30:27.018207 4731 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d7971e0f-0e23-4782-9766-4841f04ac1e7-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Nov 29 07:30:27 crc kubenswrapper[4731]: I1129 07:30:27.019158 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7971e0f-0e23-4782-9766-4841f04ac1e7-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "d7971e0f-0e23-4782-9766-4841f04ac1e7" (UID: "d7971e0f-0e23-4782-9766-4841f04ac1e7"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:30:27 crc kubenswrapper[4731]: I1129 07:30:27.019582 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7971e0f-0e23-4782-9766-4841f04ac1e7-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "d7971e0f-0e23-4782-9766-4841f04ac1e7" (UID: "d7971e0f-0e23-4782-9766-4841f04ac1e7"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:30:27 crc kubenswrapper[4731]: I1129 07:30:27.020252 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7971e0f-0e23-4782-9766-4841f04ac1e7-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "d7971e0f-0e23-4782-9766-4841f04ac1e7" (UID: "d7971e0f-0e23-4782-9766-4841f04ac1e7"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:30:27 crc kubenswrapper[4731]: I1129 07:30:27.025847 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "persistence") pod "d7971e0f-0e23-4782-9766-4841f04ac1e7" (UID: "d7971e0f-0e23-4782-9766-4841f04ac1e7"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 29 07:30:27 crc kubenswrapper[4731]: I1129 07:30:27.028521 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7971e0f-0e23-4782-9766-4841f04ac1e7-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "d7971e0f-0e23-4782-9766-4841f04ac1e7" (UID: "d7971e0f-0e23-4782-9766-4841f04ac1e7"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:30:27 crc kubenswrapper[4731]: I1129 07:30:27.028816 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7971e0f-0e23-4782-9766-4841f04ac1e7-kube-api-access-8b9r6" (OuterVolumeSpecName: "kube-api-access-8b9r6") pod "d7971e0f-0e23-4782-9766-4841f04ac1e7" (UID: "d7971e0f-0e23-4782-9766-4841f04ac1e7"). InnerVolumeSpecName "kube-api-access-8b9r6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:30:27 crc kubenswrapper[4731]: I1129 07:30:27.039683 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/d7971e0f-0e23-4782-9766-4841f04ac1e7-pod-info" (OuterVolumeSpecName: "pod-info") pod "d7971e0f-0e23-4782-9766-4841f04ac1e7" (UID: "d7971e0f-0e23-4782-9766-4841f04ac1e7"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 29 07:30:27 crc kubenswrapper[4731]: I1129 07:30:27.080644 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7971e0f-0e23-4782-9766-4841f04ac1e7-config-data" (OuterVolumeSpecName: "config-data") pod "d7971e0f-0e23-4782-9766-4841f04ac1e7" (UID: "d7971e0f-0e23-4782-9766-4841f04ac1e7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:30:27 crc kubenswrapper[4731]: I1129 07:30:27.088046 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7971e0f-0e23-4782-9766-4841f04ac1e7-server-conf" (OuterVolumeSpecName: "server-conf") pod "d7971e0f-0e23-4782-9766-4841f04ac1e7" (UID: "d7971e0f-0e23-4782-9766-4841f04ac1e7"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:30:27 crc kubenswrapper[4731]: I1129 07:30:27.122137 4731 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d7971e0f-0e23-4782-9766-4841f04ac1e7-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Nov 29 07:30:27 crc kubenswrapper[4731]: I1129 07:30:27.122172 4731 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d7971e0f-0e23-4782-9766-4841f04ac1e7-plugins-conf\") on node \"crc\" DevicePath \"\"" Nov 29 07:30:27 crc kubenswrapper[4731]: I1129 07:30:27.122182 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8b9r6\" (UniqueName: \"kubernetes.io/projected/d7971e0f-0e23-4782-9766-4841f04ac1e7-kube-api-access-8b9r6\") on node \"crc\" DevicePath \"\"" Nov 29 07:30:27 crc kubenswrapper[4731]: I1129 07:30:27.122225 4731 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Nov 
29 07:30:27 crc kubenswrapper[4731]: I1129 07:30:27.122236 4731 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d7971e0f-0e23-4782-9766-4841f04ac1e7-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Nov 29 07:30:27 crc kubenswrapper[4731]: I1129 07:30:27.122245 4731 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d7971e0f-0e23-4782-9766-4841f04ac1e7-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:30:27 crc kubenswrapper[4731]: I1129 07:30:27.122254 4731 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d7971e0f-0e23-4782-9766-4841f04ac1e7-pod-info\") on node \"crc\" DevicePath \"\"" Nov 29 07:30:27 crc kubenswrapper[4731]: I1129 07:30:27.122263 4731 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d7971e0f-0e23-4782-9766-4841f04ac1e7-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Nov 29 07:30:27 crc kubenswrapper[4731]: I1129 07:30:27.122271 4731 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d7971e0f-0e23-4782-9766-4841f04ac1e7-server-conf\") on node \"crc\" DevicePath \"\"" Nov 29 07:30:27 crc kubenswrapper[4731]: I1129 07:30:27.136173 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7971e0f-0e23-4782-9766-4841f04ac1e7-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "d7971e0f-0e23-4782-9766-4841f04ac1e7" (UID: "d7971e0f-0e23-4782-9766-4841f04ac1e7"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:30:27 crc kubenswrapper[4731]: I1129 07:30:27.164615 4731 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Nov 29 07:30:27 crc kubenswrapper[4731]: I1129 07:30:27.224133 4731 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Nov 29 07:30:27 crc kubenswrapper[4731]: I1129 07:30:27.224166 4731 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d7971e0f-0e23-4782-9766-4841f04ac1e7-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Nov 29 07:30:27 crc kubenswrapper[4731]: I1129 07:30:27.386257 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d7971e0f-0e23-4782-9766-4841f04ac1e7","Type":"ContainerDied","Data":"c33a117aee2f4db73d3dee9f25e8cc2de0898802d935f30f0f6cfc1888c9e387"} Nov 29 07:30:27 crc kubenswrapper[4731]: I1129 07:30:27.386341 4731 scope.go:117] "RemoveContainer" containerID="10f29ddabb4a1ac08fdc4c893d847b076f8ee7d953330eecd5af7042855d069e" Nov 29 07:30:27 crc kubenswrapper[4731]: I1129 07:30:27.386372 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:30:27 crc kubenswrapper[4731]: I1129 07:30:27.413014 4731 scope.go:117] "RemoveContainer" containerID="f40118db8ab07db8de5595473f72aed1dea64c65ae58bf29725a18caee3c64bc" Nov 29 07:30:27 crc kubenswrapper[4731]: I1129 07:30:27.432125 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 29 07:30:27 crc kubenswrapper[4731]: I1129 07:30:27.448135 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 29 07:30:27 crc kubenswrapper[4731]: I1129 07:30:27.465943 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 29 07:30:27 crc kubenswrapper[4731]: E1129 07:30:27.466675 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7971e0f-0e23-4782-9766-4841f04ac1e7" containerName="rabbitmq" Nov 29 07:30:27 crc kubenswrapper[4731]: I1129 07:30:27.466705 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7971e0f-0e23-4782-9766-4841f04ac1e7" containerName="rabbitmq" Nov 29 07:30:27 crc kubenswrapper[4731]: E1129 07:30:27.466760 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7971e0f-0e23-4782-9766-4841f04ac1e7" containerName="setup-container" Nov 29 07:30:27 crc kubenswrapper[4731]: I1129 07:30:27.466771 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7971e0f-0e23-4782-9766-4841f04ac1e7" containerName="setup-container" Nov 29 07:30:27 crc kubenswrapper[4731]: I1129 07:30:27.466990 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7971e0f-0e23-4782-9766-4841f04ac1e7" containerName="rabbitmq" Nov 29 07:30:27 crc kubenswrapper[4731]: I1129 07:30:27.468293 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:30:27 crc kubenswrapper[4731]: I1129 07:30:27.474590 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Nov 29 07:30:27 crc kubenswrapper[4731]: I1129 07:30:27.474954 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Nov 29 07:30:27 crc kubenswrapper[4731]: I1129 07:30:27.475173 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Nov 29 07:30:27 crc kubenswrapper[4731]: I1129 07:30:27.475361 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Nov 29 07:30:27 crc kubenswrapper[4731]: I1129 07:30:27.475551 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-qxd6z" Nov 29 07:30:27 crc kubenswrapper[4731]: I1129 07:30:27.475675 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Nov 29 07:30:27 crc kubenswrapper[4731]: I1129 07:30:27.475752 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Nov 29 07:30:27 crc kubenswrapper[4731]: I1129 07:30:27.486857 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 29 07:30:27 crc kubenswrapper[4731]: I1129 07:30:27.531049 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b079dae9-1f5d-4057-ae41-4273aaabeab8-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"b079dae9-1f5d-4057-ae41-4273aaabeab8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:30:27 crc kubenswrapper[4731]: I1129 07:30:27.531112 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"b079dae9-1f5d-4057-ae41-4273aaabeab8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:30:27 crc kubenswrapper[4731]: I1129 07:30:27.531146 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b079dae9-1f5d-4057-ae41-4273aaabeab8-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"b079dae9-1f5d-4057-ae41-4273aaabeab8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:30:27 crc kubenswrapper[4731]: I1129 07:30:27.531184 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b079dae9-1f5d-4057-ae41-4273aaabeab8-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b079dae9-1f5d-4057-ae41-4273aaabeab8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:30:27 crc kubenswrapper[4731]: I1129 07:30:27.531226 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9lrp\" (UniqueName: \"kubernetes.io/projected/b079dae9-1f5d-4057-ae41-4273aaabeab8-kube-api-access-h9lrp\") pod \"rabbitmq-cell1-server-0\" (UID: \"b079dae9-1f5d-4057-ae41-4273aaabeab8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:30:27 crc kubenswrapper[4731]: I1129 07:30:27.531263 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b079dae9-1f5d-4057-ae41-4273aaabeab8-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"b079dae9-1f5d-4057-ae41-4273aaabeab8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:30:27 crc kubenswrapper[4731]: I1129 07:30:27.531294 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"server-conf\" (UniqueName: \"kubernetes.io/configmap/b079dae9-1f5d-4057-ae41-4273aaabeab8-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b079dae9-1f5d-4057-ae41-4273aaabeab8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:30:27 crc kubenswrapper[4731]: I1129 07:30:27.531388 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b079dae9-1f5d-4057-ae41-4273aaabeab8-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"b079dae9-1f5d-4057-ae41-4273aaabeab8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:30:27 crc kubenswrapper[4731]: I1129 07:30:27.531464 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b079dae9-1f5d-4057-ae41-4273aaabeab8-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"b079dae9-1f5d-4057-ae41-4273aaabeab8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:30:27 crc kubenswrapper[4731]: I1129 07:30:27.531488 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b079dae9-1f5d-4057-ae41-4273aaabeab8-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"b079dae9-1f5d-4057-ae41-4273aaabeab8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:30:27 crc kubenswrapper[4731]: I1129 07:30:27.531518 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b079dae9-1f5d-4057-ae41-4273aaabeab8-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"b079dae9-1f5d-4057-ae41-4273aaabeab8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:30:27 crc kubenswrapper[4731]: I1129 07:30:27.541872 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 29 07:30:27 crc 
kubenswrapper[4731]: I1129 07:30:27.634284 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b079dae9-1f5d-4057-ae41-4273aaabeab8-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"b079dae9-1f5d-4057-ae41-4273aaabeab8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:30:27 crc kubenswrapper[4731]: I1129 07:30:27.635744 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b079dae9-1f5d-4057-ae41-4273aaabeab8-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"b079dae9-1f5d-4057-ae41-4273aaabeab8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:30:27 crc kubenswrapper[4731]: I1129 07:30:27.635780 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b079dae9-1f5d-4057-ae41-4273aaabeab8-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"b079dae9-1f5d-4057-ae41-4273aaabeab8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:30:27 crc kubenswrapper[4731]: I1129 07:30:27.635816 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b079dae9-1f5d-4057-ae41-4273aaabeab8-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"b079dae9-1f5d-4057-ae41-4273aaabeab8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:30:27 crc kubenswrapper[4731]: I1129 07:30:27.635863 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b079dae9-1f5d-4057-ae41-4273aaabeab8-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"b079dae9-1f5d-4057-ae41-4273aaabeab8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:30:27 crc kubenswrapper[4731]: I1129 07:30:27.635891 4731 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"b079dae9-1f5d-4057-ae41-4273aaabeab8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:30:27 crc kubenswrapper[4731]: I1129 07:30:27.635916 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b079dae9-1f5d-4057-ae41-4273aaabeab8-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"b079dae9-1f5d-4057-ae41-4273aaabeab8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:30:27 crc kubenswrapper[4731]: I1129 07:30:27.635953 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b079dae9-1f5d-4057-ae41-4273aaabeab8-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b079dae9-1f5d-4057-ae41-4273aaabeab8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:30:27 crc kubenswrapper[4731]: I1129 07:30:27.635992 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9lrp\" (UniqueName: \"kubernetes.io/projected/b079dae9-1f5d-4057-ae41-4273aaabeab8-kube-api-access-h9lrp\") pod \"rabbitmq-cell1-server-0\" (UID: \"b079dae9-1f5d-4057-ae41-4273aaabeab8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:30:27 crc kubenswrapper[4731]: I1129 07:30:27.636026 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b079dae9-1f5d-4057-ae41-4273aaabeab8-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"b079dae9-1f5d-4057-ae41-4273aaabeab8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:30:27 crc kubenswrapper[4731]: I1129 07:30:27.636056 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/b079dae9-1f5d-4057-ae41-4273aaabeab8-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b079dae9-1f5d-4057-ae41-4273aaabeab8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:30:27 crc kubenswrapper[4731]: I1129 07:30:27.637070 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b079dae9-1f5d-4057-ae41-4273aaabeab8-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"b079dae9-1f5d-4057-ae41-4273aaabeab8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:30:27 crc kubenswrapper[4731]: I1129 07:30:27.635332 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b079dae9-1f5d-4057-ae41-4273aaabeab8-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"b079dae9-1f5d-4057-ae41-4273aaabeab8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:30:27 crc kubenswrapper[4731]: I1129 07:30:27.637374 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b079dae9-1f5d-4057-ae41-4273aaabeab8-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b079dae9-1f5d-4057-ae41-4273aaabeab8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:30:27 crc kubenswrapper[4731]: I1129 07:30:27.637730 4731 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"b079dae9-1f5d-4057-ae41-4273aaabeab8\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:30:27 crc kubenswrapper[4731]: I1129 07:30:27.641549 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b079dae9-1f5d-4057-ae41-4273aaabeab8-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" 
(UID: \"b079dae9-1f5d-4057-ae41-4273aaabeab8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:30:27 crc kubenswrapper[4731]: I1129 07:30:27.642263 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b079dae9-1f5d-4057-ae41-4273aaabeab8-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"b079dae9-1f5d-4057-ae41-4273aaabeab8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:30:27 crc kubenswrapper[4731]: I1129 07:30:27.642757 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b079dae9-1f5d-4057-ae41-4273aaabeab8-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b079dae9-1f5d-4057-ae41-4273aaabeab8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:30:27 crc kubenswrapper[4731]: I1129 07:30:27.644887 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b079dae9-1f5d-4057-ae41-4273aaabeab8-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"b079dae9-1f5d-4057-ae41-4273aaabeab8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:30:27 crc kubenswrapper[4731]: I1129 07:30:27.646387 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b079dae9-1f5d-4057-ae41-4273aaabeab8-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"b079dae9-1f5d-4057-ae41-4273aaabeab8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:30:27 crc kubenswrapper[4731]: I1129 07:30:27.652168 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b079dae9-1f5d-4057-ae41-4273aaabeab8-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"b079dae9-1f5d-4057-ae41-4273aaabeab8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:30:27 crc kubenswrapper[4731]: I1129 07:30:27.659290 
4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9lrp\" (UniqueName: \"kubernetes.io/projected/b079dae9-1f5d-4057-ae41-4273aaabeab8-kube-api-access-h9lrp\") pod \"rabbitmq-cell1-server-0\" (UID: \"b079dae9-1f5d-4057-ae41-4273aaabeab8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:30:27 crc kubenswrapper[4731]: I1129 07:30:27.670687 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"b079dae9-1f5d-4057-ae41-4273aaabeab8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:30:27 crc kubenswrapper[4731]: I1129 07:30:27.792504 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:30:27 crc kubenswrapper[4731]: I1129 07:30:27.821405 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7971e0f-0e23-4782-9766-4841f04ac1e7" path="/var/lib/kubelet/pods/d7971e0f-0e23-4782-9766-4841f04ac1e7/volumes" Nov 29 07:30:27 crc kubenswrapper[4731]: I1129 07:30:27.822579 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff2928d9-150f-4305-a1bd-6a87ee7b40cc" path="/var/lib/kubelet/pods/ff2928d9-150f-4305-a1bd-6a87ee7b40cc/volumes" Nov 29 07:30:28 crc kubenswrapper[4731]: I1129 07:30:28.284025 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 29 07:30:28 crc kubenswrapper[4731]: W1129 07:30:28.303472 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb079dae9_1f5d_4057_ae41_4273aaabeab8.slice/crio-5d7a282c65e1bbccc22163d0fdf5e68699048209a7f8dd3e1a8a30ad6af58080 WatchSource:0}: Error finding container 5d7a282c65e1bbccc22163d0fdf5e68699048209a7f8dd3e1a8a30ad6af58080: Status 404 returned error can't find the container with id 
5d7a282c65e1bbccc22163d0fdf5e68699048209a7f8dd3e1a8a30ad6af58080 Nov 29 07:30:28 crc kubenswrapper[4731]: I1129 07:30:28.404006 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b079dae9-1f5d-4057-ae41-4273aaabeab8","Type":"ContainerStarted","Data":"5d7a282c65e1bbccc22163d0fdf5e68699048209a7f8dd3e1a8a30ad6af58080"} Nov 29 07:30:28 crc kubenswrapper[4731]: I1129 07:30:28.407803 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"96683d18-3f61-486f-bc69-5a253f2538cc","Type":"ContainerStarted","Data":"423845bbe4da23e6bb0a50f4949a955c760e849d7537a77251feac0b7da4a986"} Nov 29 07:30:28 crc kubenswrapper[4731]: I1129 07:30:28.835560 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-d558885bc-j4r54"] Nov 29 07:30:28 crc kubenswrapper[4731]: I1129 07:30:28.838043 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-j4r54" Nov 29 07:30:28 crc kubenswrapper[4731]: I1129 07:30:28.842323 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Nov 29 07:30:28 crc kubenswrapper[4731]: I1129 07:30:28.861150 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-j4r54"] Nov 29 07:30:28 crc kubenswrapper[4731]: I1129 07:30:28.870844 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4d914548-9ec6-4746-acec-281a93e0aa8a-dns-svc\") pod \"dnsmasq-dns-d558885bc-j4r54\" (UID: \"4d914548-9ec6-4746-acec-281a93e0aa8a\") " pod="openstack/dnsmasq-dns-d558885bc-j4r54" Nov 29 07:30:28 crc kubenswrapper[4731]: I1129 07:30:28.870914 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/4d914548-9ec6-4746-acec-281a93e0aa8a-dns-swift-storage-0\") pod \"dnsmasq-dns-d558885bc-j4r54\" (UID: \"4d914548-9ec6-4746-acec-281a93e0aa8a\") " pod="openstack/dnsmasq-dns-d558885bc-j4r54" Nov 29 07:30:28 crc kubenswrapper[4731]: I1129 07:30:28.870965 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4d914548-9ec6-4746-acec-281a93e0aa8a-ovsdbserver-sb\") pod \"dnsmasq-dns-d558885bc-j4r54\" (UID: \"4d914548-9ec6-4746-acec-281a93e0aa8a\") " pod="openstack/dnsmasq-dns-d558885bc-j4r54" Nov 29 07:30:28 crc kubenswrapper[4731]: I1129 07:30:28.871019 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4d914548-9ec6-4746-acec-281a93e0aa8a-ovsdbserver-nb\") pod \"dnsmasq-dns-d558885bc-j4r54\" (UID: \"4d914548-9ec6-4746-acec-281a93e0aa8a\") " pod="openstack/dnsmasq-dns-d558885bc-j4r54" Nov 29 07:30:28 crc kubenswrapper[4731]: I1129 07:30:28.871102 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/4d914548-9ec6-4746-acec-281a93e0aa8a-openstack-edpm-ipam\") pod \"dnsmasq-dns-d558885bc-j4r54\" (UID: \"4d914548-9ec6-4746-acec-281a93e0aa8a\") " pod="openstack/dnsmasq-dns-d558885bc-j4r54" Nov 29 07:30:28 crc kubenswrapper[4731]: I1129 07:30:28.871140 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78kft\" (UniqueName: \"kubernetes.io/projected/4d914548-9ec6-4746-acec-281a93e0aa8a-kube-api-access-78kft\") pod \"dnsmasq-dns-d558885bc-j4r54\" (UID: \"4d914548-9ec6-4746-acec-281a93e0aa8a\") " pod="openstack/dnsmasq-dns-d558885bc-j4r54" Nov 29 07:30:28 crc kubenswrapper[4731]: I1129 07:30:28.871195 4731 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d914548-9ec6-4746-acec-281a93e0aa8a-config\") pod \"dnsmasq-dns-d558885bc-j4r54\" (UID: \"4d914548-9ec6-4746-acec-281a93e0aa8a\") " pod="openstack/dnsmasq-dns-d558885bc-j4r54" Nov 29 07:30:28 crc kubenswrapper[4731]: I1129 07:30:28.973441 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4d914548-9ec6-4746-acec-281a93e0aa8a-dns-svc\") pod \"dnsmasq-dns-d558885bc-j4r54\" (UID: \"4d914548-9ec6-4746-acec-281a93e0aa8a\") " pod="openstack/dnsmasq-dns-d558885bc-j4r54" Nov 29 07:30:28 crc kubenswrapper[4731]: I1129 07:30:28.973857 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4d914548-9ec6-4746-acec-281a93e0aa8a-dns-swift-storage-0\") pod \"dnsmasq-dns-d558885bc-j4r54\" (UID: \"4d914548-9ec6-4746-acec-281a93e0aa8a\") " pod="openstack/dnsmasq-dns-d558885bc-j4r54" Nov 29 07:30:28 crc kubenswrapper[4731]: I1129 07:30:28.974009 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4d914548-9ec6-4746-acec-281a93e0aa8a-ovsdbserver-sb\") pod \"dnsmasq-dns-d558885bc-j4r54\" (UID: \"4d914548-9ec6-4746-acec-281a93e0aa8a\") " pod="openstack/dnsmasq-dns-d558885bc-j4r54" Nov 29 07:30:28 crc kubenswrapper[4731]: I1129 07:30:28.974128 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4d914548-9ec6-4746-acec-281a93e0aa8a-ovsdbserver-nb\") pod \"dnsmasq-dns-d558885bc-j4r54\" (UID: \"4d914548-9ec6-4746-acec-281a93e0aa8a\") " pod="openstack/dnsmasq-dns-d558885bc-j4r54" Nov 29 07:30:28 crc kubenswrapper[4731]: I1129 07:30:28.974294 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/4d914548-9ec6-4746-acec-281a93e0aa8a-openstack-edpm-ipam\") pod \"dnsmasq-dns-d558885bc-j4r54\" (UID: \"4d914548-9ec6-4746-acec-281a93e0aa8a\") " pod="openstack/dnsmasq-dns-d558885bc-j4r54" Nov 29 07:30:28 crc kubenswrapper[4731]: I1129 07:30:28.974403 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78kft\" (UniqueName: \"kubernetes.io/projected/4d914548-9ec6-4746-acec-281a93e0aa8a-kube-api-access-78kft\") pod \"dnsmasq-dns-d558885bc-j4r54\" (UID: \"4d914548-9ec6-4746-acec-281a93e0aa8a\") " pod="openstack/dnsmasq-dns-d558885bc-j4r54" Nov 29 07:30:28 crc kubenswrapper[4731]: I1129 07:30:28.974475 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4d914548-9ec6-4746-acec-281a93e0aa8a-dns-svc\") pod \"dnsmasq-dns-d558885bc-j4r54\" (UID: \"4d914548-9ec6-4746-acec-281a93e0aa8a\") " pod="openstack/dnsmasq-dns-d558885bc-j4r54" Nov 29 07:30:28 crc kubenswrapper[4731]: I1129 07:30:28.974794 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d914548-9ec6-4746-acec-281a93e0aa8a-config\") pod \"dnsmasq-dns-d558885bc-j4r54\" (UID: \"4d914548-9ec6-4746-acec-281a93e0aa8a\") " pod="openstack/dnsmasq-dns-d558885bc-j4r54" Nov 29 07:30:28 crc kubenswrapper[4731]: I1129 07:30:28.974805 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4d914548-9ec6-4746-acec-281a93e0aa8a-dns-swift-storage-0\") pod \"dnsmasq-dns-d558885bc-j4r54\" (UID: \"4d914548-9ec6-4746-acec-281a93e0aa8a\") " pod="openstack/dnsmasq-dns-d558885bc-j4r54" Nov 29 07:30:28 crc kubenswrapper[4731]: I1129 07:30:28.975176 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/4d914548-9ec6-4746-acec-281a93e0aa8a-ovsdbserver-nb\") pod \"dnsmasq-dns-d558885bc-j4r54\" (UID: \"4d914548-9ec6-4746-acec-281a93e0aa8a\") " pod="openstack/dnsmasq-dns-d558885bc-j4r54" Nov 29 07:30:28 crc kubenswrapper[4731]: I1129 07:30:28.975580 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/4d914548-9ec6-4746-acec-281a93e0aa8a-openstack-edpm-ipam\") pod \"dnsmasq-dns-d558885bc-j4r54\" (UID: \"4d914548-9ec6-4746-acec-281a93e0aa8a\") " pod="openstack/dnsmasq-dns-d558885bc-j4r54" Nov 29 07:30:28 crc kubenswrapper[4731]: I1129 07:30:28.975770 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4d914548-9ec6-4746-acec-281a93e0aa8a-ovsdbserver-sb\") pod \"dnsmasq-dns-d558885bc-j4r54\" (UID: \"4d914548-9ec6-4746-acec-281a93e0aa8a\") " pod="openstack/dnsmasq-dns-d558885bc-j4r54" Nov 29 07:30:28 crc kubenswrapper[4731]: I1129 07:30:28.975779 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d914548-9ec6-4746-acec-281a93e0aa8a-config\") pod \"dnsmasq-dns-d558885bc-j4r54\" (UID: \"4d914548-9ec6-4746-acec-281a93e0aa8a\") " pod="openstack/dnsmasq-dns-d558885bc-j4r54" Nov 29 07:30:28 crc kubenswrapper[4731]: I1129 07:30:28.996058 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78kft\" (UniqueName: \"kubernetes.io/projected/4d914548-9ec6-4746-acec-281a93e0aa8a-kube-api-access-78kft\") pod \"dnsmasq-dns-d558885bc-j4r54\" (UID: \"4d914548-9ec6-4746-acec-281a93e0aa8a\") " pod="openstack/dnsmasq-dns-d558885bc-j4r54" Nov 29 07:30:29 crc kubenswrapper[4731]: I1129 07:30:29.162310 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-j4r54" Nov 29 07:30:29 crc kubenswrapper[4731]: I1129 07:30:29.706155 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-j4r54"] Nov 29 07:30:30 crc kubenswrapper[4731]: I1129 07:30:30.437853 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b079dae9-1f5d-4057-ae41-4273aaabeab8","Type":"ContainerStarted","Data":"5ca84dfe602693f6e1b67640e482479f357375b13ad2b881d158c5033ec76464"} Nov 29 07:30:30 crc kubenswrapper[4731]: I1129 07:30:30.443495 4731 generic.go:334] "Generic (PLEG): container finished" podID="4d914548-9ec6-4746-acec-281a93e0aa8a" containerID="f1976ac124aab8ff98af3746c74e2210df86174412c9eababd47020853074c1c" exitCode=0 Nov 29 07:30:30 crc kubenswrapper[4731]: I1129 07:30:30.443615 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-j4r54" event={"ID":"4d914548-9ec6-4746-acec-281a93e0aa8a","Type":"ContainerDied","Data":"f1976ac124aab8ff98af3746c74e2210df86174412c9eababd47020853074c1c"} Nov 29 07:30:30 crc kubenswrapper[4731]: I1129 07:30:30.443643 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-j4r54" event={"ID":"4d914548-9ec6-4746-acec-281a93e0aa8a","Type":"ContainerStarted","Data":"63ce8988387eb1cc1a46ff45e4a02f071b869961ca5cb107f19b0fb1046487e5"} Nov 29 07:30:30 crc kubenswrapper[4731]: I1129 07:30:30.453193 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"96683d18-3f61-486f-bc69-5a253f2538cc","Type":"ContainerStarted","Data":"f31a8020d3923d0c9fe18a1e20ab766506366c6916eca2501590190d4095d48b"} Nov 29 07:30:31 crc kubenswrapper[4731]: I1129 07:30:31.470794 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-j4r54" 
event={"ID":"4d914548-9ec6-4746-acec-281a93e0aa8a","Type":"ContainerStarted","Data":"c657aeab30fbcbd593168c578226b5f8e76714acffbdee77711035b8c805c8b9"} Nov 29 07:30:31 crc kubenswrapper[4731]: I1129 07:30:31.519101 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-d558885bc-j4r54" podStartSLOduration=3.519070832 podStartE2EDuration="3.519070832s" podCreationTimestamp="2025-11-29 07:30:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:30:31.506248875 +0000 UTC m=+1470.396609988" watchObservedRunningTime="2025-11-29 07:30:31.519070832 +0000 UTC m=+1470.409431935" Nov 29 07:30:32 crc kubenswrapper[4731]: I1129 07:30:32.503457 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-d558885bc-j4r54" Nov 29 07:30:33 crc kubenswrapper[4731]: I1129 07:30:33.003290 4731 patch_prober.go:28] interesting pod/machine-config-daemon-rscr8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:30:33 crc kubenswrapper[4731]: I1129 07:30:33.003392 4731 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:30:39 crc kubenswrapper[4731]: I1129 07:30:39.163778 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-d558885bc-j4r54" Nov 29 07:30:39 crc kubenswrapper[4731]: I1129 07:30:39.255774 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-wspqf"] Nov 29 
07:30:39 crc kubenswrapper[4731]: I1129 07:30:39.256280 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-cd5cbd7b9-wspqf" podUID="e5afc5eb-08df-4f82-b357-f1672ff71eaa" containerName="dnsmasq-dns" containerID="cri-o://bb2af2da61b08093398ffb704b8165510e5fcde4d9062b23da6917f81738c2c6" gracePeriod=10 Nov 29 07:30:39 crc kubenswrapper[4731]: I1129 07:30:39.428787 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78c64bc9c5-k72xg"] Nov 29 07:30:39 crc kubenswrapper[4731]: I1129 07:30:39.431534 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78c64bc9c5-k72xg" Nov 29 07:30:39 crc kubenswrapper[4731]: I1129 07:30:39.445433 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78c64bc9c5-k72xg"] Nov 29 07:30:39 crc kubenswrapper[4731]: I1129 07:30:39.539599 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/94d1ff36-d633-4055-be35-a5c572c64f68-ovsdbserver-nb\") pod \"dnsmasq-dns-78c64bc9c5-k72xg\" (UID: \"94d1ff36-d633-4055-be35-a5c572c64f68\") " pod="openstack/dnsmasq-dns-78c64bc9c5-k72xg" Nov 29 07:30:39 crc kubenswrapper[4731]: I1129 07:30:39.539712 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94d1ff36-d633-4055-be35-a5c572c64f68-config\") pod \"dnsmasq-dns-78c64bc9c5-k72xg\" (UID: \"94d1ff36-d633-4055-be35-a5c572c64f68\") " pod="openstack/dnsmasq-dns-78c64bc9c5-k72xg" Nov 29 07:30:39 crc kubenswrapper[4731]: I1129 07:30:39.539775 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mz5ml\" (UniqueName: \"kubernetes.io/projected/94d1ff36-d633-4055-be35-a5c572c64f68-kube-api-access-mz5ml\") pod \"dnsmasq-dns-78c64bc9c5-k72xg\" (UID: 
\"94d1ff36-d633-4055-be35-a5c572c64f68\") " pod="openstack/dnsmasq-dns-78c64bc9c5-k72xg" Nov 29 07:30:39 crc kubenswrapper[4731]: I1129 07:30:39.539892 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/94d1ff36-d633-4055-be35-a5c572c64f68-openstack-edpm-ipam\") pod \"dnsmasq-dns-78c64bc9c5-k72xg\" (UID: \"94d1ff36-d633-4055-be35-a5c572c64f68\") " pod="openstack/dnsmasq-dns-78c64bc9c5-k72xg" Nov 29 07:30:39 crc kubenswrapper[4731]: I1129 07:30:39.539954 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/94d1ff36-d633-4055-be35-a5c572c64f68-dns-svc\") pod \"dnsmasq-dns-78c64bc9c5-k72xg\" (UID: \"94d1ff36-d633-4055-be35-a5c572c64f68\") " pod="openstack/dnsmasq-dns-78c64bc9c5-k72xg" Nov 29 07:30:39 crc kubenswrapper[4731]: I1129 07:30:39.540006 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/94d1ff36-d633-4055-be35-a5c572c64f68-dns-swift-storage-0\") pod \"dnsmasq-dns-78c64bc9c5-k72xg\" (UID: \"94d1ff36-d633-4055-be35-a5c572c64f68\") " pod="openstack/dnsmasq-dns-78c64bc9c5-k72xg" Nov 29 07:30:39 crc kubenswrapper[4731]: I1129 07:30:39.540025 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/94d1ff36-d633-4055-be35-a5c572c64f68-ovsdbserver-sb\") pod \"dnsmasq-dns-78c64bc9c5-k72xg\" (UID: \"94d1ff36-d633-4055-be35-a5c572c64f68\") " pod="openstack/dnsmasq-dns-78c64bc9c5-k72xg" Nov 29 07:30:39 crc kubenswrapper[4731]: I1129 07:30:39.579272 4731 generic.go:334] "Generic (PLEG): container finished" podID="e5afc5eb-08df-4f82-b357-f1672ff71eaa" containerID="bb2af2da61b08093398ffb704b8165510e5fcde4d9062b23da6917f81738c2c6" exitCode=0 Nov 29 07:30:39 
crc kubenswrapper[4731]: I1129 07:30:39.579338 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-wspqf" event={"ID":"e5afc5eb-08df-4f82-b357-f1672ff71eaa","Type":"ContainerDied","Data":"bb2af2da61b08093398ffb704b8165510e5fcde4d9062b23da6917f81738c2c6"} Nov 29 07:30:39 crc kubenswrapper[4731]: I1129 07:30:39.642101 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/94d1ff36-d633-4055-be35-a5c572c64f68-dns-swift-storage-0\") pod \"dnsmasq-dns-78c64bc9c5-k72xg\" (UID: \"94d1ff36-d633-4055-be35-a5c572c64f68\") " pod="openstack/dnsmasq-dns-78c64bc9c5-k72xg" Nov 29 07:30:39 crc kubenswrapper[4731]: I1129 07:30:39.642186 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/94d1ff36-d633-4055-be35-a5c572c64f68-ovsdbserver-sb\") pod \"dnsmasq-dns-78c64bc9c5-k72xg\" (UID: \"94d1ff36-d633-4055-be35-a5c572c64f68\") " pod="openstack/dnsmasq-dns-78c64bc9c5-k72xg" Nov 29 07:30:39 crc kubenswrapper[4731]: I1129 07:30:39.642255 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/94d1ff36-d633-4055-be35-a5c572c64f68-ovsdbserver-nb\") pod \"dnsmasq-dns-78c64bc9c5-k72xg\" (UID: \"94d1ff36-d633-4055-be35-a5c572c64f68\") " pod="openstack/dnsmasq-dns-78c64bc9c5-k72xg" Nov 29 07:30:39 crc kubenswrapper[4731]: I1129 07:30:39.642356 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94d1ff36-d633-4055-be35-a5c572c64f68-config\") pod \"dnsmasq-dns-78c64bc9c5-k72xg\" (UID: \"94d1ff36-d633-4055-be35-a5c572c64f68\") " pod="openstack/dnsmasq-dns-78c64bc9c5-k72xg" Nov 29 07:30:39 crc kubenswrapper[4731]: I1129 07:30:39.642413 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-mz5ml\" (UniqueName: \"kubernetes.io/projected/94d1ff36-d633-4055-be35-a5c572c64f68-kube-api-access-mz5ml\") pod \"dnsmasq-dns-78c64bc9c5-k72xg\" (UID: \"94d1ff36-d633-4055-be35-a5c572c64f68\") " pod="openstack/dnsmasq-dns-78c64bc9c5-k72xg" Nov 29 07:30:39 crc kubenswrapper[4731]: I1129 07:30:39.642545 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/94d1ff36-d633-4055-be35-a5c572c64f68-openstack-edpm-ipam\") pod \"dnsmasq-dns-78c64bc9c5-k72xg\" (UID: \"94d1ff36-d633-4055-be35-a5c572c64f68\") " pod="openstack/dnsmasq-dns-78c64bc9c5-k72xg" Nov 29 07:30:39 crc kubenswrapper[4731]: I1129 07:30:39.642635 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/94d1ff36-d633-4055-be35-a5c572c64f68-dns-svc\") pod \"dnsmasq-dns-78c64bc9c5-k72xg\" (UID: \"94d1ff36-d633-4055-be35-a5c572c64f68\") " pod="openstack/dnsmasq-dns-78c64bc9c5-k72xg" Nov 29 07:30:39 crc kubenswrapper[4731]: I1129 07:30:39.643897 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/94d1ff36-d633-4055-be35-a5c572c64f68-dns-svc\") pod \"dnsmasq-dns-78c64bc9c5-k72xg\" (UID: \"94d1ff36-d633-4055-be35-a5c572c64f68\") " pod="openstack/dnsmasq-dns-78c64bc9c5-k72xg" Nov 29 07:30:39 crc kubenswrapper[4731]: I1129 07:30:39.644471 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/94d1ff36-d633-4055-be35-a5c572c64f68-dns-swift-storage-0\") pod \"dnsmasq-dns-78c64bc9c5-k72xg\" (UID: \"94d1ff36-d633-4055-be35-a5c572c64f68\") " pod="openstack/dnsmasq-dns-78c64bc9c5-k72xg" Nov 29 07:30:39 crc kubenswrapper[4731]: I1129 07:30:39.645138 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/94d1ff36-d633-4055-be35-a5c572c64f68-ovsdbserver-sb\") pod \"dnsmasq-dns-78c64bc9c5-k72xg\" (UID: \"94d1ff36-d633-4055-be35-a5c572c64f68\") " pod="openstack/dnsmasq-dns-78c64bc9c5-k72xg" Nov 29 07:30:39 crc kubenswrapper[4731]: I1129 07:30:39.645264 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/94d1ff36-d633-4055-be35-a5c572c64f68-ovsdbserver-nb\") pod \"dnsmasq-dns-78c64bc9c5-k72xg\" (UID: \"94d1ff36-d633-4055-be35-a5c572c64f68\") " pod="openstack/dnsmasq-dns-78c64bc9c5-k72xg" Nov 29 07:30:39 crc kubenswrapper[4731]: I1129 07:30:39.645315 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/94d1ff36-d633-4055-be35-a5c572c64f68-openstack-edpm-ipam\") pod \"dnsmasq-dns-78c64bc9c5-k72xg\" (UID: \"94d1ff36-d633-4055-be35-a5c572c64f68\") " pod="openstack/dnsmasq-dns-78c64bc9c5-k72xg" Nov 29 07:30:39 crc kubenswrapper[4731]: I1129 07:30:39.645790 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94d1ff36-d633-4055-be35-a5c572c64f68-config\") pod \"dnsmasq-dns-78c64bc9c5-k72xg\" (UID: \"94d1ff36-d633-4055-be35-a5c572c64f68\") " pod="openstack/dnsmasq-dns-78c64bc9c5-k72xg" Nov 29 07:30:39 crc kubenswrapper[4731]: I1129 07:30:39.677897 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mz5ml\" (UniqueName: \"kubernetes.io/projected/94d1ff36-d633-4055-be35-a5c572c64f68-kube-api-access-mz5ml\") pod \"dnsmasq-dns-78c64bc9c5-k72xg\" (UID: \"94d1ff36-d633-4055-be35-a5c572c64f68\") " pod="openstack/dnsmasq-dns-78c64bc9c5-k72xg" Nov 29 07:30:39 crc kubenswrapper[4731]: I1129 07:30:39.792637 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78c64bc9c5-k72xg" Nov 29 07:30:40 crc kubenswrapper[4731]: I1129 07:30:40.284544 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78c64bc9c5-k72xg"] Nov 29 07:30:40 crc kubenswrapper[4731]: I1129 07:30:40.324072 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-wspqf" Nov 29 07:30:40 crc kubenswrapper[4731]: I1129 07:30:40.471836 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e5afc5eb-08df-4f82-b357-f1672ff71eaa-dns-swift-storage-0\") pod \"e5afc5eb-08df-4f82-b357-f1672ff71eaa\" (UID: \"e5afc5eb-08df-4f82-b357-f1672ff71eaa\") " Nov 29 07:30:40 crc kubenswrapper[4731]: I1129 07:30:40.472463 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e5afc5eb-08df-4f82-b357-f1672ff71eaa-ovsdbserver-sb\") pod \"e5afc5eb-08df-4f82-b357-f1672ff71eaa\" (UID: \"e5afc5eb-08df-4f82-b357-f1672ff71eaa\") " Nov 29 07:30:40 crc kubenswrapper[4731]: I1129 07:30:40.472496 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dq2xw\" (UniqueName: \"kubernetes.io/projected/e5afc5eb-08df-4f82-b357-f1672ff71eaa-kube-api-access-dq2xw\") pod \"e5afc5eb-08df-4f82-b357-f1672ff71eaa\" (UID: \"e5afc5eb-08df-4f82-b357-f1672ff71eaa\") " Nov 29 07:30:40 crc kubenswrapper[4731]: I1129 07:30:40.472538 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5afc5eb-08df-4f82-b357-f1672ff71eaa-config\") pod \"e5afc5eb-08df-4f82-b357-f1672ff71eaa\" (UID: \"e5afc5eb-08df-4f82-b357-f1672ff71eaa\") " Nov 29 07:30:40 crc kubenswrapper[4731]: I1129 07:30:40.472619 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e5afc5eb-08df-4f82-b357-f1672ff71eaa-ovsdbserver-nb\") pod \"e5afc5eb-08df-4f82-b357-f1672ff71eaa\" (UID: \"e5afc5eb-08df-4f82-b357-f1672ff71eaa\") " Nov 29 07:30:40 crc kubenswrapper[4731]: I1129 07:30:40.472768 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e5afc5eb-08df-4f82-b357-f1672ff71eaa-dns-svc\") pod \"e5afc5eb-08df-4f82-b357-f1672ff71eaa\" (UID: \"e5afc5eb-08df-4f82-b357-f1672ff71eaa\") " Nov 29 07:30:40 crc kubenswrapper[4731]: I1129 07:30:40.482388 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5afc5eb-08df-4f82-b357-f1672ff71eaa-kube-api-access-dq2xw" (OuterVolumeSpecName: "kube-api-access-dq2xw") pod "e5afc5eb-08df-4f82-b357-f1672ff71eaa" (UID: "e5afc5eb-08df-4f82-b357-f1672ff71eaa"). InnerVolumeSpecName "kube-api-access-dq2xw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:30:40 crc kubenswrapper[4731]: I1129 07:30:40.535248 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5afc5eb-08df-4f82-b357-f1672ff71eaa-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e5afc5eb-08df-4f82-b357-f1672ff71eaa" (UID: "e5afc5eb-08df-4f82-b357-f1672ff71eaa"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:30:40 crc kubenswrapper[4731]: I1129 07:30:40.540499 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5afc5eb-08df-4f82-b357-f1672ff71eaa-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e5afc5eb-08df-4f82-b357-f1672ff71eaa" (UID: "e5afc5eb-08df-4f82-b357-f1672ff71eaa"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:30:40 crc kubenswrapper[4731]: I1129 07:30:40.548107 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5afc5eb-08df-4f82-b357-f1672ff71eaa-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e5afc5eb-08df-4f82-b357-f1672ff71eaa" (UID: "e5afc5eb-08df-4f82-b357-f1672ff71eaa"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:30:40 crc kubenswrapper[4731]: I1129 07:30:40.550707 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5afc5eb-08df-4f82-b357-f1672ff71eaa-config" (OuterVolumeSpecName: "config") pod "e5afc5eb-08df-4f82-b357-f1672ff71eaa" (UID: "e5afc5eb-08df-4f82-b357-f1672ff71eaa"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:30:40 crc kubenswrapper[4731]: I1129 07:30:40.552730 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5afc5eb-08df-4f82-b357-f1672ff71eaa-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e5afc5eb-08df-4f82-b357-f1672ff71eaa" (UID: "e5afc5eb-08df-4f82-b357-f1672ff71eaa"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:30:40 crc kubenswrapper[4731]: I1129 07:30:40.579117 4731 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e5afc5eb-08df-4f82-b357-f1672ff71eaa-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 29 07:30:40 crc kubenswrapper[4731]: I1129 07:30:40.579172 4731 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e5afc5eb-08df-4f82-b357-f1672ff71eaa-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 29 07:30:40 crc kubenswrapper[4731]: I1129 07:30:40.579188 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dq2xw\" (UniqueName: \"kubernetes.io/projected/e5afc5eb-08df-4f82-b357-f1672ff71eaa-kube-api-access-dq2xw\") on node \"crc\" DevicePath \"\"" Nov 29 07:30:40 crc kubenswrapper[4731]: I1129 07:30:40.579213 4731 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5afc5eb-08df-4f82-b357-f1672ff71eaa-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:30:40 crc kubenswrapper[4731]: I1129 07:30:40.579228 4731 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e5afc5eb-08df-4f82-b357-f1672ff71eaa-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 29 07:30:40 crc kubenswrapper[4731]: I1129 07:30:40.579241 4731 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e5afc5eb-08df-4f82-b357-f1672ff71eaa-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 29 07:30:40 crc kubenswrapper[4731]: I1129 07:30:40.611801 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-wspqf" Nov 29 07:30:40 crc kubenswrapper[4731]: I1129 07:30:40.611821 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-wspqf" event={"ID":"e5afc5eb-08df-4f82-b357-f1672ff71eaa","Type":"ContainerDied","Data":"2137ecf54d25de4cb61cfdc8925e4dc72b03ee871c4aa170890aa3db752a6de7"} Nov 29 07:30:40 crc kubenswrapper[4731]: I1129 07:30:40.614082 4731 scope.go:117] "RemoveContainer" containerID="bb2af2da61b08093398ffb704b8165510e5fcde4d9062b23da6917f81738c2c6" Nov 29 07:30:40 crc kubenswrapper[4731]: I1129 07:30:40.617030 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78c64bc9c5-k72xg" event={"ID":"94d1ff36-d633-4055-be35-a5c572c64f68","Type":"ContainerStarted","Data":"ae5384d8cbafacb2a2498ba302c5871cdcb9ade010342263e190719545c20d88"} Nov 29 07:30:40 crc kubenswrapper[4731]: I1129 07:30:40.701878 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-wspqf"] Nov 29 07:30:40 crc kubenswrapper[4731]: I1129 07:30:40.702815 4731 scope.go:117] "RemoveContainer" containerID="a9749adf34e60a9a5b473b8cae55bcb9c53e1e9218130e6a72abcc781a667185" Nov 29 07:30:40 crc kubenswrapper[4731]: I1129 07:30:40.712438 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-wspqf"] Nov 29 07:30:41 crc kubenswrapper[4731]: I1129 07:30:41.629087 4731 generic.go:334] "Generic (PLEG): container finished" podID="94d1ff36-d633-4055-be35-a5c572c64f68" containerID="1503cf92c358d2e92dedb437470578ea48968e976350fec3de28eae0d8e4307d" exitCode=0 Nov 29 07:30:41 crc kubenswrapper[4731]: I1129 07:30:41.629191 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78c64bc9c5-k72xg" event={"ID":"94d1ff36-d633-4055-be35-a5c572c64f68","Type":"ContainerDied","Data":"1503cf92c358d2e92dedb437470578ea48968e976350fec3de28eae0d8e4307d"} Nov 29 07:30:41 crc kubenswrapper[4731]: I1129 
07:30:41.836085 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5afc5eb-08df-4f82-b357-f1672ff71eaa" path="/var/lib/kubelet/pods/e5afc5eb-08df-4f82-b357-f1672ff71eaa/volumes" Nov 29 07:30:42 crc kubenswrapper[4731]: I1129 07:30:42.647861 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78c64bc9c5-k72xg" event={"ID":"94d1ff36-d633-4055-be35-a5c572c64f68","Type":"ContainerStarted","Data":"74366b7439769877c3867f11dbc7299168afff4e92256fb2cac307dc4140aee6"} Nov 29 07:30:42 crc kubenswrapper[4731]: I1129 07:30:42.648946 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-78c64bc9c5-k72xg" Nov 29 07:30:42 crc kubenswrapper[4731]: I1129 07:30:42.689252 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-78c64bc9c5-k72xg" podStartSLOduration=3.689223458 podStartE2EDuration="3.689223458s" podCreationTimestamp="2025-11-29 07:30:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:30:42.676910266 +0000 UTC m=+1481.567271369" watchObservedRunningTime="2025-11-29 07:30:42.689223458 +0000 UTC m=+1481.579584591" Nov 29 07:30:49 crc kubenswrapper[4731]: I1129 07:30:49.795224 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-78c64bc9c5-k72xg" Nov 29 07:30:49 crc kubenswrapper[4731]: I1129 07:30:49.891922 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-j4r54"] Nov 29 07:30:49 crc kubenswrapper[4731]: I1129 07:30:49.892277 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-d558885bc-j4r54" podUID="4d914548-9ec6-4746-acec-281a93e0aa8a" containerName="dnsmasq-dns" containerID="cri-o://c657aeab30fbcbd593168c578226b5f8e76714acffbdee77711035b8c805c8b9" gracePeriod=10 Nov 29 07:30:50 crc 
kubenswrapper[4731]: I1129 07:30:50.409911 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-j4r54" Nov 29 07:30:50 crc kubenswrapper[4731]: I1129 07:30:50.524707 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4d914548-9ec6-4746-acec-281a93e0aa8a-ovsdbserver-sb\") pod \"4d914548-9ec6-4746-acec-281a93e0aa8a\" (UID: \"4d914548-9ec6-4746-acec-281a93e0aa8a\") " Nov 29 07:30:50 crc kubenswrapper[4731]: I1129 07:30:50.524894 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4d914548-9ec6-4746-acec-281a93e0aa8a-ovsdbserver-nb\") pod \"4d914548-9ec6-4746-acec-281a93e0aa8a\" (UID: \"4d914548-9ec6-4746-acec-281a93e0aa8a\") " Nov 29 07:30:50 crc kubenswrapper[4731]: I1129 07:30:50.524940 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4d914548-9ec6-4746-acec-281a93e0aa8a-dns-swift-storage-0\") pod \"4d914548-9ec6-4746-acec-281a93e0aa8a\" (UID: \"4d914548-9ec6-4746-acec-281a93e0aa8a\") " Nov 29 07:30:50 crc kubenswrapper[4731]: I1129 07:30:50.524987 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/4d914548-9ec6-4746-acec-281a93e0aa8a-openstack-edpm-ipam\") pod \"4d914548-9ec6-4746-acec-281a93e0aa8a\" (UID: \"4d914548-9ec6-4746-acec-281a93e0aa8a\") " Nov 29 07:30:50 crc kubenswrapper[4731]: I1129 07:30:50.525027 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d914548-9ec6-4746-acec-281a93e0aa8a-config\") pod \"4d914548-9ec6-4746-acec-281a93e0aa8a\" (UID: \"4d914548-9ec6-4746-acec-281a93e0aa8a\") " Nov 29 07:30:50 crc kubenswrapper[4731]: I1129 
07:30:50.525374 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-78kft\" (UniqueName: \"kubernetes.io/projected/4d914548-9ec6-4746-acec-281a93e0aa8a-kube-api-access-78kft\") pod \"4d914548-9ec6-4746-acec-281a93e0aa8a\" (UID: \"4d914548-9ec6-4746-acec-281a93e0aa8a\") " Nov 29 07:30:50 crc kubenswrapper[4731]: I1129 07:30:50.525410 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4d914548-9ec6-4746-acec-281a93e0aa8a-dns-svc\") pod \"4d914548-9ec6-4746-acec-281a93e0aa8a\" (UID: \"4d914548-9ec6-4746-acec-281a93e0aa8a\") " Nov 29 07:30:50 crc kubenswrapper[4731]: I1129 07:30:50.533643 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d914548-9ec6-4746-acec-281a93e0aa8a-kube-api-access-78kft" (OuterVolumeSpecName: "kube-api-access-78kft") pod "4d914548-9ec6-4746-acec-281a93e0aa8a" (UID: "4d914548-9ec6-4746-acec-281a93e0aa8a"). InnerVolumeSpecName "kube-api-access-78kft". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:30:50 crc kubenswrapper[4731]: I1129 07:30:50.593910 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d914548-9ec6-4746-acec-281a93e0aa8a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4d914548-9ec6-4746-acec-281a93e0aa8a" (UID: "4d914548-9ec6-4746-acec-281a93e0aa8a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:30:50 crc kubenswrapper[4731]: I1129 07:30:50.595147 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d914548-9ec6-4746-acec-281a93e0aa8a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4d914548-9ec6-4746-acec-281a93e0aa8a" (UID: "4d914548-9ec6-4746-acec-281a93e0aa8a"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:30:50 crc kubenswrapper[4731]: I1129 07:30:50.599040 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d914548-9ec6-4746-acec-281a93e0aa8a-config" (OuterVolumeSpecName: "config") pod "4d914548-9ec6-4746-acec-281a93e0aa8a" (UID: "4d914548-9ec6-4746-acec-281a93e0aa8a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:30:50 crc kubenswrapper[4731]: I1129 07:30:50.599589 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d914548-9ec6-4746-acec-281a93e0aa8a-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "4d914548-9ec6-4746-acec-281a93e0aa8a" (UID: "4d914548-9ec6-4746-acec-281a93e0aa8a"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:30:50 crc kubenswrapper[4731]: I1129 07:30:50.603904 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d914548-9ec6-4746-acec-281a93e0aa8a-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "4d914548-9ec6-4746-acec-281a93e0aa8a" (UID: "4d914548-9ec6-4746-acec-281a93e0aa8a"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:30:50 crc kubenswrapper[4731]: I1129 07:30:50.606770 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d914548-9ec6-4746-acec-281a93e0aa8a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4d914548-9ec6-4746-acec-281a93e0aa8a" (UID: "4d914548-9ec6-4746-acec-281a93e0aa8a"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:30:50 crc kubenswrapper[4731]: I1129 07:30:50.628818 4731 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4d914548-9ec6-4746-acec-281a93e0aa8a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 29 07:30:50 crc kubenswrapper[4731]: I1129 07:30:50.628863 4731 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4d914548-9ec6-4746-acec-281a93e0aa8a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 29 07:30:50 crc kubenswrapper[4731]: I1129 07:30:50.628879 4731 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4d914548-9ec6-4746-acec-281a93e0aa8a-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 29 07:30:50 crc kubenswrapper[4731]: I1129 07:30:50.628892 4731 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/4d914548-9ec6-4746-acec-281a93e0aa8a-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Nov 29 07:30:50 crc kubenswrapper[4731]: I1129 07:30:50.628905 4731 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d914548-9ec6-4746-acec-281a93e0aa8a-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:30:50 crc kubenswrapper[4731]: I1129 07:30:50.628917 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-78kft\" (UniqueName: \"kubernetes.io/projected/4d914548-9ec6-4746-acec-281a93e0aa8a-kube-api-access-78kft\") on node \"crc\" DevicePath \"\"" Nov 29 07:30:50 crc kubenswrapper[4731]: I1129 07:30:50.628933 4731 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4d914548-9ec6-4746-acec-281a93e0aa8a-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 29 07:30:50 crc kubenswrapper[4731]: I1129 07:30:50.747441 
4731 generic.go:334] "Generic (PLEG): container finished" podID="4d914548-9ec6-4746-acec-281a93e0aa8a" containerID="c657aeab30fbcbd593168c578226b5f8e76714acffbdee77711035b8c805c8b9" exitCode=0 Nov 29 07:30:50 crc kubenswrapper[4731]: I1129 07:30:50.747509 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-j4r54" Nov 29 07:30:50 crc kubenswrapper[4731]: I1129 07:30:50.747520 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-j4r54" event={"ID":"4d914548-9ec6-4746-acec-281a93e0aa8a","Type":"ContainerDied","Data":"c657aeab30fbcbd593168c578226b5f8e76714acffbdee77711035b8c805c8b9"} Nov 29 07:30:50 crc kubenswrapper[4731]: I1129 07:30:50.748644 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-j4r54" event={"ID":"4d914548-9ec6-4746-acec-281a93e0aa8a","Type":"ContainerDied","Data":"63ce8988387eb1cc1a46ff45e4a02f071b869961ca5cb107f19b0fb1046487e5"} Nov 29 07:30:50 crc kubenswrapper[4731]: I1129 07:30:50.748675 4731 scope.go:117] "RemoveContainer" containerID="c657aeab30fbcbd593168c578226b5f8e76714acffbdee77711035b8c805c8b9" Nov 29 07:30:50 crc kubenswrapper[4731]: I1129 07:30:50.796842 4731 scope.go:117] "RemoveContainer" containerID="f1976ac124aab8ff98af3746c74e2210df86174412c9eababd47020853074c1c" Nov 29 07:30:50 crc kubenswrapper[4731]: I1129 07:30:50.804865 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-j4r54"] Nov 29 07:30:50 crc kubenswrapper[4731]: I1129 07:30:50.818842 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-j4r54"] Nov 29 07:30:50 crc kubenswrapper[4731]: I1129 07:30:50.826241 4731 scope.go:117] "RemoveContainer" containerID="c657aeab30fbcbd593168c578226b5f8e76714acffbdee77711035b8c805c8b9" Nov 29 07:30:50 crc kubenswrapper[4731]: E1129 07:30:50.827151 4731 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"c657aeab30fbcbd593168c578226b5f8e76714acffbdee77711035b8c805c8b9\": container with ID starting with c657aeab30fbcbd593168c578226b5f8e76714acffbdee77711035b8c805c8b9 not found: ID does not exist" containerID="c657aeab30fbcbd593168c578226b5f8e76714acffbdee77711035b8c805c8b9" Nov 29 07:30:50 crc kubenswrapper[4731]: I1129 07:30:50.827306 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c657aeab30fbcbd593168c578226b5f8e76714acffbdee77711035b8c805c8b9"} err="failed to get container status \"c657aeab30fbcbd593168c578226b5f8e76714acffbdee77711035b8c805c8b9\": rpc error: code = NotFound desc = could not find container \"c657aeab30fbcbd593168c578226b5f8e76714acffbdee77711035b8c805c8b9\": container with ID starting with c657aeab30fbcbd593168c578226b5f8e76714acffbdee77711035b8c805c8b9 not found: ID does not exist" Nov 29 07:30:50 crc kubenswrapper[4731]: I1129 07:30:50.827433 4731 scope.go:117] "RemoveContainer" containerID="f1976ac124aab8ff98af3746c74e2210df86174412c9eababd47020853074c1c" Nov 29 07:30:50 crc kubenswrapper[4731]: E1129 07:30:50.827983 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1976ac124aab8ff98af3746c74e2210df86174412c9eababd47020853074c1c\": container with ID starting with f1976ac124aab8ff98af3746c74e2210df86174412c9eababd47020853074c1c not found: ID does not exist" containerID="f1976ac124aab8ff98af3746c74e2210df86174412c9eababd47020853074c1c" Nov 29 07:30:50 crc kubenswrapper[4731]: I1129 07:30:50.828083 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1976ac124aab8ff98af3746c74e2210df86174412c9eababd47020853074c1c"} err="failed to get container status \"f1976ac124aab8ff98af3746c74e2210df86174412c9eababd47020853074c1c\": rpc error: code = NotFound desc = could not find container 
\"f1976ac124aab8ff98af3746c74e2210df86174412c9eababd47020853074c1c\": container with ID starting with f1976ac124aab8ff98af3746c74e2210df86174412c9eababd47020853074c1c not found: ID does not exist" Nov 29 07:30:51 crc kubenswrapper[4731]: I1129 07:30:51.820825 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d914548-9ec6-4746-acec-281a93e0aa8a" path="/var/lib/kubelet/pods/4d914548-9ec6-4746-acec-281a93e0aa8a/volumes" Nov 29 07:31:02 crc kubenswrapper[4731]: I1129 07:31:02.311970 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mcsvw"] Nov 29 07:31:02 crc kubenswrapper[4731]: E1129 07:31:02.313079 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d914548-9ec6-4746-acec-281a93e0aa8a" containerName="dnsmasq-dns" Nov 29 07:31:02 crc kubenswrapper[4731]: I1129 07:31:02.313098 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d914548-9ec6-4746-acec-281a93e0aa8a" containerName="dnsmasq-dns" Nov 29 07:31:02 crc kubenswrapper[4731]: E1129 07:31:02.313121 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d914548-9ec6-4746-acec-281a93e0aa8a" containerName="init" Nov 29 07:31:02 crc kubenswrapper[4731]: I1129 07:31:02.313127 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d914548-9ec6-4746-acec-281a93e0aa8a" containerName="init" Nov 29 07:31:02 crc kubenswrapper[4731]: E1129 07:31:02.313136 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5afc5eb-08df-4f82-b357-f1672ff71eaa" containerName="dnsmasq-dns" Nov 29 07:31:02 crc kubenswrapper[4731]: I1129 07:31:02.313143 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5afc5eb-08df-4f82-b357-f1672ff71eaa" containerName="dnsmasq-dns" Nov 29 07:31:02 crc kubenswrapper[4731]: E1129 07:31:02.313155 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5afc5eb-08df-4f82-b357-f1672ff71eaa" containerName="init" Nov 29 07:31:02 crc 
kubenswrapper[4731]: I1129 07:31:02.313162 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5afc5eb-08df-4f82-b357-f1672ff71eaa" containerName="init" Nov 29 07:31:02 crc kubenswrapper[4731]: I1129 07:31:02.313371 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d914548-9ec6-4746-acec-281a93e0aa8a" containerName="dnsmasq-dns" Nov 29 07:31:02 crc kubenswrapper[4731]: I1129 07:31:02.313392 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5afc5eb-08df-4f82-b357-f1672ff71eaa" containerName="dnsmasq-dns" Nov 29 07:31:02 crc kubenswrapper[4731]: I1129 07:31:02.314434 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mcsvw" Nov 29 07:31:02 crc kubenswrapper[4731]: I1129 07:31:02.316728 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 29 07:31:02 crc kubenswrapper[4731]: I1129 07:31:02.319117 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 29 07:31:02 crc kubenswrapper[4731]: I1129 07:31:02.319722 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 29 07:31:02 crc kubenswrapper[4731]: I1129 07:31:02.319921 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nvl6q" Nov 29 07:31:02 crc kubenswrapper[4731]: I1129 07:31:02.327528 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mcsvw"] Nov 29 07:31:02 crc kubenswrapper[4731]: I1129 07:31:02.433391 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggm2m\" (UniqueName: \"kubernetes.io/projected/98cb2e73-615e-483e-bd99-7a86354f29a0-kube-api-access-ggm2m\") pod 
\"repo-setup-edpm-deployment-openstack-edpm-ipam-mcsvw\" (UID: \"98cb2e73-615e-483e-bd99-7a86354f29a0\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mcsvw" Nov 29 07:31:02 crc kubenswrapper[4731]: I1129 07:31:02.433469 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/98cb2e73-615e-483e-bd99-7a86354f29a0-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mcsvw\" (UID: \"98cb2e73-615e-483e-bd99-7a86354f29a0\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mcsvw" Nov 29 07:31:02 crc kubenswrapper[4731]: I1129 07:31:02.433671 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98cb2e73-615e-483e-bd99-7a86354f29a0-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mcsvw\" (UID: \"98cb2e73-615e-483e-bd99-7a86354f29a0\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mcsvw" Nov 29 07:31:02 crc kubenswrapper[4731]: I1129 07:31:02.433723 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/98cb2e73-615e-483e-bd99-7a86354f29a0-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mcsvw\" (UID: \"98cb2e73-615e-483e-bd99-7a86354f29a0\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mcsvw" Nov 29 07:31:02 crc kubenswrapper[4731]: I1129 07:31:02.536464 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/98cb2e73-615e-483e-bd99-7a86354f29a0-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mcsvw\" (UID: \"98cb2e73-615e-483e-bd99-7a86354f29a0\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mcsvw" Nov 29 07:31:02 crc 
kubenswrapper[4731]: I1129 07:31:02.538004 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggm2m\" (UniqueName: \"kubernetes.io/projected/98cb2e73-615e-483e-bd99-7a86354f29a0-kube-api-access-ggm2m\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mcsvw\" (UID: \"98cb2e73-615e-483e-bd99-7a86354f29a0\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mcsvw" Nov 29 07:31:02 crc kubenswrapper[4731]: I1129 07:31:02.538182 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/98cb2e73-615e-483e-bd99-7a86354f29a0-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mcsvw\" (UID: \"98cb2e73-615e-483e-bd99-7a86354f29a0\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mcsvw" Nov 29 07:31:02 crc kubenswrapper[4731]: I1129 07:31:02.538460 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98cb2e73-615e-483e-bd99-7a86354f29a0-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mcsvw\" (UID: \"98cb2e73-615e-483e-bd99-7a86354f29a0\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mcsvw" Nov 29 07:31:02 crc kubenswrapper[4731]: I1129 07:31:02.543344 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/98cb2e73-615e-483e-bd99-7a86354f29a0-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mcsvw\" (UID: \"98cb2e73-615e-483e-bd99-7a86354f29a0\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mcsvw" Nov 29 07:31:02 crc kubenswrapper[4731]: I1129 07:31:02.546804 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/98cb2e73-615e-483e-bd99-7a86354f29a0-inventory\") pod 
\"repo-setup-edpm-deployment-openstack-edpm-ipam-mcsvw\" (UID: \"98cb2e73-615e-483e-bd99-7a86354f29a0\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mcsvw" Nov 29 07:31:02 crc kubenswrapper[4731]: I1129 07:31:02.554598 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98cb2e73-615e-483e-bd99-7a86354f29a0-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mcsvw\" (UID: \"98cb2e73-615e-483e-bd99-7a86354f29a0\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mcsvw" Nov 29 07:31:02 crc kubenswrapper[4731]: I1129 07:31:02.558559 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggm2m\" (UniqueName: \"kubernetes.io/projected/98cb2e73-615e-483e-bd99-7a86354f29a0-kube-api-access-ggm2m\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mcsvw\" (UID: \"98cb2e73-615e-483e-bd99-7a86354f29a0\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mcsvw" Nov 29 07:31:02 crc kubenswrapper[4731]: I1129 07:31:02.634503 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mcsvw" Nov 29 07:31:02 crc kubenswrapper[4731]: I1129 07:31:02.912323 4731 generic.go:334] "Generic (PLEG): container finished" podID="96683d18-3f61-486f-bc69-5a253f2538cc" containerID="f31a8020d3923d0c9fe18a1e20ab766506366c6916eca2501590190d4095d48b" exitCode=0 Nov 29 07:31:02 crc kubenswrapper[4731]: I1129 07:31:02.912438 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"96683d18-3f61-486f-bc69-5a253f2538cc","Type":"ContainerDied","Data":"f31a8020d3923d0c9fe18a1e20ab766506366c6916eca2501590190d4095d48b"} Nov 29 07:31:02 crc kubenswrapper[4731]: I1129 07:31:02.916282 4731 generic.go:334] "Generic (PLEG): container finished" podID="b079dae9-1f5d-4057-ae41-4273aaabeab8" containerID="5ca84dfe602693f6e1b67640e482479f357375b13ad2b881d158c5033ec76464" exitCode=0 Nov 29 07:31:02 crc kubenswrapper[4731]: I1129 07:31:02.916356 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b079dae9-1f5d-4057-ae41-4273aaabeab8","Type":"ContainerDied","Data":"5ca84dfe602693f6e1b67640e482479f357375b13ad2b881d158c5033ec76464"} Nov 29 07:31:03 crc kubenswrapper[4731]: I1129 07:31:03.002352 4731 patch_prober.go:28] interesting pod/machine-config-daemon-rscr8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:31:03 crc kubenswrapper[4731]: I1129 07:31:03.002428 4731 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:31:03 crc 
kubenswrapper[4731]: I1129 07:31:03.002505 4731 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" Nov 29 07:31:03 crc kubenswrapper[4731]: I1129 07:31:03.003702 4731 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d40688246a689bbb5c446994fa1770405fb9959f1244e12a906aea1c3d8f3b92"} pod="openshift-machine-config-operator/machine-config-daemon-rscr8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 29 07:31:03 crc kubenswrapper[4731]: I1129 07:31:03.003800 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" containerName="machine-config-daemon" containerID="cri-o://d40688246a689bbb5c446994fa1770405fb9959f1244e12a906aea1c3d8f3b92" gracePeriod=600 Nov 29 07:31:03 crc kubenswrapper[4731]: E1129 07:31:03.156103 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" Nov 29 07:31:03 crc kubenswrapper[4731]: I1129 07:31:03.254281 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mcsvw"] Nov 29 07:31:03 crc kubenswrapper[4731]: W1129 07:31:03.259782 4731 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod98cb2e73_615e_483e_bd99_7a86354f29a0.slice/crio-d80dae19bdfbd91e277e6ccdeac31fd8e5090aaee0cbb821bf1b2ad93a2c26f2 WatchSource:0}: Error finding container d80dae19bdfbd91e277e6ccdeac31fd8e5090aaee0cbb821bf1b2ad93a2c26f2: Status 404 returned error can't find the container with id d80dae19bdfbd91e277e6ccdeac31fd8e5090aaee0cbb821bf1b2ad93a2c26f2 Nov 29 07:31:03 crc kubenswrapper[4731]: I1129 07:31:03.965963 4731 generic.go:334] "Generic (PLEG): container finished" podID="2302dbb7-38db-4752-a5d0-2d055da3aec3" containerID="d40688246a689bbb5c446994fa1770405fb9959f1244e12a906aea1c3d8f3b92" exitCode=0 Nov 29 07:31:03 crc kubenswrapper[4731]: I1129 07:31:03.966097 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" event={"ID":"2302dbb7-38db-4752-a5d0-2d055da3aec3","Type":"ContainerDied","Data":"d40688246a689bbb5c446994fa1770405fb9959f1244e12a906aea1c3d8f3b92"} Nov 29 07:31:03 crc kubenswrapper[4731]: I1129 07:31:03.966465 4731 scope.go:117] "RemoveContainer" containerID="f21640b90c6a59e38b7b6b03ed6a9c7b8bee6bb7ce407b62721c202713562725" Nov 29 07:31:03 crc kubenswrapper[4731]: I1129 07:31:03.967538 4731 scope.go:117] "RemoveContainer" containerID="d40688246a689bbb5c446994fa1770405fb9959f1244e12a906aea1c3d8f3b92" Nov 29 07:31:03 crc kubenswrapper[4731]: E1129 07:31:03.967863 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" Nov 29 07:31:03 crc kubenswrapper[4731]: I1129 07:31:03.973332 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/rabbitmq-server-0" event={"ID":"96683d18-3f61-486f-bc69-5a253f2538cc","Type":"ContainerStarted","Data":"4d04999ac162e27af22e4dc3e989f1ccf813f990efce42c24133d723bee4f8d7"} Nov 29 07:31:03 crc kubenswrapper[4731]: I1129 07:31:03.973682 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Nov 29 07:31:03 crc kubenswrapper[4731]: I1129 07:31:03.977163 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mcsvw" event={"ID":"98cb2e73-615e-483e-bd99-7a86354f29a0","Type":"ContainerStarted","Data":"d80dae19bdfbd91e277e6ccdeac31fd8e5090aaee0cbb821bf1b2ad93a2c26f2"} Nov 29 07:31:03 crc kubenswrapper[4731]: I1129 07:31:03.980719 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b079dae9-1f5d-4057-ae41-4273aaabeab8","Type":"ContainerStarted","Data":"14256dfc771480e0ed8b6fc38649043e6292defe029dc1103947644bb7fdf3b8"} Nov 29 07:31:03 crc kubenswrapper[4731]: I1129 07:31:03.981438 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:31:04 crc kubenswrapper[4731]: I1129 07:31:04.023802 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.023771193 podStartE2EDuration="37.023771193s" podCreationTimestamp="2025-11-29 07:30:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:31:04.016246448 +0000 UTC m=+1502.906607571" watchObservedRunningTime="2025-11-29 07:31:04.023771193 +0000 UTC m=+1502.914132316" Nov 29 07:31:04 crc kubenswrapper[4731]: I1129 07:31:04.051487 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.051458144 podStartE2EDuration="38.051458144s" 
podCreationTimestamp="2025-11-29 07:30:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:31:04.051163685 +0000 UTC m=+1502.941524798" watchObservedRunningTime="2025-11-29 07:31:04.051458144 +0000 UTC m=+1502.941819247" Nov 29 07:31:10 crc kubenswrapper[4731]: I1129 07:31:10.040765 4731 scope.go:117] "RemoveContainer" containerID="406a1359ce3449e0fe1e4b20cd550dece14930b916b4660bf5490ea89ca993ee" Nov 29 07:31:16 crc kubenswrapper[4731]: I1129 07:31:16.807968 4731 scope.go:117] "RemoveContainer" containerID="d40688246a689bbb5c446994fa1770405fb9959f1244e12a906aea1c3d8f3b92" Nov 29 07:31:16 crc kubenswrapper[4731]: E1129 07:31:16.809997 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" Nov 29 07:31:16 crc kubenswrapper[4731]: I1129 07:31:16.917551 4731 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="96683d18-3f61-486f-bc69-5a253f2538cc" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.206:5671: connect: connection refused" Nov 29 07:31:17 crc kubenswrapper[4731]: I1129 07:31:17.796763 4731 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="b079dae9-1f5d-4057-ae41-4273aaabeab8" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.207:5671: connect: connection refused" Nov 29 07:31:18 crc kubenswrapper[4731]: E1129 07:31:18.537202 4731 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest" Nov 29 07:31:18 crc kubenswrapper[4731]: E1129 07:31:18.537447 4731 kuberuntime_manager.go:1274] "Unhandled Error" err=< Nov 29 07:31:18 crc kubenswrapper[4731]: container &Container{Name:repo-setup-edpm-deployment-openstack-edpm-ipam,Image:quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest,Command:[],Args:[ansible-runner run /runner -p playbook.yaml -i repo-setup-edpm-deployment-openstack-edpm-ipam],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ANSIBLE_VERBOSITY,Value:2,ValueFrom:nil,},EnvVar{Name:RUNNER_PLAYBOOK,Value: Nov 29 07:31:18 crc kubenswrapper[4731]: - hosts: all Nov 29 07:31:18 crc kubenswrapper[4731]: strategy: linear Nov 29 07:31:18 crc kubenswrapper[4731]: tasks: Nov 29 07:31:18 crc kubenswrapper[4731]: - name: Enable podified-repos Nov 29 07:31:18 crc kubenswrapper[4731]: become: true Nov 29 07:31:18 crc kubenswrapper[4731]: ansible.builtin.shell: | Nov 29 07:31:18 crc kubenswrapper[4731]: set -euxo pipefail Nov 29 07:31:18 crc kubenswrapper[4731]: pushd /var/tmp Nov 29 07:31:18 crc kubenswrapper[4731]: curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz Nov 29 07:31:18 crc kubenswrapper[4731]: pushd repo-setup-main Nov 29 07:31:18 crc kubenswrapper[4731]: python3 -m venv ./venv Nov 29 07:31:18 crc kubenswrapper[4731]: PBR_VERSION=0.0.0 ./venv/bin/pip install ./ Nov 29 07:31:18 crc kubenswrapper[4731]: ./venv/bin/repo-setup current-podified -b antelope Nov 29 07:31:18 crc kubenswrapper[4731]: popd Nov 29 07:31:18 crc kubenswrapper[4731]: rm -rf repo-setup-main Nov 29 07:31:18 crc kubenswrapper[4731]: Nov 29 07:31:18 crc kubenswrapper[4731]: Nov 29 07:31:18 crc kubenswrapper[4731]: ,ValueFrom:nil,},EnvVar{Name:RUNNER_EXTRA_VARS,Value: Nov 29 07:31:18 crc kubenswrapper[4731]: edpm_override_hosts: openstack-edpm-ipam Nov 29 07:31:18 crc kubenswrapper[4731]: edpm_service_type: repo-setup Nov 29 
07:31:18 crc kubenswrapper[4731]: Nov 29 07:31:18 crc kubenswrapper[4731]: Nov 29 07:31:18 crc kubenswrapper[4731]: ,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:repo-setup-combined-ca-bundle,ReadOnly:false,MountPath:/var/lib/openstack/cacerts/repo-setup,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/runner/env/ssh_key,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:inventory,ReadOnly:false,MountPath:/runner/inventory/hosts,SubPath:inventory,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ggm2m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:openstack-aee-default-env,},Optional:*true,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod repo-setup-edpm-deployment-openstack-edpm-ipam-mcsvw_openstack(98cb2e73-615e-483e-bd99-7a86354f29a0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled Nov 29 07:31:18 crc kubenswrapper[4731]: > logger="UnhandledError" Nov 29 07:31:18 crc 
kubenswrapper[4731]: E1129 07:31:18.538938 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"repo-setup-edpm-deployment-openstack-edpm-ipam\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mcsvw" podUID="98cb2e73-615e-483e-bd99-7a86354f29a0" Nov 29 07:31:18 crc kubenswrapper[4731]: I1129 07:31:18.543962 4731 scope.go:117] "RemoveContainer" containerID="61ae7030999f03540f528908dd08546c609eeb2204787a92413b7adeb226981d" Nov 29 07:31:19 crc kubenswrapper[4731]: E1129 07:31:19.186018 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"repo-setup-edpm-deployment-openstack-edpm-ipam\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest\\\"\"" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mcsvw" podUID="98cb2e73-615e-483e-bd99-7a86354f29a0" Nov 29 07:31:26 crc kubenswrapper[4731]: I1129 07:31:26.916925 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Nov 29 07:31:27 crc kubenswrapper[4731]: I1129 07:31:27.794429 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:31:31 crc kubenswrapper[4731]: I1129 07:31:31.819779 4731 scope.go:117] "RemoveContainer" containerID="d40688246a689bbb5c446994fa1770405fb9959f1244e12a906aea1c3d8f3b92" Nov 29 07:31:31 crc kubenswrapper[4731]: E1129 07:31:31.820403 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" 
podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" Nov 29 07:31:35 crc kubenswrapper[4731]: I1129 07:31:35.382269 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mcsvw" event={"ID":"98cb2e73-615e-483e-bd99-7a86354f29a0","Type":"ContainerStarted","Data":"8689744152dbdf54892d1438139b427d3d109a20ec40c11cb169fac51f366b9d"} Nov 29 07:31:35 crc kubenswrapper[4731]: I1129 07:31:35.407770 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mcsvw" podStartSLOduration=1.700831585 podStartE2EDuration="33.407747795s" podCreationTimestamp="2025-11-29 07:31:02 +0000 UTC" firstStartedPulling="2025-11-29 07:31:03.262482629 +0000 UTC m=+1502.152843732" lastFinishedPulling="2025-11-29 07:31:34.969398829 +0000 UTC m=+1533.859759942" observedRunningTime="2025-11-29 07:31:35.40232915 +0000 UTC m=+1534.292690273" watchObservedRunningTime="2025-11-29 07:31:35.407747795 +0000 UTC m=+1534.298108898" Nov 29 07:31:44 crc kubenswrapper[4731]: I1129 07:31:44.807440 4731 scope.go:117] "RemoveContainer" containerID="d40688246a689bbb5c446994fa1770405fb9959f1244e12a906aea1c3d8f3b92" Nov 29 07:31:44 crc kubenswrapper[4731]: E1129 07:31:44.808315 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" Nov 29 07:31:48 crc kubenswrapper[4731]: I1129 07:31:48.597138 4731 generic.go:334] "Generic (PLEG): container finished" podID="98cb2e73-615e-483e-bd99-7a86354f29a0" containerID="8689744152dbdf54892d1438139b427d3d109a20ec40c11cb169fac51f366b9d" exitCode=0 Nov 29 07:31:48 crc 
kubenswrapper[4731]: I1129 07:31:48.597347 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mcsvw" event={"ID":"98cb2e73-615e-483e-bd99-7a86354f29a0","Type":"ContainerDied","Data":"8689744152dbdf54892d1438139b427d3d109a20ec40c11cb169fac51f366b9d"} Nov 29 07:31:50 crc kubenswrapper[4731]: I1129 07:31:50.018046 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mcsvw" Nov 29 07:31:50 crc kubenswrapper[4731]: I1129 07:31:50.127341 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/98cb2e73-615e-483e-bd99-7a86354f29a0-inventory\") pod \"98cb2e73-615e-483e-bd99-7a86354f29a0\" (UID: \"98cb2e73-615e-483e-bd99-7a86354f29a0\") " Nov 29 07:31:50 crc kubenswrapper[4731]: I1129 07:31:50.127531 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ggm2m\" (UniqueName: \"kubernetes.io/projected/98cb2e73-615e-483e-bd99-7a86354f29a0-kube-api-access-ggm2m\") pod \"98cb2e73-615e-483e-bd99-7a86354f29a0\" (UID: \"98cb2e73-615e-483e-bd99-7a86354f29a0\") " Nov 29 07:31:50 crc kubenswrapper[4731]: I1129 07:31:50.127756 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98cb2e73-615e-483e-bd99-7a86354f29a0-repo-setup-combined-ca-bundle\") pod \"98cb2e73-615e-483e-bd99-7a86354f29a0\" (UID: \"98cb2e73-615e-483e-bd99-7a86354f29a0\") " Nov 29 07:31:50 crc kubenswrapper[4731]: I1129 07:31:50.127807 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/98cb2e73-615e-483e-bd99-7a86354f29a0-ssh-key\") pod \"98cb2e73-615e-483e-bd99-7a86354f29a0\" (UID: \"98cb2e73-615e-483e-bd99-7a86354f29a0\") " Nov 29 07:31:50 crc kubenswrapper[4731]: 
I1129 07:31:50.133860 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98cb2e73-615e-483e-bd99-7a86354f29a0-kube-api-access-ggm2m" (OuterVolumeSpecName: "kube-api-access-ggm2m") pod "98cb2e73-615e-483e-bd99-7a86354f29a0" (UID: "98cb2e73-615e-483e-bd99-7a86354f29a0"). InnerVolumeSpecName "kube-api-access-ggm2m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:31:50 crc kubenswrapper[4731]: I1129 07:31:50.135853 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98cb2e73-615e-483e-bd99-7a86354f29a0-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "98cb2e73-615e-483e-bd99-7a86354f29a0" (UID: "98cb2e73-615e-483e-bd99-7a86354f29a0"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:31:50 crc kubenswrapper[4731]: I1129 07:31:50.158323 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98cb2e73-615e-483e-bd99-7a86354f29a0-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "98cb2e73-615e-483e-bd99-7a86354f29a0" (UID: "98cb2e73-615e-483e-bd99-7a86354f29a0"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:31:50 crc kubenswrapper[4731]: I1129 07:31:50.160604 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98cb2e73-615e-483e-bd99-7a86354f29a0-inventory" (OuterVolumeSpecName: "inventory") pod "98cb2e73-615e-483e-bd99-7a86354f29a0" (UID: "98cb2e73-615e-483e-bd99-7a86354f29a0"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:31:50 crc kubenswrapper[4731]: I1129 07:31:50.231304 4731 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/98cb2e73-615e-483e-bd99-7a86354f29a0-inventory\") on node \"crc\" DevicePath \"\"" Nov 29 07:31:50 crc kubenswrapper[4731]: I1129 07:31:50.231329 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ggm2m\" (UniqueName: \"kubernetes.io/projected/98cb2e73-615e-483e-bd99-7a86354f29a0-kube-api-access-ggm2m\") on node \"crc\" DevicePath \"\"" Nov 29 07:31:50 crc kubenswrapper[4731]: I1129 07:31:50.231339 4731 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98cb2e73-615e-483e-bd99-7a86354f29a0-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:31:50 crc kubenswrapper[4731]: I1129 07:31:50.231350 4731 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/98cb2e73-615e-483e-bd99-7a86354f29a0-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 29 07:31:50 crc kubenswrapper[4731]: I1129 07:31:50.622032 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mcsvw" event={"ID":"98cb2e73-615e-483e-bd99-7a86354f29a0","Type":"ContainerDied","Data":"d80dae19bdfbd91e277e6ccdeac31fd8e5090aaee0cbb821bf1b2ad93a2c26f2"} Nov 29 07:31:50 crc kubenswrapper[4731]: I1129 07:31:50.622085 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d80dae19bdfbd91e277e6ccdeac31fd8e5090aaee0cbb821bf1b2ad93a2c26f2" Nov 29 07:31:50 crc kubenswrapper[4731]: I1129 07:31:50.622099 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mcsvw" Nov 29 07:31:50 crc kubenswrapper[4731]: I1129 07:31:50.717006 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-2tjcs"] Nov 29 07:31:50 crc kubenswrapper[4731]: E1129 07:31:50.717524 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98cb2e73-615e-483e-bd99-7a86354f29a0" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 29 07:31:50 crc kubenswrapper[4731]: I1129 07:31:50.717546 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="98cb2e73-615e-483e-bd99-7a86354f29a0" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 29 07:31:50 crc kubenswrapper[4731]: I1129 07:31:50.717794 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="98cb2e73-615e-483e-bd99-7a86354f29a0" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 29 07:31:50 crc kubenswrapper[4731]: I1129 07:31:50.718603 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2tjcs" Nov 29 07:31:50 crc kubenswrapper[4731]: I1129 07:31:50.752654 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 29 07:31:50 crc kubenswrapper[4731]: I1129 07:31:50.752797 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 29 07:31:50 crc kubenswrapper[4731]: I1129 07:31:50.752793 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nvl6q" Nov 29 07:31:50 crc kubenswrapper[4731]: I1129 07:31:50.757953 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 29 07:31:50 crc kubenswrapper[4731]: I1129 07:31:50.770363 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-2tjcs"] Nov 29 07:31:50 crc kubenswrapper[4731]: I1129 07:31:50.857293 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6552a695-5be9-443d-a962-95ac029df99a-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-2tjcs\" (UID: \"6552a695-5be9-443d-a962-95ac029df99a\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2tjcs" Nov 29 07:31:50 crc kubenswrapper[4731]: I1129 07:31:50.857433 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxvlm\" (UniqueName: \"kubernetes.io/projected/6552a695-5be9-443d-a962-95ac029df99a-kube-api-access-zxvlm\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-2tjcs\" (UID: \"6552a695-5be9-443d-a962-95ac029df99a\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2tjcs" Nov 29 07:31:50 crc kubenswrapper[4731]: I1129 07:31:50.857612 4731 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6552a695-5be9-443d-a962-95ac029df99a-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-2tjcs\" (UID: \"6552a695-5be9-443d-a962-95ac029df99a\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2tjcs" Nov 29 07:31:50 crc kubenswrapper[4731]: I1129 07:31:50.959804 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6552a695-5be9-443d-a962-95ac029df99a-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-2tjcs\" (UID: \"6552a695-5be9-443d-a962-95ac029df99a\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2tjcs" Nov 29 07:31:50 crc kubenswrapper[4731]: I1129 07:31:50.959929 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6552a695-5be9-443d-a962-95ac029df99a-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-2tjcs\" (UID: \"6552a695-5be9-443d-a962-95ac029df99a\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2tjcs" Nov 29 07:31:50 crc kubenswrapper[4731]: I1129 07:31:50.960664 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxvlm\" (UniqueName: \"kubernetes.io/projected/6552a695-5be9-443d-a962-95ac029df99a-kube-api-access-zxvlm\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-2tjcs\" (UID: \"6552a695-5be9-443d-a962-95ac029df99a\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2tjcs" Nov 29 07:31:50 crc kubenswrapper[4731]: I1129 07:31:50.963987 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6552a695-5be9-443d-a962-95ac029df99a-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-2tjcs\" (UID: \"6552a695-5be9-443d-a962-95ac029df99a\") " 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2tjcs" Nov 29 07:31:50 crc kubenswrapper[4731]: I1129 07:31:50.965193 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6552a695-5be9-443d-a962-95ac029df99a-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-2tjcs\" (UID: \"6552a695-5be9-443d-a962-95ac029df99a\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2tjcs" Nov 29 07:31:50 crc kubenswrapper[4731]: I1129 07:31:50.979195 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxvlm\" (UniqueName: \"kubernetes.io/projected/6552a695-5be9-443d-a962-95ac029df99a-kube-api-access-zxvlm\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-2tjcs\" (UID: \"6552a695-5be9-443d-a962-95ac029df99a\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2tjcs" Nov 29 07:31:51 crc kubenswrapper[4731]: I1129 07:31:51.077528 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2tjcs" Nov 29 07:31:51 crc kubenswrapper[4731]: I1129 07:31:51.633452 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-2tjcs"] Nov 29 07:31:52 crc kubenswrapper[4731]: I1129 07:31:52.643411 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2tjcs" event={"ID":"6552a695-5be9-443d-a962-95ac029df99a","Type":"ContainerStarted","Data":"13542d6b831dca24d27d156c7e0a97448866906c57c9d5d4fa489c9033971ad2"} Nov 29 07:31:53 crc kubenswrapper[4731]: I1129 07:31:53.654505 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2tjcs" event={"ID":"6552a695-5be9-443d-a962-95ac029df99a","Type":"ContainerStarted","Data":"976a7dac7ba19cd0b8c4a9fca0ab84ecbdb8041374251ec68980cbd581806166"} Nov 29 07:31:53 crc kubenswrapper[4731]: I1129 07:31:53.681628 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2tjcs" podStartSLOduration=2.720647719 podStartE2EDuration="3.681604387s" podCreationTimestamp="2025-11-29 07:31:50 +0000 UTC" firstStartedPulling="2025-11-29 07:31:51.635202092 +0000 UTC m=+1550.525563195" lastFinishedPulling="2025-11-29 07:31:52.59615876 +0000 UTC m=+1551.486519863" observedRunningTime="2025-11-29 07:31:53.674181825 +0000 UTC m=+1552.564542928" watchObservedRunningTime="2025-11-29 07:31:53.681604387 +0000 UTC m=+1552.571965490" Nov 29 07:31:55 crc kubenswrapper[4731]: I1129 07:31:55.680062 4731 generic.go:334] "Generic (PLEG): container finished" podID="6552a695-5be9-443d-a962-95ac029df99a" containerID="976a7dac7ba19cd0b8c4a9fca0ab84ecbdb8041374251ec68980cbd581806166" exitCode=0 Nov 29 07:31:55 crc kubenswrapper[4731]: I1129 07:31:55.680544 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2tjcs" event={"ID":"6552a695-5be9-443d-a962-95ac029df99a","Type":"ContainerDied","Data":"976a7dac7ba19cd0b8c4a9fca0ab84ecbdb8041374251ec68980cbd581806166"} Nov 29 07:31:55 crc kubenswrapper[4731]: I1129 07:31:55.807563 4731 scope.go:117] "RemoveContainer" containerID="d40688246a689bbb5c446994fa1770405fb9959f1244e12a906aea1c3d8f3b92" Nov 29 07:31:55 crc kubenswrapper[4731]: E1129 07:31:55.807898 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" Nov 29 07:31:57 crc kubenswrapper[4731]: I1129 07:31:57.076154 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2tjcs" Nov 29 07:31:57 crc kubenswrapper[4731]: I1129 07:31:57.199216 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6552a695-5be9-443d-a962-95ac029df99a-ssh-key\") pod \"6552a695-5be9-443d-a962-95ac029df99a\" (UID: \"6552a695-5be9-443d-a962-95ac029df99a\") " Nov 29 07:31:57 crc kubenswrapper[4731]: I1129 07:31:57.199513 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6552a695-5be9-443d-a962-95ac029df99a-inventory\") pod \"6552a695-5be9-443d-a962-95ac029df99a\" (UID: \"6552a695-5be9-443d-a962-95ac029df99a\") " Nov 29 07:31:57 crc kubenswrapper[4731]: I1129 07:31:57.199707 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zxvlm\" (UniqueName: \"kubernetes.io/projected/6552a695-5be9-443d-a962-95ac029df99a-kube-api-access-zxvlm\") pod \"6552a695-5be9-443d-a962-95ac029df99a\" (UID: \"6552a695-5be9-443d-a962-95ac029df99a\") " Nov 29 07:31:57 crc kubenswrapper[4731]: I1129 07:31:57.205107 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6552a695-5be9-443d-a962-95ac029df99a-kube-api-access-zxvlm" (OuterVolumeSpecName: "kube-api-access-zxvlm") pod "6552a695-5be9-443d-a962-95ac029df99a" (UID: "6552a695-5be9-443d-a962-95ac029df99a"). InnerVolumeSpecName "kube-api-access-zxvlm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:31:57 crc kubenswrapper[4731]: I1129 07:31:57.234847 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6552a695-5be9-443d-a962-95ac029df99a-inventory" (OuterVolumeSpecName: "inventory") pod "6552a695-5be9-443d-a962-95ac029df99a" (UID: "6552a695-5be9-443d-a962-95ac029df99a"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:31:57 crc kubenswrapper[4731]: I1129 07:31:57.238179 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6552a695-5be9-443d-a962-95ac029df99a-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "6552a695-5be9-443d-a962-95ac029df99a" (UID: "6552a695-5be9-443d-a962-95ac029df99a"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:31:57 crc kubenswrapper[4731]: I1129 07:31:57.302679 4731 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6552a695-5be9-443d-a962-95ac029df99a-inventory\") on node \"crc\" DevicePath \"\"" Nov 29 07:31:57 crc kubenswrapper[4731]: I1129 07:31:57.302728 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zxvlm\" (UniqueName: \"kubernetes.io/projected/6552a695-5be9-443d-a962-95ac029df99a-kube-api-access-zxvlm\") on node \"crc\" DevicePath \"\"" Nov 29 07:31:57 crc kubenswrapper[4731]: I1129 07:31:57.302744 4731 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6552a695-5be9-443d-a962-95ac029df99a-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 29 07:31:57 crc kubenswrapper[4731]: I1129 07:31:57.701725 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2tjcs" event={"ID":"6552a695-5be9-443d-a962-95ac029df99a","Type":"ContainerDied","Data":"13542d6b831dca24d27d156c7e0a97448866906c57c9d5d4fa489c9033971ad2"} Nov 29 07:31:57 crc kubenswrapper[4731]: I1129 07:31:57.701782 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="13542d6b831dca24d27d156c7e0a97448866906c57c9d5d4fa489c9033971ad2" Nov 29 07:31:57 crc kubenswrapper[4731]: I1129 07:31:57.701873 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2tjcs" Nov 29 07:31:57 crc kubenswrapper[4731]: I1129 07:31:57.791612 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lqthj"] Nov 29 07:31:57 crc kubenswrapper[4731]: E1129 07:31:57.792149 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6552a695-5be9-443d-a962-95ac029df99a" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Nov 29 07:31:57 crc kubenswrapper[4731]: I1129 07:31:57.792174 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="6552a695-5be9-443d-a962-95ac029df99a" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Nov 29 07:31:57 crc kubenswrapper[4731]: I1129 07:31:57.792414 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="6552a695-5be9-443d-a962-95ac029df99a" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Nov 29 07:31:57 crc kubenswrapper[4731]: I1129 07:31:57.793201 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lqthj" Nov 29 07:31:57 crc kubenswrapper[4731]: I1129 07:31:57.795273 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 29 07:31:57 crc kubenswrapper[4731]: I1129 07:31:57.796083 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nvl6q" Nov 29 07:31:57 crc kubenswrapper[4731]: I1129 07:31:57.797274 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 29 07:31:57 crc kubenswrapper[4731]: I1129 07:31:57.799212 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 29 07:31:57 crc kubenswrapper[4731]: I1129 07:31:57.829590 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lqthj"] Nov 29 07:31:57 crc kubenswrapper[4731]: I1129 07:31:57.917957 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/20126f8e-6e2a-4035-862f-ab9c789511a0-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-lqthj\" (UID: \"20126f8e-6e2a-4035-862f-ab9c789511a0\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lqthj" Nov 29 07:31:57 crc kubenswrapper[4731]: I1129 07:31:57.918113 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/20126f8e-6e2a-4035-862f-ab9c789511a0-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-lqthj\" (UID: \"20126f8e-6e2a-4035-862f-ab9c789511a0\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lqthj" Nov 29 07:31:57 crc kubenswrapper[4731]: I1129 07:31:57.918271 4731 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8dz4\" (UniqueName: \"kubernetes.io/projected/20126f8e-6e2a-4035-862f-ab9c789511a0-kube-api-access-z8dz4\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-lqthj\" (UID: \"20126f8e-6e2a-4035-862f-ab9c789511a0\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lqthj" Nov 29 07:31:57 crc kubenswrapper[4731]: I1129 07:31:57.918304 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20126f8e-6e2a-4035-862f-ab9c789511a0-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-lqthj\" (UID: \"20126f8e-6e2a-4035-862f-ab9c789511a0\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lqthj" Nov 29 07:31:58 crc kubenswrapper[4731]: I1129 07:31:58.020121 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/20126f8e-6e2a-4035-862f-ab9c789511a0-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-lqthj\" (UID: \"20126f8e-6e2a-4035-862f-ab9c789511a0\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lqthj" Nov 29 07:31:58 crc kubenswrapper[4731]: I1129 07:31:58.020712 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/20126f8e-6e2a-4035-862f-ab9c789511a0-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-lqthj\" (UID: \"20126f8e-6e2a-4035-862f-ab9c789511a0\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lqthj" Nov 29 07:31:58 crc kubenswrapper[4731]: I1129 07:31:58.020845 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8dz4\" (UniqueName: \"kubernetes.io/projected/20126f8e-6e2a-4035-862f-ab9c789511a0-kube-api-access-z8dz4\") pod 
\"bootstrap-edpm-deployment-openstack-edpm-ipam-lqthj\" (UID: \"20126f8e-6e2a-4035-862f-ab9c789511a0\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lqthj" Nov 29 07:31:58 crc kubenswrapper[4731]: I1129 07:31:58.020885 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20126f8e-6e2a-4035-862f-ab9c789511a0-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-lqthj\" (UID: \"20126f8e-6e2a-4035-862f-ab9c789511a0\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lqthj" Nov 29 07:31:58 crc kubenswrapper[4731]: I1129 07:31:58.025462 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/20126f8e-6e2a-4035-862f-ab9c789511a0-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-lqthj\" (UID: \"20126f8e-6e2a-4035-862f-ab9c789511a0\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lqthj" Nov 29 07:31:58 crc kubenswrapper[4731]: I1129 07:31:58.025478 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/20126f8e-6e2a-4035-862f-ab9c789511a0-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-lqthj\" (UID: \"20126f8e-6e2a-4035-862f-ab9c789511a0\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lqthj" Nov 29 07:31:58 crc kubenswrapper[4731]: I1129 07:31:58.027299 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20126f8e-6e2a-4035-862f-ab9c789511a0-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-lqthj\" (UID: \"20126f8e-6e2a-4035-862f-ab9c789511a0\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lqthj" Nov 29 07:31:58 crc kubenswrapper[4731]: I1129 07:31:58.042062 4731 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8dz4\" (UniqueName: \"kubernetes.io/projected/20126f8e-6e2a-4035-862f-ab9c789511a0-kube-api-access-z8dz4\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-lqthj\" (UID: \"20126f8e-6e2a-4035-862f-ab9c789511a0\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lqthj" Nov 29 07:31:58 crc kubenswrapper[4731]: I1129 07:31:58.117782 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lqthj" Nov 29 07:31:58 crc kubenswrapper[4731]: I1129 07:31:58.692057 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lqthj"] Nov 29 07:31:58 crc kubenswrapper[4731]: I1129 07:31:58.715828 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lqthj" event={"ID":"20126f8e-6e2a-4035-862f-ab9c789511a0","Type":"ContainerStarted","Data":"f086913eced082ec3318cfc9c9e6a34321c8ceae14a2988e6f38fabd26460a02"} Nov 29 07:31:59 crc kubenswrapper[4731]: I1129 07:31:59.732539 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lqthj" event={"ID":"20126f8e-6e2a-4035-862f-ab9c789511a0","Type":"ContainerStarted","Data":"20f9cc504c2a68fa2d1bd29e4405da4bdaeaaa5c78cd07f0ea55403b2b35f8f8"} Nov 29 07:31:59 crc kubenswrapper[4731]: I1129 07:31:59.768123 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lqthj" podStartSLOduration=2.296606676 podStartE2EDuration="2.768098818s" podCreationTimestamp="2025-11-29 07:31:57 +0000 UTC" firstStartedPulling="2025-11-29 07:31:58.69971847 +0000 UTC m=+1557.590079573" lastFinishedPulling="2025-11-29 07:31:59.171210612 +0000 UTC m=+1558.061571715" observedRunningTime="2025-11-29 07:31:59.76184389 +0000 UTC 
m=+1558.652204993" watchObservedRunningTime="2025-11-29 07:31:59.768098818 +0000 UTC m=+1558.658459921" Nov 29 07:32:01 crc kubenswrapper[4731]: I1129 07:32:01.536158 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-pqwmc"] Nov 29 07:32:01 crc kubenswrapper[4731]: I1129 07:32:01.539305 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pqwmc" Nov 29 07:32:01 crc kubenswrapper[4731]: I1129 07:32:01.547245 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pqwmc"] Nov 29 07:32:01 crc kubenswrapper[4731]: I1129 07:32:01.613099 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e018246-ee22-4597-99f6-ffa1acd588ba-catalog-content\") pod \"redhat-marketplace-pqwmc\" (UID: \"2e018246-ee22-4597-99f6-ffa1acd588ba\") " pod="openshift-marketplace/redhat-marketplace-pqwmc" Nov 29 07:32:01 crc kubenswrapper[4731]: I1129 07:32:01.613796 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptkhv\" (UniqueName: \"kubernetes.io/projected/2e018246-ee22-4597-99f6-ffa1acd588ba-kube-api-access-ptkhv\") pod \"redhat-marketplace-pqwmc\" (UID: \"2e018246-ee22-4597-99f6-ffa1acd588ba\") " pod="openshift-marketplace/redhat-marketplace-pqwmc" Nov 29 07:32:01 crc kubenswrapper[4731]: I1129 07:32:01.613852 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e018246-ee22-4597-99f6-ffa1acd588ba-utilities\") pod \"redhat-marketplace-pqwmc\" (UID: \"2e018246-ee22-4597-99f6-ffa1acd588ba\") " pod="openshift-marketplace/redhat-marketplace-pqwmc" Nov 29 07:32:01 crc kubenswrapper[4731]: I1129 07:32:01.716273 4731 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-ptkhv\" (UniqueName: \"kubernetes.io/projected/2e018246-ee22-4597-99f6-ffa1acd588ba-kube-api-access-ptkhv\") pod \"redhat-marketplace-pqwmc\" (UID: \"2e018246-ee22-4597-99f6-ffa1acd588ba\") " pod="openshift-marketplace/redhat-marketplace-pqwmc" Nov 29 07:32:01 crc kubenswrapper[4731]: I1129 07:32:01.716350 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e018246-ee22-4597-99f6-ffa1acd588ba-utilities\") pod \"redhat-marketplace-pqwmc\" (UID: \"2e018246-ee22-4597-99f6-ffa1acd588ba\") " pod="openshift-marketplace/redhat-marketplace-pqwmc" Nov 29 07:32:01 crc kubenswrapper[4731]: I1129 07:32:01.716496 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e018246-ee22-4597-99f6-ffa1acd588ba-catalog-content\") pod \"redhat-marketplace-pqwmc\" (UID: \"2e018246-ee22-4597-99f6-ffa1acd588ba\") " pod="openshift-marketplace/redhat-marketplace-pqwmc" Nov 29 07:32:01 crc kubenswrapper[4731]: I1129 07:32:01.717260 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e018246-ee22-4597-99f6-ffa1acd588ba-catalog-content\") pod \"redhat-marketplace-pqwmc\" (UID: \"2e018246-ee22-4597-99f6-ffa1acd588ba\") " pod="openshift-marketplace/redhat-marketplace-pqwmc" Nov 29 07:32:01 crc kubenswrapper[4731]: I1129 07:32:01.717345 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e018246-ee22-4597-99f6-ffa1acd588ba-utilities\") pod \"redhat-marketplace-pqwmc\" (UID: \"2e018246-ee22-4597-99f6-ffa1acd588ba\") " pod="openshift-marketplace/redhat-marketplace-pqwmc" Nov 29 07:32:01 crc kubenswrapper[4731]: I1129 07:32:01.737858 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-ptkhv\" (UniqueName: \"kubernetes.io/projected/2e018246-ee22-4597-99f6-ffa1acd588ba-kube-api-access-ptkhv\") pod \"redhat-marketplace-pqwmc\" (UID: \"2e018246-ee22-4597-99f6-ffa1acd588ba\") " pod="openshift-marketplace/redhat-marketplace-pqwmc" Nov 29 07:32:01 crc kubenswrapper[4731]: I1129 07:32:01.860345 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pqwmc" Nov 29 07:32:02 crc kubenswrapper[4731]: I1129 07:32:02.388209 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pqwmc"] Nov 29 07:32:02 crc kubenswrapper[4731]: I1129 07:32:02.766940 4731 generic.go:334] "Generic (PLEG): container finished" podID="2e018246-ee22-4597-99f6-ffa1acd588ba" containerID="51301457e87fc3267fb9debf8a6acaf8fdc0b6465767ce320aebfa8ed9c55d4a" exitCode=0 Nov 29 07:32:02 crc kubenswrapper[4731]: I1129 07:32:02.766998 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pqwmc" event={"ID":"2e018246-ee22-4597-99f6-ffa1acd588ba","Type":"ContainerDied","Data":"51301457e87fc3267fb9debf8a6acaf8fdc0b6465767ce320aebfa8ed9c55d4a"} Nov 29 07:32:02 crc kubenswrapper[4731]: I1129 07:32:02.767257 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pqwmc" event={"ID":"2e018246-ee22-4597-99f6-ffa1acd588ba","Type":"ContainerStarted","Data":"1b3587dd5b6a2de79a5751ab71d3ef8945a998ea58fbee9751eee38dbf58127a"} Nov 29 07:32:03 crc kubenswrapper[4731]: I1129 07:32:03.782483 4731 generic.go:334] "Generic (PLEG): container finished" podID="2e018246-ee22-4597-99f6-ffa1acd588ba" containerID="cb4dbb749a456231836a8e55554ab3f25320c65364f9ab355128f97abedbdf1d" exitCode=0 Nov 29 07:32:03 crc kubenswrapper[4731]: I1129 07:32:03.784672 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pqwmc" 
event={"ID":"2e018246-ee22-4597-99f6-ffa1acd588ba","Type":"ContainerDied","Data":"cb4dbb749a456231836a8e55554ab3f25320c65364f9ab355128f97abedbdf1d"} Nov 29 07:32:05 crc kubenswrapper[4731]: I1129 07:32:05.804730 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pqwmc" event={"ID":"2e018246-ee22-4597-99f6-ffa1acd588ba","Type":"ContainerStarted","Data":"c026df7635e1964c33ce1c06645a8f1a76a394cd4f512973bf24691a0965950b"} Nov 29 07:32:05 crc kubenswrapper[4731]: I1129 07:32:05.839129 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-pqwmc" podStartSLOduration=3.18529794 podStartE2EDuration="4.839097568s" podCreationTimestamp="2025-11-29 07:32:01 +0000 UTC" firstStartedPulling="2025-11-29 07:32:02.769641767 +0000 UTC m=+1561.660002870" lastFinishedPulling="2025-11-29 07:32:04.423441395 +0000 UTC m=+1563.313802498" observedRunningTime="2025-11-29 07:32:05.828029552 +0000 UTC m=+1564.718390655" watchObservedRunningTime="2025-11-29 07:32:05.839097568 +0000 UTC m=+1564.729458671" Nov 29 07:32:06 crc kubenswrapper[4731]: I1129 07:32:06.806951 4731 scope.go:117] "RemoveContainer" containerID="d40688246a689bbb5c446994fa1770405fb9959f1244e12a906aea1c3d8f3b92" Nov 29 07:32:06 crc kubenswrapper[4731]: E1129 07:32:06.807310 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" Nov 29 07:32:11 crc kubenswrapper[4731]: I1129 07:32:11.861436 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-pqwmc" Nov 29 07:32:11 crc 
kubenswrapper[4731]: I1129 07:32:11.862058 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-pqwmc" Nov 29 07:32:11 crc kubenswrapper[4731]: I1129 07:32:11.924865 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-pqwmc" Nov 29 07:32:11 crc kubenswrapper[4731]: I1129 07:32:11.979643 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-pqwmc" Nov 29 07:32:12 crc kubenswrapper[4731]: I1129 07:32:12.170613 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pqwmc"] Nov 29 07:32:13 crc kubenswrapper[4731]: I1129 07:32:13.896553 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-pqwmc" podUID="2e018246-ee22-4597-99f6-ffa1acd588ba" containerName="registry-server" containerID="cri-o://c026df7635e1964c33ce1c06645a8f1a76a394cd4f512973bf24691a0965950b" gracePeriod=2 Nov 29 07:32:14 crc kubenswrapper[4731]: I1129 07:32:14.409756 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pqwmc" Nov 29 07:32:14 crc kubenswrapper[4731]: I1129 07:32:14.520486 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptkhv\" (UniqueName: \"kubernetes.io/projected/2e018246-ee22-4597-99f6-ffa1acd588ba-kube-api-access-ptkhv\") pod \"2e018246-ee22-4597-99f6-ffa1acd588ba\" (UID: \"2e018246-ee22-4597-99f6-ffa1acd588ba\") " Nov 29 07:32:14 crc kubenswrapper[4731]: I1129 07:32:14.520767 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e018246-ee22-4597-99f6-ffa1acd588ba-catalog-content\") pod \"2e018246-ee22-4597-99f6-ffa1acd588ba\" (UID: \"2e018246-ee22-4597-99f6-ffa1acd588ba\") " Nov 29 07:32:14 crc kubenswrapper[4731]: I1129 07:32:14.520987 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e018246-ee22-4597-99f6-ffa1acd588ba-utilities\") pod \"2e018246-ee22-4597-99f6-ffa1acd588ba\" (UID: \"2e018246-ee22-4597-99f6-ffa1acd588ba\") " Nov 29 07:32:14 crc kubenswrapper[4731]: I1129 07:32:14.521689 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e018246-ee22-4597-99f6-ffa1acd588ba-utilities" (OuterVolumeSpecName: "utilities") pod "2e018246-ee22-4597-99f6-ffa1acd588ba" (UID: "2e018246-ee22-4597-99f6-ffa1acd588ba"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:32:14 crc kubenswrapper[4731]: I1129 07:32:14.526534 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e018246-ee22-4597-99f6-ffa1acd588ba-kube-api-access-ptkhv" (OuterVolumeSpecName: "kube-api-access-ptkhv") pod "2e018246-ee22-4597-99f6-ffa1acd588ba" (UID: "2e018246-ee22-4597-99f6-ffa1acd588ba"). InnerVolumeSpecName "kube-api-access-ptkhv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:32:14 crc kubenswrapper[4731]: I1129 07:32:14.546907 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e018246-ee22-4597-99f6-ffa1acd588ba-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2e018246-ee22-4597-99f6-ffa1acd588ba" (UID: "2e018246-ee22-4597-99f6-ffa1acd588ba"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:32:14 crc kubenswrapper[4731]: I1129 07:32:14.623952 4731 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e018246-ee22-4597-99f6-ffa1acd588ba-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 07:32:14 crc kubenswrapper[4731]: I1129 07:32:14.623999 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ptkhv\" (UniqueName: \"kubernetes.io/projected/2e018246-ee22-4597-99f6-ffa1acd588ba-kube-api-access-ptkhv\") on node \"crc\" DevicePath \"\"" Nov 29 07:32:14 crc kubenswrapper[4731]: I1129 07:32:14.624015 4731 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e018246-ee22-4597-99f6-ffa1acd588ba-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 07:32:14 crc kubenswrapper[4731]: I1129 07:32:14.909316 4731 generic.go:334] "Generic (PLEG): container finished" podID="2e018246-ee22-4597-99f6-ffa1acd588ba" containerID="c026df7635e1964c33ce1c06645a8f1a76a394cd4f512973bf24691a0965950b" exitCode=0 Nov 29 07:32:14 crc kubenswrapper[4731]: I1129 07:32:14.909390 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pqwmc" Nov 29 07:32:14 crc kubenswrapper[4731]: I1129 07:32:14.909411 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pqwmc" event={"ID":"2e018246-ee22-4597-99f6-ffa1acd588ba","Type":"ContainerDied","Data":"c026df7635e1964c33ce1c06645a8f1a76a394cd4f512973bf24691a0965950b"} Nov 29 07:32:14 crc kubenswrapper[4731]: I1129 07:32:14.910203 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pqwmc" event={"ID":"2e018246-ee22-4597-99f6-ffa1acd588ba","Type":"ContainerDied","Data":"1b3587dd5b6a2de79a5751ab71d3ef8945a998ea58fbee9751eee38dbf58127a"} Nov 29 07:32:14 crc kubenswrapper[4731]: I1129 07:32:14.910235 4731 scope.go:117] "RemoveContainer" containerID="c026df7635e1964c33ce1c06645a8f1a76a394cd4f512973bf24691a0965950b" Nov 29 07:32:14 crc kubenswrapper[4731]: I1129 07:32:14.932956 4731 scope.go:117] "RemoveContainer" containerID="cb4dbb749a456231836a8e55554ab3f25320c65364f9ab355128f97abedbdf1d" Nov 29 07:32:14 crc kubenswrapper[4731]: I1129 07:32:14.951131 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pqwmc"] Nov 29 07:32:14 crc kubenswrapper[4731]: I1129 07:32:14.962538 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-pqwmc"] Nov 29 07:32:14 crc kubenswrapper[4731]: I1129 07:32:14.964133 4731 scope.go:117] "RemoveContainer" containerID="51301457e87fc3267fb9debf8a6acaf8fdc0b6465767ce320aebfa8ed9c55d4a" Nov 29 07:32:15 crc kubenswrapper[4731]: I1129 07:32:15.016747 4731 scope.go:117] "RemoveContainer" containerID="c026df7635e1964c33ce1c06645a8f1a76a394cd4f512973bf24691a0965950b" Nov 29 07:32:15 crc kubenswrapper[4731]: E1129 07:32:15.017359 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"c026df7635e1964c33ce1c06645a8f1a76a394cd4f512973bf24691a0965950b\": container with ID starting with c026df7635e1964c33ce1c06645a8f1a76a394cd4f512973bf24691a0965950b not found: ID does not exist" containerID="c026df7635e1964c33ce1c06645a8f1a76a394cd4f512973bf24691a0965950b" Nov 29 07:32:15 crc kubenswrapper[4731]: I1129 07:32:15.017398 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c026df7635e1964c33ce1c06645a8f1a76a394cd4f512973bf24691a0965950b"} err="failed to get container status \"c026df7635e1964c33ce1c06645a8f1a76a394cd4f512973bf24691a0965950b\": rpc error: code = NotFound desc = could not find container \"c026df7635e1964c33ce1c06645a8f1a76a394cd4f512973bf24691a0965950b\": container with ID starting with c026df7635e1964c33ce1c06645a8f1a76a394cd4f512973bf24691a0965950b not found: ID does not exist" Nov 29 07:32:15 crc kubenswrapper[4731]: I1129 07:32:15.017430 4731 scope.go:117] "RemoveContainer" containerID="cb4dbb749a456231836a8e55554ab3f25320c65364f9ab355128f97abedbdf1d" Nov 29 07:32:15 crc kubenswrapper[4731]: E1129 07:32:15.017848 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb4dbb749a456231836a8e55554ab3f25320c65364f9ab355128f97abedbdf1d\": container with ID starting with cb4dbb749a456231836a8e55554ab3f25320c65364f9ab355128f97abedbdf1d not found: ID does not exist" containerID="cb4dbb749a456231836a8e55554ab3f25320c65364f9ab355128f97abedbdf1d" Nov 29 07:32:15 crc kubenswrapper[4731]: I1129 07:32:15.017894 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb4dbb749a456231836a8e55554ab3f25320c65364f9ab355128f97abedbdf1d"} err="failed to get container status \"cb4dbb749a456231836a8e55554ab3f25320c65364f9ab355128f97abedbdf1d\": rpc error: code = NotFound desc = could not find container \"cb4dbb749a456231836a8e55554ab3f25320c65364f9ab355128f97abedbdf1d\": container with ID 
starting with cb4dbb749a456231836a8e55554ab3f25320c65364f9ab355128f97abedbdf1d not found: ID does not exist" Nov 29 07:32:15 crc kubenswrapper[4731]: I1129 07:32:15.017926 4731 scope.go:117] "RemoveContainer" containerID="51301457e87fc3267fb9debf8a6acaf8fdc0b6465767ce320aebfa8ed9c55d4a" Nov 29 07:32:15 crc kubenswrapper[4731]: E1129 07:32:15.019054 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51301457e87fc3267fb9debf8a6acaf8fdc0b6465767ce320aebfa8ed9c55d4a\": container with ID starting with 51301457e87fc3267fb9debf8a6acaf8fdc0b6465767ce320aebfa8ed9c55d4a not found: ID does not exist" containerID="51301457e87fc3267fb9debf8a6acaf8fdc0b6465767ce320aebfa8ed9c55d4a" Nov 29 07:32:15 crc kubenswrapper[4731]: I1129 07:32:15.019101 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51301457e87fc3267fb9debf8a6acaf8fdc0b6465767ce320aebfa8ed9c55d4a"} err="failed to get container status \"51301457e87fc3267fb9debf8a6acaf8fdc0b6465767ce320aebfa8ed9c55d4a\": rpc error: code = NotFound desc = could not find container \"51301457e87fc3267fb9debf8a6acaf8fdc0b6465767ce320aebfa8ed9c55d4a\": container with ID starting with 51301457e87fc3267fb9debf8a6acaf8fdc0b6465767ce320aebfa8ed9c55d4a not found: ID does not exist" Nov 29 07:32:15 crc kubenswrapper[4731]: I1129 07:32:15.818934 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e018246-ee22-4597-99f6-ffa1acd588ba" path="/var/lib/kubelet/pods/2e018246-ee22-4597-99f6-ffa1acd588ba/volumes" Nov 29 07:32:18 crc kubenswrapper[4731]: I1129 07:32:18.708329 4731 scope.go:117] "RemoveContainer" containerID="d6c338e540d22684df8ae1e7ddc644d39bff3e11e8f01edf5d0aca9da74af4e0" Nov 29 07:32:18 crc kubenswrapper[4731]: I1129 07:32:18.809851 4731 scope.go:117] "RemoveContainer" containerID="d40688246a689bbb5c446994fa1770405fb9959f1244e12a906aea1c3d8f3b92" Nov 29 07:32:18 crc kubenswrapper[4731]: 
E1129 07:32:18.810297 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" Nov 29 07:32:33 crc kubenswrapper[4731]: I1129 07:32:33.808671 4731 scope.go:117] "RemoveContainer" containerID="d40688246a689bbb5c446994fa1770405fb9959f1244e12a906aea1c3d8f3b92" Nov 29 07:32:33 crc kubenswrapper[4731]: E1129 07:32:33.809525 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" Nov 29 07:32:47 crc kubenswrapper[4731]: I1129 07:32:47.807793 4731 scope.go:117] "RemoveContainer" containerID="d40688246a689bbb5c446994fa1770405fb9959f1244e12a906aea1c3d8f3b92" Nov 29 07:32:47 crc kubenswrapper[4731]: E1129 07:32:47.808603 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" Nov 29 07:32:59 crc kubenswrapper[4731]: I1129 07:32:59.807320 4731 scope.go:117] "RemoveContainer" containerID="d40688246a689bbb5c446994fa1770405fb9959f1244e12a906aea1c3d8f3b92" Nov 29 07:32:59 crc 
kubenswrapper[4731]: E1129 07:32:59.808082 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" Nov 29 07:33:10 crc kubenswrapper[4731]: I1129 07:33:10.806840 4731 scope.go:117] "RemoveContainer" containerID="d40688246a689bbb5c446994fa1770405fb9959f1244e12a906aea1c3d8f3b92" Nov 29 07:33:10 crc kubenswrapper[4731]: E1129 07:33:10.807633 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" Nov 29 07:33:11 crc kubenswrapper[4731]: I1129 07:33:11.461042 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-pfqq4"] Nov 29 07:33:11 crc kubenswrapper[4731]: E1129 07:33:11.461650 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e018246-ee22-4597-99f6-ffa1acd588ba" containerName="registry-server" Nov 29 07:33:11 crc kubenswrapper[4731]: I1129 07:33:11.461671 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e018246-ee22-4597-99f6-ffa1acd588ba" containerName="registry-server" Nov 29 07:33:11 crc kubenswrapper[4731]: E1129 07:33:11.461691 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e018246-ee22-4597-99f6-ffa1acd588ba" containerName="extract-content" Nov 29 07:33:11 crc kubenswrapper[4731]: I1129 07:33:11.461699 4731 
state_mem.go:107] "Deleted CPUSet assignment" podUID="2e018246-ee22-4597-99f6-ffa1acd588ba" containerName="extract-content" Nov 29 07:33:11 crc kubenswrapper[4731]: E1129 07:33:11.461745 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e018246-ee22-4597-99f6-ffa1acd588ba" containerName="extract-utilities" Nov 29 07:33:11 crc kubenswrapper[4731]: I1129 07:33:11.461755 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e018246-ee22-4597-99f6-ffa1acd588ba" containerName="extract-utilities" Nov 29 07:33:11 crc kubenswrapper[4731]: I1129 07:33:11.462025 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e018246-ee22-4597-99f6-ffa1acd588ba" containerName="registry-server" Nov 29 07:33:11 crc kubenswrapper[4731]: I1129 07:33:11.463875 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pfqq4" Nov 29 07:33:11 crc kubenswrapper[4731]: I1129 07:33:11.516600 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pfqq4"] Nov 29 07:33:11 crc kubenswrapper[4731]: I1129 07:33:11.564703 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab7f2af7-3476-4c3c-8c42-463a69eda838-catalog-content\") pod \"certified-operators-pfqq4\" (UID: \"ab7f2af7-3476-4c3c-8c42-463a69eda838\") " pod="openshift-marketplace/certified-operators-pfqq4" Nov 29 07:33:11 crc kubenswrapper[4731]: I1129 07:33:11.564779 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab7f2af7-3476-4c3c-8c42-463a69eda838-utilities\") pod \"certified-operators-pfqq4\" (UID: \"ab7f2af7-3476-4c3c-8c42-463a69eda838\") " pod="openshift-marketplace/certified-operators-pfqq4" Nov 29 07:33:11 crc kubenswrapper[4731]: I1129 07:33:11.564905 4731 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pn656\" (UniqueName: \"kubernetes.io/projected/ab7f2af7-3476-4c3c-8c42-463a69eda838-kube-api-access-pn656\") pod \"certified-operators-pfqq4\" (UID: \"ab7f2af7-3476-4c3c-8c42-463a69eda838\") " pod="openshift-marketplace/certified-operators-pfqq4" Nov 29 07:33:11 crc kubenswrapper[4731]: I1129 07:33:11.666923 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pn656\" (UniqueName: \"kubernetes.io/projected/ab7f2af7-3476-4c3c-8c42-463a69eda838-kube-api-access-pn656\") pod \"certified-operators-pfqq4\" (UID: \"ab7f2af7-3476-4c3c-8c42-463a69eda838\") " pod="openshift-marketplace/certified-operators-pfqq4" Nov 29 07:33:11 crc kubenswrapper[4731]: I1129 07:33:11.667049 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab7f2af7-3476-4c3c-8c42-463a69eda838-catalog-content\") pod \"certified-operators-pfqq4\" (UID: \"ab7f2af7-3476-4c3c-8c42-463a69eda838\") " pod="openshift-marketplace/certified-operators-pfqq4" Nov 29 07:33:11 crc kubenswrapper[4731]: I1129 07:33:11.667069 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab7f2af7-3476-4c3c-8c42-463a69eda838-utilities\") pod \"certified-operators-pfqq4\" (UID: \"ab7f2af7-3476-4c3c-8c42-463a69eda838\") " pod="openshift-marketplace/certified-operators-pfqq4" Nov 29 07:33:11 crc kubenswrapper[4731]: I1129 07:33:11.667678 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab7f2af7-3476-4c3c-8c42-463a69eda838-utilities\") pod \"certified-operators-pfqq4\" (UID: \"ab7f2af7-3476-4c3c-8c42-463a69eda838\") " pod="openshift-marketplace/certified-operators-pfqq4" Nov 29 07:33:11 crc kubenswrapper[4731]: I1129 07:33:11.668000 4731 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab7f2af7-3476-4c3c-8c42-463a69eda838-catalog-content\") pod \"certified-operators-pfqq4\" (UID: \"ab7f2af7-3476-4c3c-8c42-463a69eda838\") " pod="openshift-marketplace/certified-operators-pfqq4" Nov 29 07:33:11 crc kubenswrapper[4731]: I1129 07:33:11.699857 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pn656\" (UniqueName: \"kubernetes.io/projected/ab7f2af7-3476-4c3c-8c42-463a69eda838-kube-api-access-pn656\") pod \"certified-operators-pfqq4\" (UID: \"ab7f2af7-3476-4c3c-8c42-463a69eda838\") " pod="openshift-marketplace/certified-operators-pfqq4" Nov 29 07:33:11 crc kubenswrapper[4731]: I1129 07:33:11.829094 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pfqq4" Nov 29 07:33:12 crc kubenswrapper[4731]: I1129 07:33:12.461673 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pfqq4"] Nov 29 07:33:12 crc kubenswrapper[4731]: I1129 07:33:12.566763 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pfqq4" event={"ID":"ab7f2af7-3476-4c3c-8c42-463a69eda838","Type":"ContainerStarted","Data":"d17867f641c7bc7033a4f608ea49af900d72e68e63d98ee5877f6c550b076a57"} Nov 29 07:33:13 crc kubenswrapper[4731]: I1129 07:33:13.582852 4731 generic.go:334] "Generic (PLEG): container finished" podID="ab7f2af7-3476-4c3c-8c42-463a69eda838" containerID="8fb6c1a339d45506eca7c6e32bb17bc73f0cfb1db3fdb587c8c0430adb65b899" exitCode=0 Nov 29 07:33:13 crc kubenswrapper[4731]: I1129 07:33:13.583955 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pfqq4" event={"ID":"ab7f2af7-3476-4c3c-8c42-463a69eda838","Type":"ContainerDied","Data":"8fb6c1a339d45506eca7c6e32bb17bc73f0cfb1db3fdb587c8c0430adb65b899"} Nov 29 
07:33:14 crc kubenswrapper[4731]: I1129 07:33:14.605715 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pfqq4" event={"ID":"ab7f2af7-3476-4c3c-8c42-463a69eda838","Type":"ContainerStarted","Data":"d4766273d04f35c95521454fc6fba8dd1089a7984a77afe04c0d106f92b43143"} Nov 29 07:33:15 crc kubenswrapper[4731]: I1129 07:33:15.619364 4731 generic.go:334] "Generic (PLEG): container finished" podID="ab7f2af7-3476-4c3c-8c42-463a69eda838" containerID="d4766273d04f35c95521454fc6fba8dd1089a7984a77afe04c0d106f92b43143" exitCode=0 Nov 29 07:33:15 crc kubenswrapper[4731]: I1129 07:33:15.619467 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pfqq4" event={"ID":"ab7f2af7-3476-4c3c-8c42-463a69eda838","Type":"ContainerDied","Data":"d4766273d04f35c95521454fc6fba8dd1089a7984a77afe04c0d106f92b43143"} Nov 29 07:33:16 crc kubenswrapper[4731]: I1129 07:33:16.635499 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pfqq4" event={"ID":"ab7f2af7-3476-4c3c-8c42-463a69eda838","Type":"ContainerStarted","Data":"ba91fb3da610e73125d5561e05d3391eb3ea6cd9030635aec5f2955ae9b0f413"} Nov 29 07:33:16 crc kubenswrapper[4731]: I1129 07:33:16.669440 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-pfqq4" podStartSLOduration=3.218273456 podStartE2EDuration="5.669419203s" podCreationTimestamp="2025-11-29 07:33:11 +0000 UTC" firstStartedPulling="2025-11-29 07:33:13.586369282 +0000 UTC m=+1632.476730385" lastFinishedPulling="2025-11-29 07:33:16.037515029 +0000 UTC m=+1634.927876132" observedRunningTime="2025-11-29 07:33:16.665602333 +0000 UTC m=+1635.555963436" watchObservedRunningTime="2025-11-29 07:33:16.669419203 +0000 UTC m=+1635.559780306" Nov 29 07:33:21 crc kubenswrapper[4731]: I1129 07:33:21.817757 4731 scope.go:117] "RemoveContainer" 
containerID="d40688246a689bbb5c446994fa1770405fb9959f1244e12a906aea1c3d8f3b92" Nov 29 07:33:21 crc kubenswrapper[4731]: E1129 07:33:21.818742 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" Nov 29 07:33:21 crc kubenswrapper[4731]: I1129 07:33:21.829753 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-pfqq4" Nov 29 07:33:21 crc kubenswrapper[4731]: I1129 07:33:21.829815 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-pfqq4" Nov 29 07:33:21 crc kubenswrapper[4731]: I1129 07:33:21.887379 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-pfqq4" Nov 29 07:33:22 crc kubenswrapper[4731]: I1129 07:33:22.789917 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-pfqq4" Nov 29 07:33:22 crc kubenswrapper[4731]: I1129 07:33:22.846030 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pfqq4"] Nov 29 07:33:24 crc kubenswrapper[4731]: I1129 07:33:24.732683 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-pfqq4" podUID="ab7f2af7-3476-4c3c-8c42-463a69eda838" containerName="registry-server" containerID="cri-o://ba91fb3da610e73125d5561e05d3391eb3ea6cd9030635aec5f2955ae9b0f413" gracePeriod=2 Nov 29 07:33:25 crc kubenswrapper[4731]: I1129 07:33:25.270872 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pfqq4" Nov 29 07:33:25 crc kubenswrapper[4731]: I1129 07:33:25.405959 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab7f2af7-3476-4c3c-8c42-463a69eda838-utilities\") pod \"ab7f2af7-3476-4c3c-8c42-463a69eda838\" (UID: \"ab7f2af7-3476-4c3c-8c42-463a69eda838\") " Nov 29 07:33:25 crc kubenswrapper[4731]: I1129 07:33:25.406167 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab7f2af7-3476-4c3c-8c42-463a69eda838-catalog-content\") pod \"ab7f2af7-3476-4c3c-8c42-463a69eda838\" (UID: \"ab7f2af7-3476-4c3c-8c42-463a69eda838\") " Nov 29 07:33:25 crc kubenswrapper[4731]: I1129 07:33:25.406412 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pn656\" (UniqueName: \"kubernetes.io/projected/ab7f2af7-3476-4c3c-8c42-463a69eda838-kube-api-access-pn656\") pod \"ab7f2af7-3476-4c3c-8c42-463a69eda838\" (UID: \"ab7f2af7-3476-4c3c-8c42-463a69eda838\") " Nov 29 07:33:25 crc kubenswrapper[4731]: I1129 07:33:25.407392 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab7f2af7-3476-4c3c-8c42-463a69eda838-utilities" (OuterVolumeSpecName: "utilities") pod "ab7f2af7-3476-4c3c-8c42-463a69eda838" (UID: "ab7f2af7-3476-4c3c-8c42-463a69eda838"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:33:25 crc kubenswrapper[4731]: I1129 07:33:25.424126 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab7f2af7-3476-4c3c-8c42-463a69eda838-kube-api-access-pn656" (OuterVolumeSpecName: "kube-api-access-pn656") pod "ab7f2af7-3476-4c3c-8c42-463a69eda838" (UID: "ab7f2af7-3476-4c3c-8c42-463a69eda838"). InnerVolumeSpecName "kube-api-access-pn656". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:33:25 crc kubenswrapper[4731]: I1129 07:33:25.460870 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab7f2af7-3476-4c3c-8c42-463a69eda838-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ab7f2af7-3476-4c3c-8c42-463a69eda838" (UID: "ab7f2af7-3476-4c3c-8c42-463a69eda838"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:33:25 crc kubenswrapper[4731]: I1129 07:33:25.510038 4731 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab7f2af7-3476-4c3c-8c42-463a69eda838-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 07:33:25 crc kubenswrapper[4731]: I1129 07:33:25.510095 4731 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab7f2af7-3476-4c3c-8c42-463a69eda838-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 07:33:25 crc kubenswrapper[4731]: I1129 07:33:25.510111 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pn656\" (UniqueName: \"kubernetes.io/projected/ab7f2af7-3476-4c3c-8c42-463a69eda838-kube-api-access-pn656\") on node \"crc\" DevicePath \"\"" Nov 29 07:33:25 crc kubenswrapper[4731]: I1129 07:33:25.745024 4731 generic.go:334] "Generic (PLEG): container finished" podID="ab7f2af7-3476-4c3c-8c42-463a69eda838" containerID="ba91fb3da610e73125d5561e05d3391eb3ea6cd9030635aec5f2955ae9b0f413" exitCode=0 Nov 29 07:33:25 crc kubenswrapper[4731]: I1129 07:33:25.745086 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pfqq4" event={"ID":"ab7f2af7-3476-4c3c-8c42-463a69eda838","Type":"ContainerDied","Data":"ba91fb3da610e73125d5561e05d3391eb3ea6cd9030635aec5f2955ae9b0f413"} Nov 29 07:33:25 crc kubenswrapper[4731]: I1129 07:33:25.745128 4731 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-pfqq4" event={"ID":"ab7f2af7-3476-4c3c-8c42-463a69eda838","Type":"ContainerDied","Data":"d17867f641c7bc7033a4f608ea49af900d72e68e63d98ee5877f6c550b076a57"} Nov 29 07:33:25 crc kubenswrapper[4731]: I1129 07:33:25.745150 4731 scope.go:117] "RemoveContainer" containerID="ba91fb3da610e73125d5561e05d3391eb3ea6cd9030635aec5f2955ae9b0f413" Nov 29 07:33:25 crc kubenswrapper[4731]: I1129 07:33:25.745610 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pfqq4" Nov 29 07:33:25 crc kubenswrapper[4731]: I1129 07:33:25.778584 4731 scope.go:117] "RemoveContainer" containerID="d4766273d04f35c95521454fc6fba8dd1089a7984a77afe04c0d106f92b43143" Nov 29 07:33:25 crc kubenswrapper[4731]: I1129 07:33:25.801487 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pfqq4"] Nov 29 07:33:25 crc kubenswrapper[4731]: I1129 07:33:25.816987 4731 scope.go:117] "RemoveContainer" containerID="8fb6c1a339d45506eca7c6e32bb17bc73f0cfb1db3fdb587c8c0430adb65b899" Nov 29 07:33:25 crc kubenswrapper[4731]: I1129 07:33:25.830530 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-pfqq4"] Nov 29 07:33:25 crc kubenswrapper[4731]: I1129 07:33:25.859192 4731 scope.go:117] "RemoveContainer" containerID="ba91fb3da610e73125d5561e05d3391eb3ea6cd9030635aec5f2955ae9b0f413" Nov 29 07:33:25 crc kubenswrapper[4731]: E1129 07:33:25.859958 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba91fb3da610e73125d5561e05d3391eb3ea6cd9030635aec5f2955ae9b0f413\": container with ID starting with ba91fb3da610e73125d5561e05d3391eb3ea6cd9030635aec5f2955ae9b0f413 not found: ID does not exist" containerID="ba91fb3da610e73125d5561e05d3391eb3ea6cd9030635aec5f2955ae9b0f413" Nov 29 07:33:25 crc kubenswrapper[4731]: I1129 
07:33:25.860020 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba91fb3da610e73125d5561e05d3391eb3ea6cd9030635aec5f2955ae9b0f413"} err="failed to get container status \"ba91fb3da610e73125d5561e05d3391eb3ea6cd9030635aec5f2955ae9b0f413\": rpc error: code = NotFound desc = could not find container \"ba91fb3da610e73125d5561e05d3391eb3ea6cd9030635aec5f2955ae9b0f413\": container with ID starting with ba91fb3da610e73125d5561e05d3391eb3ea6cd9030635aec5f2955ae9b0f413 not found: ID does not exist" Nov 29 07:33:25 crc kubenswrapper[4731]: I1129 07:33:25.860056 4731 scope.go:117] "RemoveContainer" containerID="d4766273d04f35c95521454fc6fba8dd1089a7984a77afe04c0d106f92b43143" Nov 29 07:33:25 crc kubenswrapper[4731]: E1129 07:33:25.860409 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4766273d04f35c95521454fc6fba8dd1089a7984a77afe04c0d106f92b43143\": container with ID starting with d4766273d04f35c95521454fc6fba8dd1089a7984a77afe04c0d106f92b43143 not found: ID does not exist" containerID="d4766273d04f35c95521454fc6fba8dd1089a7984a77afe04c0d106f92b43143" Nov 29 07:33:25 crc kubenswrapper[4731]: I1129 07:33:25.860447 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4766273d04f35c95521454fc6fba8dd1089a7984a77afe04c0d106f92b43143"} err="failed to get container status \"d4766273d04f35c95521454fc6fba8dd1089a7984a77afe04c0d106f92b43143\": rpc error: code = NotFound desc = could not find container \"d4766273d04f35c95521454fc6fba8dd1089a7984a77afe04c0d106f92b43143\": container with ID starting with d4766273d04f35c95521454fc6fba8dd1089a7984a77afe04c0d106f92b43143 not found: ID does not exist" Nov 29 07:33:25 crc kubenswrapper[4731]: I1129 07:33:25.860470 4731 scope.go:117] "RemoveContainer" containerID="8fb6c1a339d45506eca7c6e32bb17bc73f0cfb1db3fdb587c8c0430adb65b899" Nov 29 07:33:25 crc 
kubenswrapper[4731]: E1129 07:33:25.860779 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8fb6c1a339d45506eca7c6e32bb17bc73f0cfb1db3fdb587c8c0430adb65b899\": container with ID starting with 8fb6c1a339d45506eca7c6e32bb17bc73f0cfb1db3fdb587c8c0430adb65b899 not found: ID does not exist" containerID="8fb6c1a339d45506eca7c6e32bb17bc73f0cfb1db3fdb587c8c0430adb65b899" Nov 29 07:33:25 crc kubenswrapper[4731]: I1129 07:33:25.860817 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8fb6c1a339d45506eca7c6e32bb17bc73f0cfb1db3fdb587c8c0430adb65b899"} err="failed to get container status \"8fb6c1a339d45506eca7c6e32bb17bc73f0cfb1db3fdb587c8c0430adb65b899\": rpc error: code = NotFound desc = could not find container \"8fb6c1a339d45506eca7c6e32bb17bc73f0cfb1db3fdb587c8c0430adb65b899\": container with ID starting with 8fb6c1a339d45506eca7c6e32bb17bc73f0cfb1db3fdb587c8c0430adb65b899 not found: ID does not exist" Nov 29 07:33:27 crc kubenswrapper[4731]: I1129 07:33:27.820668 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab7f2af7-3476-4c3c-8c42-463a69eda838" path="/var/lib/kubelet/pods/ab7f2af7-3476-4c3c-8c42-463a69eda838/volumes" Nov 29 07:33:32 crc kubenswrapper[4731]: I1129 07:33:32.808130 4731 scope.go:117] "RemoveContainer" containerID="d40688246a689bbb5c446994fa1770405fb9959f1244e12a906aea1c3d8f3b92" Nov 29 07:33:32 crc kubenswrapper[4731]: E1129 07:33:32.808861 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" Nov 29 07:33:44 crc 
kubenswrapper[4731]: I1129 07:33:44.806719 4731 scope.go:117] "RemoveContainer" containerID="d40688246a689bbb5c446994fa1770405fb9959f1244e12a906aea1c3d8f3b92" Nov 29 07:33:44 crc kubenswrapper[4731]: E1129 07:33:44.807525 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" Nov 29 07:33:57 crc kubenswrapper[4731]: I1129 07:33:57.807302 4731 scope.go:117] "RemoveContainer" containerID="d40688246a689bbb5c446994fa1770405fb9959f1244e12a906aea1c3d8f3b92" Nov 29 07:33:57 crc kubenswrapper[4731]: E1129 07:33:57.808285 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" Nov 29 07:34:12 crc kubenswrapper[4731]: I1129 07:34:12.807609 4731 scope.go:117] "RemoveContainer" containerID="d40688246a689bbb5c446994fa1770405fb9959f1244e12a906aea1c3d8f3b92" Nov 29 07:34:12 crc kubenswrapper[4731]: E1129 07:34:12.808408 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" Nov 
29 07:34:13 crc kubenswrapper[4731]: I1129 07:34:13.137814 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-lxjrz"] Nov 29 07:34:13 crc kubenswrapper[4731]: E1129 07:34:13.138681 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab7f2af7-3476-4c3c-8c42-463a69eda838" containerName="registry-server" Nov 29 07:34:13 crc kubenswrapper[4731]: I1129 07:34:13.138705 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab7f2af7-3476-4c3c-8c42-463a69eda838" containerName="registry-server" Nov 29 07:34:13 crc kubenswrapper[4731]: E1129 07:34:13.138736 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab7f2af7-3476-4c3c-8c42-463a69eda838" containerName="extract-content" Nov 29 07:34:13 crc kubenswrapper[4731]: I1129 07:34:13.138745 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab7f2af7-3476-4c3c-8c42-463a69eda838" containerName="extract-content" Nov 29 07:34:13 crc kubenswrapper[4731]: E1129 07:34:13.138771 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab7f2af7-3476-4c3c-8c42-463a69eda838" containerName="extract-utilities" Nov 29 07:34:13 crc kubenswrapper[4731]: I1129 07:34:13.138780 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab7f2af7-3476-4c3c-8c42-463a69eda838" containerName="extract-utilities" Nov 29 07:34:13 crc kubenswrapper[4731]: I1129 07:34:13.139264 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab7f2af7-3476-4c3c-8c42-463a69eda838" containerName="registry-server" Nov 29 07:34:13 crc kubenswrapper[4731]: I1129 07:34:13.141550 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lxjrz" Nov 29 07:34:13 crc kubenswrapper[4731]: I1129 07:34:13.156516 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lxjrz"] Nov 29 07:34:13 crc kubenswrapper[4731]: I1129 07:34:13.170126 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ddsl\" (UniqueName: \"kubernetes.io/projected/7bd434ad-bd66-49a3-97b8-2954b0e93c27-kube-api-access-4ddsl\") pod \"community-operators-lxjrz\" (UID: \"7bd434ad-bd66-49a3-97b8-2954b0e93c27\") " pod="openshift-marketplace/community-operators-lxjrz" Nov 29 07:34:13 crc kubenswrapper[4731]: I1129 07:34:13.170210 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7bd434ad-bd66-49a3-97b8-2954b0e93c27-utilities\") pod \"community-operators-lxjrz\" (UID: \"7bd434ad-bd66-49a3-97b8-2954b0e93c27\") " pod="openshift-marketplace/community-operators-lxjrz" Nov 29 07:34:13 crc kubenswrapper[4731]: I1129 07:34:13.170269 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7bd434ad-bd66-49a3-97b8-2954b0e93c27-catalog-content\") pod \"community-operators-lxjrz\" (UID: \"7bd434ad-bd66-49a3-97b8-2954b0e93c27\") " pod="openshift-marketplace/community-operators-lxjrz" Nov 29 07:34:13 crc kubenswrapper[4731]: I1129 07:34:13.272620 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7bd434ad-bd66-49a3-97b8-2954b0e93c27-catalog-content\") pod \"community-operators-lxjrz\" (UID: \"7bd434ad-bd66-49a3-97b8-2954b0e93c27\") " pod="openshift-marketplace/community-operators-lxjrz" Nov 29 07:34:13 crc kubenswrapper[4731]: I1129 07:34:13.272806 4731 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-4ddsl\" (UniqueName: \"kubernetes.io/projected/7bd434ad-bd66-49a3-97b8-2954b0e93c27-kube-api-access-4ddsl\") pod \"community-operators-lxjrz\" (UID: \"7bd434ad-bd66-49a3-97b8-2954b0e93c27\") " pod="openshift-marketplace/community-operators-lxjrz" Nov 29 07:34:13 crc kubenswrapper[4731]: I1129 07:34:13.272861 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7bd434ad-bd66-49a3-97b8-2954b0e93c27-utilities\") pod \"community-operators-lxjrz\" (UID: \"7bd434ad-bd66-49a3-97b8-2954b0e93c27\") " pod="openshift-marketplace/community-operators-lxjrz" Nov 29 07:34:13 crc kubenswrapper[4731]: I1129 07:34:13.273302 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7bd434ad-bd66-49a3-97b8-2954b0e93c27-catalog-content\") pod \"community-operators-lxjrz\" (UID: \"7bd434ad-bd66-49a3-97b8-2954b0e93c27\") " pod="openshift-marketplace/community-operators-lxjrz" Nov 29 07:34:13 crc kubenswrapper[4731]: I1129 07:34:13.273333 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7bd434ad-bd66-49a3-97b8-2954b0e93c27-utilities\") pod \"community-operators-lxjrz\" (UID: \"7bd434ad-bd66-49a3-97b8-2954b0e93c27\") " pod="openshift-marketplace/community-operators-lxjrz" Nov 29 07:34:13 crc kubenswrapper[4731]: I1129 07:34:13.293131 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ddsl\" (UniqueName: \"kubernetes.io/projected/7bd434ad-bd66-49a3-97b8-2954b0e93c27-kube-api-access-4ddsl\") pod \"community-operators-lxjrz\" (UID: \"7bd434ad-bd66-49a3-97b8-2954b0e93c27\") " pod="openshift-marketplace/community-operators-lxjrz" Nov 29 07:34:13 crc kubenswrapper[4731]: I1129 07:34:13.500064 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lxjrz" Nov 29 07:34:14 crc kubenswrapper[4731]: I1129 07:34:14.039005 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lxjrz"] Nov 29 07:34:14 crc kubenswrapper[4731]: I1129 07:34:14.254238 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lxjrz" event={"ID":"7bd434ad-bd66-49a3-97b8-2954b0e93c27","Type":"ContainerStarted","Data":"c668e9200f505d1ba93d32a2bdf51ee199f0b433d9ae02077229d1d1be401a1c"} Nov 29 07:34:15 crc kubenswrapper[4731]: I1129 07:34:15.266640 4731 generic.go:334] "Generic (PLEG): container finished" podID="7bd434ad-bd66-49a3-97b8-2954b0e93c27" containerID="b22c9d6470e78e18908727eb60f5f116cda3394135cf3cd38bdcd0f75360daeb" exitCode=0 Nov 29 07:34:15 crc kubenswrapper[4731]: I1129 07:34:15.266742 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lxjrz" event={"ID":"7bd434ad-bd66-49a3-97b8-2954b0e93c27","Type":"ContainerDied","Data":"b22c9d6470e78e18908727eb60f5f116cda3394135cf3cd38bdcd0f75360daeb"} Nov 29 07:34:16 crc kubenswrapper[4731]: I1129 07:34:16.277774 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lxjrz" event={"ID":"7bd434ad-bd66-49a3-97b8-2954b0e93c27","Type":"ContainerStarted","Data":"a764e1e858c2ea475d3bcbce4767a8ded356111354ae9664215a02da11b17348"} Nov 29 07:34:17 crc kubenswrapper[4731]: I1129 07:34:17.322585 4731 generic.go:334] "Generic (PLEG): container finished" podID="7bd434ad-bd66-49a3-97b8-2954b0e93c27" containerID="a764e1e858c2ea475d3bcbce4767a8ded356111354ae9664215a02da11b17348" exitCode=0 Nov 29 07:34:17 crc kubenswrapper[4731]: I1129 07:34:17.322824 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lxjrz" 
event={"ID":"7bd434ad-bd66-49a3-97b8-2954b0e93c27","Type":"ContainerDied","Data":"a764e1e858c2ea475d3bcbce4767a8ded356111354ae9664215a02da11b17348"} Nov 29 07:34:19 crc kubenswrapper[4731]: I1129 07:34:19.344550 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lxjrz" event={"ID":"7bd434ad-bd66-49a3-97b8-2954b0e93c27","Type":"ContainerStarted","Data":"c0c1f4558ca0dc4056fff05c016a274c37ed8fee73caf2b3d7f295ee14992620"} Nov 29 07:34:19 crc kubenswrapper[4731]: I1129 07:34:19.373369 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-lxjrz" podStartSLOduration=3.01825282 podStartE2EDuration="6.373345501s" podCreationTimestamp="2025-11-29 07:34:13 +0000 UTC" firstStartedPulling="2025-11-29 07:34:15.269849231 +0000 UTC m=+1694.160210334" lastFinishedPulling="2025-11-29 07:34:18.624941922 +0000 UTC m=+1697.515303015" observedRunningTime="2025-11-29 07:34:19.366263077 +0000 UTC m=+1698.256624180" watchObservedRunningTime="2025-11-29 07:34:19.373345501 +0000 UTC m=+1698.263706614" Nov 29 07:34:23 crc kubenswrapper[4731]: I1129 07:34:23.502156 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-lxjrz" Nov 29 07:34:23 crc kubenswrapper[4731]: I1129 07:34:23.502484 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-lxjrz" Nov 29 07:34:23 crc kubenswrapper[4731]: I1129 07:34:23.567597 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-lxjrz" Nov 29 07:34:24 crc kubenswrapper[4731]: I1129 07:34:24.459144 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-lxjrz" Nov 29 07:34:24 crc kubenswrapper[4731]: I1129 07:34:24.510723 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-lxjrz"] Nov 29 07:34:25 crc kubenswrapper[4731]: I1129 07:34:25.807208 4731 scope.go:117] "RemoveContainer" containerID="d40688246a689bbb5c446994fa1770405fb9959f1244e12a906aea1c3d8f3b92" Nov 29 07:34:25 crc kubenswrapper[4731]: E1129 07:34:25.807610 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" Nov 29 07:34:26 crc kubenswrapper[4731]: I1129 07:34:26.415920 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-lxjrz" podUID="7bd434ad-bd66-49a3-97b8-2954b0e93c27" containerName="registry-server" containerID="cri-o://c0c1f4558ca0dc4056fff05c016a274c37ed8fee73caf2b3d7f295ee14992620" gracePeriod=2 Nov 29 07:34:26 crc kubenswrapper[4731]: I1129 07:34:26.962124 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lxjrz" Nov 29 07:34:27 crc kubenswrapper[4731]: I1129 07:34:27.102501 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7bd434ad-bd66-49a3-97b8-2954b0e93c27-catalog-content\") pod \"7bd434ad-bd66-49a3-97b8-2954b0e93c27\" (UID: \"7bd434ad-bd66-49a3-97b8-2954b0e93c27\") " Nov 29 07:34:27 crc kubenswrapper[4731]: I1129 07:34:27.102873 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4ddsl\" (UniqueName: \"kubernetes.io/projected/7bd434ad-bd66-49a3-97b8-2954b0e93c27-kube-api-access-4ddsl\") pod \"7bd434ad-bd66-49a3-97b8-2954b0e93c27\" (UID: \"7bd434ad-bd66-49a3-97b8-2954b0e93c27\") " Nov 29 07:34:27 crc kubenswrapper[4731]: I1129 07:34:27.102946 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7bd434ad-bd66-49a3-97b8-2954b0e93c27-utilities\") pod \"7bd434ad-bd66-49a3-97b8-2954b0e93c27\" (UID: \"7bd434ad-bd66-49a3-97b8-2954b0e93c27\") " Nov 29 07:34:27 crc kubenswrapper[4731]: I1129 07:34:27.103753 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7bd434ad-bd66-49a3-97b8-2954b0e93c27-utilities" (OuterVolumeSpecName: "utilities") pod "7bd434ad-bd66-49a3-97b8-2954b0e93c27" (UID: "7bd434ad-bd66-49a3-97b8-2954b0e93c27"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:34:27 crc kubenswrapper[4731]: I1129 07:34:27.104888 4731 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7bd434ad-bd66-49a3-97b8-2954b0e93c27-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 07:34:27 crc kubenswrapper[4731]: I1129 07:34:27.108447 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bd434ad-bd66-49a3-97b8-2954b0e93c27-kube-api-access-4ddsl" (OuterVolumeSpecName: "kube-api-access-4ddsl") pod "7bd434ad-bd66-49a3-97b8-2954b0e93c27" (UID: "7bd434ad-bd66-49a3-97b8-2954b0e93c27"). InnerVolumeSpecName "kube-api-access-4ddsl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:34:27 crc kubenswrapper[4731]: I1129 07:34:27.155779 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7bd434ad-bd66-49a3-97b8-2954b0e93c27-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7bd434ad-bd66-49a3-97b8-2954b0e93c27" (UID: "7bd434ad-bd66-49a3-97b8-2954b0e93c27"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:34:27 crc kubenswrapper[4731]: I1129 07:34:27.206693 4731 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7bd434ad-bd66-49a3-97b8-2954b0e93c27-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 07:34:27 crc kubenswrapper[4731]: I1129 07:34:27.206732 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4ddsl\" (UniqueName: \"kubernetes.io/projected/7bd434ad-bd66-49a3-97b8-2954b0e93c27-kube-api-access-4ddsl\") on node \"crc\" DevicePath \"\"" Nov 29 07:34:27 crc kubenswrapper[4731]: I1129 07:34:27.430116 4731 generic.go:334] "Generic (PLEG): container finished" podID="7bd434ad-bd66-49a3-97b8-2954b0e93c27" containerID="c0c1f4558ca0dc4056fff05c016a274c37ed8fee73caf2b3d7f295ee14992620" exitCode=0 Nov 29 07:34:27 crc kubenswrapper[4731]: I1129 07:34:27.430168 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lxjrz" Nov 29 07:34:27 crc kubenswrapper[4731]: I1129 07:34:27.430202 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lxjrz" event={"ID":"7bd434ad-bd66-49a3-97b8-2954b0e93c27","Type":"ContainerDied","Data":"c0c1f4558ca0dc4056fff05c016a274c37ed8fee73caf2b3d7f295ee14992620"} Nov 29 07:34:27 crc kubenswrapper[4731]: I1129 07:34:27.430248 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lxjrz" event={"ID":"7bd434ad-bd66-49a3-97b8-2954b0e93c27","Type":"ContainerDied","Data":"c668e9200f505d1ba93d32a2bdf51ee199f0b433d9ae02077229d1d1be401a1c"} Nov 29 07:34:27 crc kubenswrapper[4731]: I1129 07:34:27.430270 4731 scope.go:117] "RemoveContainer" containerID="c0c1f4558ca0dc4056fff05c016a274c37ed8fee73caf2b3d7f295ee14992620" Nov 29 07:34:27 crc kubenswrapper[4731]: I1129 07:34:27.458290 4731 scope.go:117] "RemoveContainer" 
containerID="a764e1e858c2ea475d3bcbce4767a8ded356111354ae9664215a02da11b17348" Nov 29 07:34:27 crc kubenswrapper[4731]: I1129 07:34:27.492921 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lxjrz"] Nov 29 07:34:27 crc kubenswrapper[4731]: I1129 07:34:27.498375 4731 scope.go:117] "RemoveContainer" containerID="b22c9d6470e78e18908727eb60f5f116cda3394135cf3cd38bdcd0f75360daeb" Nov 29 07:34:27 crc kubenswrapper[4731]: I1129 07:34:27.500950 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-lxjrz"] Nov 29 07:34:27 crc kubenswrapper[4731]: I1129 07:34:27.540077 4731 scope.go:117] "RemoveContainer" containerID="c0c1f4558ca0dc4056fff05c016a274c37ed8fee73caf2b3d7f295ee14992620" Nov 29 07:34:27 crc kubenswrapper[4731]: E1129 07:34:27.540624 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0c1f4558ca0dc4056fff05c016a274c37ed8fee73caf2b3d7f295ee14992620\": container with ID starting with c0c1f4558ca0dc4056fff05c016a274c37ed8fee73caf2b3d7f295ee14992620 not found: ID does not exist" containerID="c0c1f4558ca0dc4056fff05c016a274c37ed8fee73caf2b3d7f295ee14992620" Nov 29 07:34:27 crc kubenswrapper[4731]: I1129 07:34:27.540671 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0c1f4558ca0dc4056fff05c016a274c37ed8fee73caf2b3d7f295ee14992620"} err="failed to get container status \"c0c1f4558ca0dc4056fff05c016a274c37ed8fee73caf2b3d7f295ee14992620\": rpc error: code = NotFound desc = could not find container \"c0c1f4558ca0dc4056fff05c016a274c37ed8fee73caf2b3d7f295ee14992620\": container with ID starting with c0c1f4558ca0dc4056fff05c016a274c37ed8fee73caf2b3d7f295ee14992620 not found: ID does not exist" Nov 29 07:34:27 crc kubenswrapper[4731]: I1129 07:34:27.540698 4731 scope.go:117] "RemoveContainer" 
containerID="a764e1e858c2ea475d3bcbce4767a8ded356111354ae9664215a02da11b17348" Nov 29 07:34:27 crc kubenswrapper[4731]: E1129 07:34:27.541207 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a764e1e858c2ea475d3bcbce4767a8ded356111354ae9664215a02da11b17348\": container with ID starting with a764e1e858c2ea475d3bcbce4767a8ded356111354ae9664215a02da11b17348 not found: ID does not exist" containerID="a764e1e858c2ea475d3bcbce4767a8ded356111354ae9664215a02da11b17348" Nov 29 07:34:27 crc kubenswrapper[4731]: I1129 07:34:27.541257 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a764e1e858c2ea475d3bcbce4767a8ded356111354ae9664215a02da11b17348"} err="failed to get container status \"a764e1e858c2ea475d3bcbce4767a8ded356111354ae9664215a02da11b17348\": rpc error: code = NotFound desc = could not find container \"a764e1e858c2ea475d3bcbce4767a8ded356111354ae9664215a02da11b17348\": container with ID starting with a764e1e858c2ea475d3bcbce4767a8ded356111354ae9664215a02da11b17348 not found: ID does not exist" Nov 29 07:34:27 crc kubenswrapper[4731]: I1129 07:34:27.541290 4731 scope.go:117] "RemoveContainer" containerID="b22c9d6470e78e18908727eb60f5f116cda3394135cf3cd38bdcd0f75360daeb" Nov 29 07:34:27 crc kubenswrapper[4731]: E1129 07:34:27.541869 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b22c9d6470e78e18908727eb60f5f116cda3394135cf3cd38bdcd0f75360daeb\": container with ID starting with b22c9d6470e78e18908727eb60f5f116cda3394135cf3cd38bdcd0f75360daeb not found: ID does not exist" containerID="b22c9d6470e78e18908727eb60f5f116cda3394135cf3cd38bdcd0f75360daeb" Nov 29 07:34:27 crc kubenswrapper[4731]: I1129 07:34:27.541904 4731 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"b22c9d6470e78e18908727eb60f5f116cda3394135cf3cd38bdcd0f75360daeb"} err="failed to get container status \"b22c9d6470e78e18908727eb60f5f116cda3394135cf3cd38bdcd0f75360daeb\": rpc error: code = NotFound desc = could not find container \"b22c9d6470e78e18908727eb60f5f116cda3394135cf3cd38bdcd0f75360daeb\": container with ID starting with b22c9d6470e78e18908727eb60f5f116cda3394135cf3cd38bdcd0f75360daeb not found: ID does not exist"
Nov 29 07:34:27 crc kubenswrapper[4731]: I1129 07:34:27.821089 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bd434ad-bd66-49a3-97b8-2954b0e93c27" path="/var/lib/kubelet/pods/7bd434ad-bd66-49a3-97b8-2954b0e93c27/volumes"
Nov 29 07:34:37 crc kubenswrapper[4731]: I1129 07:34:37.051194 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-573c-account-create-update-85bxc"]
Nov 29 07:34:37 crc kubenswrapper[4731]: I1129 07:34:37.065977 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-mw9f7"]
Nov 29 07:34:37 crc kubenswrapper[4731]: I1129 07:34:37.076875 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-573c-account-create-update-85bxc"]
Nov 29 07:34:37 crc kubenswrapper[4731]: I1129 07:34:37.086465 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-mw9f7"]
Nov 29 07:34:37 crc kubenswrapper[4731]: I1129 07:34:37.828376 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f4ef41c-1edd-4739-8e3e-d6ec21e2923a" path="/var/lib/kubelet/pods/3f4ef41c-1edd-4739-8e3e-d6ec21e2923a/volumes"
Nov 29 07:34:37 crc kubenswrapper[4731]: I1129 07:34:37.831072 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52b7b8c0-4be6-4417-8834-313b5ca3ff69" path="/var/lib/kubelet/pods/52b7b8c0-4be6-4417-8834-313b5ca3ff69/volumes"
Nov 29 07:34:39 crc kubenswrapper[4731]: I1129 07:34:39.807695 4731 scope.go:117] "RemoveContainer" containerID="d40688246a689bbb5c446994fa1770405fb9959f1244e12a906aea1c3d8f3b92"
Nov 29 07:34:39 crc kubenswrapper[4731]: E1129 07:34:39.808556 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3"
Nov 29 07:34:42 crc kubenswrapper[4731]: I1129 07:34:42.030297 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-7xszw"]
Nov 29 07:34:42 crc kubenswrapper[4731]: I1129 07:34:42.042831 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-7xszw"]
Nov 29 07:34:43 crc kubenswrapper[4731]: I1129 07:34:43.031286 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-c061-account-create-update-6d6qj"]
Nov 29 07:34:43 crc kubenswrapper[4731]: I1129 07:34:43.043131 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-8f6e-account-create-update-nl2hh"]
Nov 29 07:34:43 crc kubenswrapper[4731]: I1129 07:34:43.060372 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-x2fsv"]
Nov 29 07:34:43 crc kubenswrapper[4731]: I1129 07:34:43.071064 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-8f6e-account-create-update-nl2hh"]
Nov 29 07:34:43 crc kubenswrapper[4731]: I1129 07:34:43.081476 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-c061-account-create-update-6d6qj"]
Nov 29 07:34:43 crc kubenswrapper[4731]: I1129 07:34:43.091035 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-x2fsv"]
Nov 29 07:34:43 crc kubenswrapper[4731]: I1129 07:34:43.819120 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0453cbcb-48ae-47ee-9a97-e9b4ab7da604" path="/var/lib/kubelet/pods/0453cbcb-48ae-47ee-9a97-e9b4ab7da604/volumes"
Nov 29 07:34:43 crc kubenswrapper[4731]: I1129 07:34:43.820748 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9acf0722-ab53-422c-8835-64a8615ad4e6" path="/var/lib/kubelet/pods/9acf0722-ab53-422c-8835-64a8615ad4e6/volumes"
Nov 29 07:34:43 crc kubenswrapper[4731]: I1129 07:34:43.821525 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7897925-7f61-47ea-b746-185a41fb854d" path="/var/lib/kubelet/pods/a7897925-7f61-47ea-b746-185a41fb854d/volumes"
Nov 29 07:34:43 crc kubenswrapper[4731]: I1129 07:34:43.822335 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6701aa7-6736-40e6-aaf2-195fcf43c455" path="/var/lib/kubelet/pods/c6701aa7-6736-40e6-aaf2-195fcf43c455/volumes"
Nov 29 07:34:52 crc kubenswrapper[4731]: I1129 07:34:52.807086 4731 scope.go:117] "RemoveContainer" containerID="d40688246a689bbb5c446994fa1770405fb9959f1244e12a906aea1c3d8f3b92"
Nov 29 07:34:52 crc kubenswrapper[4731]: E1129 07:34:52.808368 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3"
Nov 29 07:35:06 crc kubenswrapper[4731]: I1129 07:35:06.807860 4731 scope.go:117] "RemoveContainer" containerID="d40688246a689bbb5c446994fa1770405fb9959f1244e12a906aea1c3d8f3b92"
Nov 29 07:35:06 crc kubenswrapper[4731]: E1129 07:35:06.808856 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3"
Nov 29 07:35:08 crc kubenswrapper[4731]: I1129 07:35:08.042489 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-8w7f8"]
Nov 29 07:35:08 crc kubenswrapper[4731]: I1129 07:35:08.057344 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-8w7f8"]
Nov 29 07:35:09 crc kubenswrapper[4731]: I1129 07:35:09.817875 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6986d025-7080-457e-b2ce-88d8ae965c70" path="/var/lib/kubelet/pods/6986d025-7080-457e-b2ce-88d8ae965c70/volumes"
Nov 29 07:35:11 crc kubenswrapper[4731]: I1129 07:35:11.044919 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-wzpzv"]
Nov 29 07:35:11 crc kubenswrapper[4731]: I1129 07:35:11.055779 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-ss292"]
Nov 29 07:35:11 crc kubenswrapper[4731]: I1129 07:35:11.067295 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-22e7-account-create-update-svpcx"]
Nov 29 07:35:11 crc kubenswrapper[4731]: I1129 07:35:11.078400 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-ba51-account-create-update-drl7b"]
Nov 29 07:35:11 crc kubenswrapper[4731]: I1129 07:35:11.087993 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-1fef-account-create-update-k9ddk"]
Nov 29 07:35:11 crc kubenswrapper[4731]: I1129 07:35:11.097386 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-22e7-account-create-update-svpcx"]
Nov 29 07:35:11 crc kubenswrapper[4731]: I1129 07:35:11.108998 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-ss292"]
Nov 29 07:35:11 crc kubenswrapper[4731]: I1129 07:35:11.118793 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-wzpzv"]
Nov 29 07:35:11 crc kubenswrapper[4731]: I1129 07:35:11.127869 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-ba51-account-create-update-drl7b"]
Nov 29 07:35:11 crc kubenswrapper[4731]: I1129 07:35:11.136769 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-1fef-account-create-update-k9ddk"]
Nov 29 07:35:11 crc kubenswrapper[4731]: I1129 07:35:11.145697 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-ktq2t"]
Nov 29 07:35:11 crc kubenswrapper[4731]: I1129 07:35:11.156205 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-ktq2t"]
Nov 29 07:35:11 crc kubenswrapper[4731]: I1129 07:35:11.818893 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="450170e3-d7cb-4283-bae9-3350a8558f66" path="/var/lib/kubelet/pods/450170e3-d7cb-4283-bae9-3350a8558f66/volumes"
Nov 29 07:35:11 crc kubenswrapper[4731]: I1129 07:35:11.819465 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="456fff3a-5ed5-4def-b25d-3923d97a3577" path="/var/lib/kubelet/pods/456fff3a-5ed5-4def-b25d-3923d97a3577/volumes"
Nov 29 07:35:11 crc kubenswrapper[4731]: I1129 07:35:11.820835 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b04cdb0-e1e8-4807-8fd3-6f2086497c72" path="/var/lib/kubelet/pods/5b04cdb0-e1e8-4807-8fd3-6f2086497c72/volumes"
Nov 29 07:35:11 crc kubenswrapper[4731]: I1129 07:35:11.824199 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94728e01-e829-4d10-9311-defe6cd10ff9" path="/var/lib/kubelet/pods/94728e01-e829-4d10-9311-defe6cd10ff9/volumes"
Nov 29 07:35:11 crc kubenswrapper[4731]: I1129 07:35:11.825098 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9abe846-7302-4ea1-8423-bc1a2e81d051" path="/var/lib/kubelet/pods/a9abe846-7302-4ea1-8423-bc1a2e81d051/volumes"
Nov 29 07:35:11 crc kubenswrapper[4731]: I1129 07:35:11.826017 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d75320ff-8458-4ed0-977c-46e972527687" path="/var/lib/kubelet/pods/d75320ff-8458-4ed0-977c-46e972527687/volumes"
Nov 29 07:35:18 crc kubenswrapper[4731]: I1129 07:35:18.952781 4731 scope.go:117] "RemoveContainer" containerID="3a9b264157ef5c375c44ced99437adea4f91d0bc7471d42c109d31ab6ce49779"
Nov 29 07:35:19 crc kubenswrapper[4731]: I1129 07:35:19.017499 4731 scope.go:117] "RemoveContainer" containerID="099b944ac64398ce6681bb304100e17570bba32c94c5db5f6727a5589a88b1b5"
Nov 29 07:35:19 crc kubenswrapper[4731]: I1129 07:35:19.077263 4731 scope.go:117] "RemoveContainer" containerID="e1de5374a22c79286e042c9445444e7faca5f78e241ca9d72ffffacd09e0b5a0"
Nov 29 07:35:19 crc kubenswrapper[4731]: I1129 07:35:19.146164 4731 scope.go:117] "RemoveContainer" containerID="7c043bf1c4856c80b5a659661e3f100f1300b9fc6a0d697d71be6985fbb51be4"
Nov 29 07:35:19 crc kubenswrapper[4731]: I1129 07:35:19.226487 4731 scope.go:117] "RemoveContainer" containerID="bef65603843ce1608eca89ef0e01614468f8947009e8acc57409db60c4b0ee29"
Nov 29 07:35:19 crc kubenswrapper[4731]: I1129 07:35:19.250304 4731 scope.go:117] "RemoveContainer" containerID="deb019eeb6ddd2972ff3e90715778ff0b00343c1833c5eb61c401f12bbe0b1dc"
Nov 29 07:35:19 crc kubenswrapper[4731]: I1129 07:35:19.276100 4731 scope.go:117] "RemoveContainer" containerID="3e522f983f4993d28b71672251b90ac846f2009f9e801c28d90b1bb603272c5d"
Nov 29 07:35:19 crc kubenswrapper[4731]: I1129 07:35:19.344265 4731 scope.go:117] "RemoveContainer" containerID="18b3c42a964d1e2df816c6a241a0d9500ac5585327741e5290d50e3631b900bf"
Nov 29 07:35:19 crc kubenswrapper[4731]: I1129 07:35:19.370276 4731 scope.go:117] "RemoveContainer" containerID="639c771777e2016fad42d31fe532b7bf9e1ee8e9b1092ee16ef8c069928fd77e"
Nov 29 07:35:19 crc kubenswrapper[4731]: I1129 07:35:19.404553 4731 scope.go:117] "RemoveContainer" containerID="2f11ef3592a58828467d08211ca586d70142bdc44a7061304720261feb1c6891"
Nov 29 07:35:19 crc kubenswrapper[4731]: I1129 07:35:19.431492 4731 scope.go:117] "RemoveContainer" containerID="3468f997d660ea5df6accff4a33b6e89ff448ee14f89f28c05e26e032fcc4d9f"
Nov 29 07:35:19 crc kubenswrapper[4731]: I1129 07:35:19.462530 4731 scope.go:117] "RemoveContainer" containerID="794cfd49569c4f1c58ed728cf18001d4a59cb2d7d42adc0e4ff2645d03b41421"
Nov 29 07:35:19 crc kubenswrapper[4731]: I1129 07:35:19.488160 4731 scope.go:117] "RemoveContainer" containerID="36c2927af8a4f5c9819180a3824e1ff07f85d80a54ea662674351f2aef39604b"
Nov 29 07:35:19 crc kubenswrapper[4731]: I1129 07:35:19.509739 4731 scope.go:117] "RemoveContainer" containerID="e9e1c34e8a156b051cbc319b181be92b2048c71c8e97be6d705bb890b58a4f00"
Nov 29 07:35:20 crc kubenswrapper[4731]: I1129 07:35:20.806762 4731 scope.go:117] "RemoveContainer" containerID="d40688246a689bbb5c446994fa1770405fb9959f1244e12a906aea1c3d8f3b92"
Nov 29 07:35:20 crc kubenswrapper[4731]: E1129 07:35:20.807878 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3"
Nov 29 07:35:26 crc kubenswrapper[4731]: I1129 07:35:26.079710 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-nft5q"]
Nov 29 07:35:26 crc kubenswrapper[4731]: I1129 07:35:26.092189 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-nft5q"]
Nov 29 07:35:27 crc kubenswrapper[4731]: I1129 07:35:27.819025 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcdaaa2a-ccbf-4158-8c8a-d5836dbdd111" path="/var/lib/kubelet/pods/dcdaaa2a-ccbf-4158-8c8a-d5836dbdd111/volumes"
Nov 29 07:35:31 crc kubenswrapper[4731]: I1129 07:35:31.150999 4731 generic.go:334] "Generic (PLEG): container finished" podID="20126f8e-6e2a-4035-862f-ab9c789511a0" containerID="20f9cc504c2a68fa2d1bd29e4405da4bdaeaaa5c78cd07f0ea55403b2b35f8f8" exitCode=0
Nov 29 07:35:31 crc kubenswrapper[4731]: I1129 07:35:31.151095 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lqthj" event={"ID":"20126f8e-6e2a-4035-862f-ab9c789511a0","Type":"ContainerDied","Data":"20f9cc504c2a68fa2d1bd29e4405da4bdaeaaa5c78cd07f0ea55403b2b35f8f8"}
Nov 29 07:35:31 crc kubenswrapper[4731]: I1129 07:35:31.819414 4731 scope.go:117] "RemoveContainer" containerID="d40688246a689bbb5c446994fa1770405fb9959f1244e12a906aea1c3d8f3b92"
Nov 29 07:35:31 crc kubenswrapper[4731]: E1129 07:35:31.820940 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3"
Nov 29 07:35:32 crc kubenswrapper[4731]: I1129 07:35:32.592529 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lqthj"
Nov 29 07:35:32 crc kubenswrapper[4731]: I1129 07:35:32.739262 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z8dz4\" (UniqueName: \"kubernetes.io/projected/20126f8e-6e2a-4035-862f-ab9c789511a0-kube-api-access-z8dz4\") pod \"20126f8e-6e2a-4035-862f-ab9c789511a0\" (UID: \"20126f8e-6e2a-4035-862f-ab9c789511a0\") "
Nov 29 07:35:32 crc kubenswrapper[4731]: I1129 07:35:32.739860 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/20126f8e-6e2a-4035-862f-ab9c789511a0-inventory\") pod \"20126f8e-6e2a-4035-862f-ab9c789511a0\" (UID: \"20126f8e-6e2a-4035-862f-ab9c789511a0\") "
Nov 29 07:35:32 crc kubenswrapper[4731]: I1129 07:35:32.739971 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20126f8e-6e2a-4035-862f-ab9c789511a0-bootstrap-combined-ca-bundle\") pod \"20126f8e-6e2a-4035-862f-ab9c789511a0\" (UID: \"20126f8e-6e2a-4035-862f-ab9c789511a0\") "
Nov 29 07:35:32 crc kubenswrapper[4731]: I1129 07:35:32.740150 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/20126f8e-6e2a-4035-862f-ab9c789511a0-ssh-key\") pod \"20126f8e-6e2a-4035-862f-ab9c789511a0\" (UID: \"20126f8e-6e2a-4035-862f-ab9c789511a0\") "
Nov 29 07:35:32 crc kubenswrapper[4731]: I1129 07:35:32.746932 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20126f8e-6e2a-4035-862f-ab9c789511a0-kube-api-access-z8dz4" (OuterVolumeSpecName: "kube-api-access-z8dz4") pod "20126f8e-6e2a-4035-862f-ab9c789511a0" (UID: "20126f8e-6e2a-4035-862f-ab9c789511a0"). InnerVolumeSpecName "kube-api-access-z8dz4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 29 07:35:32 crc kubenswrapper[4731]: I1129 07:35:32.747841 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20126f8e-6e2a-4035-862f-ab9c789511a0-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "20126f8e-6e2a-4035-862f-ab9c789511a0" (UID: "20126f8e-6e2a-4035-862f-ab9c789511a0"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:35:32 crc kubenswrapper[4731]: I1129 07:35:32.776281 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20126f8e-6e2a-4035-862f-ab9c789511a0-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "20126f8e-6e2a-4035-862f-ab9c789511a0" (UID: "20126f8e-6e2a-4035-862f-ab9c789511a0"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:35:32 crc kubenswrapper[4731]: I1129 07:35:32.777735 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20126f8e-6e2a-4035-862f-ab9c789511a0-inventory" (OuterVolumeSpecName: "inventory") pod "20126f8e-6e2a-4035-862f-ab9c789511a0" (UID: "20126f8e-6e2a-4035-862f-ab9c789511a0"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:35:32 crc kubenswrapper[4731]: I1129 07:35:32.843778 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z8dz4\" (UniqueName: \"kubernetes.io/projected/20126f8e-6e2a-4035-862f-ab9c789511a0-kube-api-access-z8dz4\") on node \"crc\" DevicePath \"\""
Nov 29 07:35:32 crc kubenswrapper[4731]: I1129 07:35:32.843834 4731 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20126f8e-6e2a-4035-862f-ab9c789511a0-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 29 07:35:32 crc kubenswrapper[4731]: I1129 07:35:32.843846 4731 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/20126f8e-6e2a-4035-862f-ab9c789511a0-inventory\") on node \"crc\" DevicePath \"\""
Nov 29 07:35:32 crc kubenswrapper[4731]: I1129 07:35:32.843856 4731 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/20126f8e-6e2a-4035-862f-ab9c789511a0-ssh-key\") on node \"crc\" DevicePath \"\""
Nov 29 07:35:33 crc kubenswrapper[4731]: I1129 07:35:33.174380 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lqthj" event={"ID":"20126f8e-6e2a-4035-862f-ab9c789511a0","Type":"ContainerDied","Data":"f086913eced082ec3318cfc9c9e6a34321c8ceae14a2988e6f38fabd26460a02"}
Nov 29 07:35:33 crc kubenswrapper[4731]: I1129 07:35:33.174466 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f086913eced082ec3318cfc9c9e6a34321c8ceae14a2988e6f38fabd26460a02"
Nov 29 07:35:33 crc kubenswrapper[4731]: I1129 07:35:33.174492 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lqthj"
Nov 29 07:35:33 crc kubenswrapper[4731]: I1129 07:35:33.270967 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-47scr"]
Nov 29 07:35:33 crc kubenswrapper[4731]: E1129 07:35:33.271514 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bd434ad-bd66-49a3-97b8-2954b0e93c27" containerName="extract-utilities"
Nov 29 07:35:33 crc kubenswrapper[4731]: I1129 07:35:33.271538 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bd434ad-bd66-49a3-97b8-2954b0e93c27" containerName="extract-utilities"
Nov 29 07:35:33 crc kubenswrapper[4731]: E1129 07:35:33.271547 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bd434ad-bd66-49a3-97b8-2954b0e93c27" containerName="registry-server"
Nov 29 07:35:33 crc kubenswrapper[4731]: I1129 07:35:33.271554 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bd434ad-bd66-49a3-97b8-2954b0e93c27" containerName="registry-server"
Nov 29 07:35:33 crc kubenswrapper[4731]: E1129 07:35:33.271607 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bd434ad-bd66-49a3-97b8-2954b0e93c27" containerName="extract-content"
Nov 29 07:35:33 crc kubenswrapper[4731]: I1129 07:35:33.271615 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bd434ad-bd66-49a3-97b8-2954b0e93c27" containerName="extract-content"
Nov 29 07:35:33 crc kubenswrapper[4731]: E1129 07:35:33.271638 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20126f8e-6e2a-4035-862f-ab9c789511a0" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Nov 29 07:35:33 crc kubenswrapper[4731]: I1129 07:35:33.271648 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="20126f8e-6e2a-4035-862f-ab9c789511a0" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Nov 29 07:35:33 crc kubenswrapper[4731]: I1129 07:35:33.271881 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="20126f8e-6e2a-4035-862f-ab9c789511a0" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Nov 29 07:35:33 crc kubenswrapper[4731]: I1129 07:35:33.271926 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="7bd434ad-bd66-49a3-97b8-2954b0e93c27" containerName="registry-server"
Nov 29 07:35:33 crc kubenswrapper[4731]: I1129 07:35:33.272970 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-47scr"
Nov 29 07:35:33 crc kubenswrapper[4731]: I1129 07:35:33.275683 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Nov 29 07:35:33 crc kubenswrapper[4731]: I1129 07:35:33.275717 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Nov 29 07:35:33 crc kubenswrapper[4731]: I1129 07:35:33.275942 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 29 07:35:33 crc kubenswrapper[4731]: I1129 07:35:33.276165 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nvl6q"
Nov 29 07:35:33 crc kubenswrapper[4731]: I1129 07:35:33.286857 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-47scr"]
Nov 29 07:35:33 crc kubenswrapper[4731]: I1129 07:35:33.360497 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/00ca821e-c39a-48c3-8318-2a09e190bdcf-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-47scr\" (UID: \"00ca821e-c39a-48c3-8318-2a09e190bdcf\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-47scr"
Nov 29 07:35:33 crc kubenswrapper[4731]: I1129 07:35:33.360734 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6m6r\" (UniqueName: \"kubernetes.io/projected/00ca821e-c39a-48c3-8318-2a09e190bdcf-kube-api-access-x6m6r\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-47scr\" (UID: \"00ca821e-c39a-48c3-8318-2a09e190bdcf\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-47scr"
Nov 29 07:35:33 crc kubenswrapper[4731]: I1129 07:35:33.360769 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/00ca821e-c39a-48c3-8318-2a09e190bdcf-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-47scr\" (UID: \"00ca821e-c39a-48c3-8318-2a09e190bdcf\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-47scr"
Nov 29 07:35:33 crc kubenswrapper[4731]: I1129 07:35:33.462868 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/00ca821e-c39a-48c3-8318-2a09e190bdcf-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-47scr\" (UID: \"00ca821e-c39a-48c3-8318-2a09e190bdcf\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-47scr"
Nov 29 07:35:33 crc kubenswrapper[4731]: I1129 07:35:33.463462 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6m6r\" (UniqueName: \"kubernetes.io/projected/00ca821e-c39a-48c3-8318-2a09e190bdcf-kube-api-access-x6m6r\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-47scr\" (UID: \"00ca821e-c39a-48c3-8318-2a09e190bdcf\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-47scr"
Nov 29 07:35:33 crc kubenswrapper[4731]: I1129 07:35:33.463505 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/00ca821e-c39a-48c3-8318-2a09e190bdcf-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-47scr\" (UID: \"00ca821e-c39a-48c3-8318-2a09e190bdcf\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-47scr"
Nov 29 07:35:33 crc kubenswrapper[4731]: I1129 07:35:33.470581 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/00ca821e-c39a-48c3-8318-2a09e190bdcf-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-47scr\" (UID: \"00ca821e-c39a-48c3-8318-2a09e190bdcf\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-47scr"
Nov 29 07:35:33 crc kubenswrapper[4731]: I1129 07:35:33.472462 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/00ca821e-c39a-48c3-8318-2a09e190bdcf-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-47scr\" (UID: \"00ca821e-c39a-48c3-8318-2a09e190bdcf\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-47scr"
Nov 29 07:35:33 crc kubenswrapper[4731]: I1129 07:35:33.483135 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6m6r\" (UniqueName: \"kubernetes.io/projected/00ca821e-c39a-48c3-8318-2a09e190bdcf-kube-api-access-x6m6r\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-47scr\" (UID: \"00ca821e-c39a-48c3-8318-2a09e190bdcf\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-47scr"
Nov 29 07:35:33 crc kubenswrapper[4731]: I1129 07:35:33.594495 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-47scr"
Nov 29 07:35:34 crc kubenswrapper[4731]: I1129 07:35:34.153955 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-47scr"]
Nov 29 07:35:34 crc kubenswrapper[4731]: I1129 07:35:34.160925 4731 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Nov 29 07:35:34 crc kubenswrapper[4731]: I1129 07:35:34.189744 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-47scr" event={"ID":"00ca821e-c39a-48c3-8318-2a09e190bdcf","Type":"ContainerStarted","Data":"ef405e7c4a89b15bcec286f5d1ddc6486ab0aba580e9a586db88502c053f9bee"}
Nov 29 07:35:35 crc kubenswrapper[4731]: I1129 07:35:35.203220 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-47scr" event={"ID":"00ca821e-c39a-48c3-8318-2a09e190bdcf","Type":"ContainerStarted","Data":"828c832bb8075ce4265acfd294f7b3da7402fc98c0527168c0c56f9dbf8316fc"}
Nov 29 07:35:35 crc kubenswrapper[4731]: I1129 07:35:35.229445 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-47scr" podStartSLOduration=1.612237957 podStartE2EDuration="2.229418036s" podCreationTimestamp="2025-11-29 07:35:33 +0000 UTC" firstStartedPulling="2025-11-29 07:35:34.160628951 +0000 UTC m=+1773.050990054" lastFinishedPulling="2025-11-29 07:35:34.77780903 +0000 UTC m=+1773.668170133" observedRunningTime="2025-11-29 07:35:35.220472808 +0000 UTC m=+1774.110833911" watchObservedRunningTime="2025-11-29 07:35:35.229418036 +0000 UTC m=+1774.119779139"
Nov 29 07:35:44 crc kubenswrapper[4731]: I1129 07:35:44.807480 4731 scope.go:117] "RemoveContainer" containerID="d40688246a689bbb5c446994fa1770405fb9959f1244e12a906aea1c3d8f3b92"
Nov 29 07:35:44 crc kubenswrapper[4731]: E1129 07:35:44.808478 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3"
Nov 29 07:35:58 crc kubenswrapper[4731]: I1129 07:35:58.808338 4731 scope.go:117] "RemoveContainer" containerID="d40688246a689bbb5c446994fa1770405fb9959f1244e12a906aea1c3d8f3b92"
Nov 29 07:35:58 crc kubenswrapper[4731]: E1129 07:35:58.809788 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3"
Nov 29 07:36:13 crc kubenswrapper[4731]: I1129 07:36:13.812919 4731 scope.go:117] "RemoveContainer" containerID="d40688246a689bbb5c446994fa1770405fb9959f1244e12a906aea1c3d8f3b92"
Nov 29 07:36:14 crc kubenswrapper[4731]: I1129 07:36:14.626231 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" event={"ID":"2302dbb7-38db-4752-a5d0-2d055da3aec3","Type":"ContainerStarted","Data":"fea8fd5b340206f2b38d570102ab425e9491bb5208055282d97c11b2fcd67d4e"}
Nov 29 07:36:18 crc kubenswrapper[4731]: I1129 07:36:18.047129 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-j6pdq"]
Nov 29 07:36:18 crc kubenswrapper[4731]: I1129 07:36:18.062088 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-j6pdq"]
Nov 29 07:36:19 crc kubenswrapper[4731]: I1129 07:36:19.819354 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbe696e8-b9af-4710-a81f-4fb69481cf3b" path="/var/lib/kubelet/pods/dbe696e8-b9af-4710-a81f-4fb69481cf3b/volumes"
Nov 29 07:36:19 crc kubenswrapper[4731]: I1129 07:36:19.884533 4731 scope.go:117] "RemoveContainer" containerID="6589041cde6b21ef09af6738cc65ac22979f13b42abafd743cfd680e5ed860b9"
Nov 29 07:36:19 crc kubenswrapper[4731]: I1129 07:36:19.927640 4731 scope.go:117] "RemoveContainer" containerID="012d7698d223529fd48395017381af41ba5acdf3fc9fe83a15e328727eaaafff"
Nov 29 07:36:21 crc kubenswrapper[4731]: I1129 07:36:21.033369 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-x6bxr"]
Nov 29 07:36:21 crc kubenswrapper[4731]: I1129 07:36:21.044044 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-fbk9s"]
Nov 29 07:36:21 crc kubenswrapper[4731]: I1129 07:36:21.053403 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-x6bxr"]
Nov 29 07:36:21 crc kubenswrapper[4731]: I1129 07:36:21.080679 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-fbk9s"]
Nov 29 07:36:21 crc kubenswrapper[4731]: I1129 07:36:21.835497 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13bcd648-c6e2-4b6e-a660-da2f47f09a06" path="/var/lib/kubelet/pods/13bcd648-c6e2-4b6e-a660-da2f47f09a06/volumes"
Nov 29 07:36:21 crc kubenswrapper[4731]: I1129 07:36:21.838095 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4589d89-a761-4510-bd4c-55a6a3e620c4" path="/var/lib/kubelet/pods/a4589d89-a761-4510-bd4c-55a6a3e620c4/volumes"
Nov 29 07:36:31 crc kubenswrapper[4731]: I1129 07:36:31.040161 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-qjjnr"]
Nov 29 07:36:31 crc kubenswrapper[4731]: I1129 07:36:31.048153 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-qjjnr"]
Nov 29 07:36:31 crc kubenswrapper[4731]: I1129 07:36:31.822765 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d843330-ffae-4bc9-a8b3-c2df891a1aae" path="/var/lib/kubelet/pods/2d843330-ffae-4bc9-a8b3-c2df891a1aae/volumes"
Nov 29 07:36:32 crc kubenswrapper[4731]: I1129 07:36:32.040322 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-zcx9z"]
Nov 29 07:36:32 crc kubenswrapper[4731]: I1129 07:36:32.053453 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-zcx9z"]
Nov 29 07:36:33 crc kubenswrapper[4731]: I1129 07:36:33.832999 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9af027cc-cbd4-4f3a-ad25-2ef5b126d590" path="/var/lib/kubelet/pods/9af027cc-cbd4-4f3a-ad25-2ef5b126d590/volumes"
Nov 29 07:37:20 crc kubenswrapper[4731]: I1129 07:37:20.054704 4731 scope.go:117] "RemoveContainer" containerID="ba4cba12c8c3bee5b3db297483a31412626bf72e41d5950966b7bcad6321e931"
Nov 29 07:37:20 crc kubenswrapper[4731]: I1129 07:37:20.095144 4731 scope.go:117] "RemoveContainer" containerID="bb2475f193ed50de45ecd1da5f6d6fad85f593e9dd2586d0dec67678e4586bdc"
Nov 29 07:37:20 crc kubenswrapper[4731]: I1129 07:37:20.160993 4731 scope.go:117] "RemoveContainer" containerID="35baaa7729762d17b4d7d6f2de4d3968e88ea07e8ec8701ab4a49abef88ae6f3"
Nov 29 07:37:20 crc kubenswrapper[4731]: I1129 07:37:20.240384 4731 scope.go:117] "RemoveContainer" containerID="e1fd7555949e5a475b2a562e40c9e94428dd257ddf922b4101954f25369688f3"
Nov 29 07:37:25 crc kubenswrapper[4731]: I1129 07:37:25.356883 4731 generic.go:334] "Generic (PLEG): container finished" podID="00ca821e-c39a-48c3-8318-2a09e190bdcf" containerID="828c832bb8075ce4265acfd294f7b3da7402fc98c0527168c0c56f9dbf8316fc" exitCode=0
Nov 29 07:37:25 crc kubenswrapper[4731]: I1129 07:37:25.356962 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-47scr" event={"ID":"00ca821e-c39a-48c3-8318-2a09e190bdcf","Type":"ContainerDied","Data":"828c832bb8075ce4265acfd294f7b3da7402fc98c0527168c0c56f9dbf8316fc"}
Nov 29 07:37:26 crc kubenswrapper[4731]: I1129 07:37:26.825460 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-47scr"
Nov 29 07:37:26 crc kubenswrapper[4731]: I1129 07:37:26.933433 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/00ca821e-c39a-48c3-8318-2a09e190bdcf-ssh-key\") pod \"00ca821e-c39a-48c3-8318-2a09e190bdcf\" (UID: \"00ca821e-c39a-48c3-8318-2a09e190bdcf\") "
Nov 29 07:37:26 crc kubenswrapper[4731]: I1129 07:37:26.933661 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x6m6r\" (UniqueName: \"kubernetes.io/projected/00ca821e-c39a-48c3-8318-2a09e190bdcf-kube-api-access-x6m6r\") pod \"00ca821e-c39a-48c3-8318-2a09e190bdcf\" (UID: \"00ca821e-c39a-48c3-8318-2a09e190bdcf\") "
Nov 29 07:37:26 crc kubenswrapper[4731]: I1129 07:37:26.933866 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/00ca821e-c39a-48c3-8318-2a09e190bdcf-inventory\") pod \"00ca821e-c39a-48c3-8318-2a09e190bdcf\" (UID: \"00ca821e-c39a-48c3-8318-2a09e190bdcf\") "
Nov 29 07:37:26 crc kubenswrapper[4731]: I1129 07:37:26.940982 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00ca821e-c39a-48c3-8318-2a09e190bdcf-kube-api-access-x6m6r" (OuterVolumeSpecName: "kube-api-access-x6m6r") pod "00ca821e-c39a-48c3-8318-2a09e190bdcf" (UID: "00ca821e-c39a-48c3-8318-2a09e190bdcf"). InnerVolumeSpecName "kube-api-access-x6m6r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 29 07:37:26 crc kubenswrapper[4731]: I1129 07:37:26.964734 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00ca821e-c39a-48c3-8318-2a09e190bdcf-inventory" (OuterVolumeSpecName: "inventory") pod "00ca821e-c39a-48c3-8318-2a09e190bdcf" (UID: "00ca821e-c39a-48c3-8318-2a09e190bdcf"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:37:26 crc kubenswrapper[4731]: I1129 07:37:26.975497 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00ca821e-c39a-48c3-8318-2a09e190bdcf-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "00ca821e-c39a-48c3-8318-2a09e190bdcf" (UID: "00ca821e-c39a-48c3-8318-2a09e190bdcf"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:37:27 crc kubenswrapper[4731]: I1129 07:37:27.036665 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x6m6r\" (UniqueName: \"kubernetes.io/projected/00ca821e-c39a-48c3-8318-2a09e190bdcf-kube-api-access-x6m6r\") on node \"crc\" DevicePath \"\""
Nov 29 07:37:27 crc kubenswrapper[4731]: I1129 07:37:27.036728 4731 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/00ca821e-c39a-48c3-8318-2a09e190bdcf-inventory\") on node \"crc\" DevicePath \"\""
Nov 29 07:37:27 crc kubenswrapper[4731]: I1129 07:37:27.036740 4731 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/00ca821e-c39a-48c3-8318-2a09e190bdcf-ssh-key\") on node \"crc\" DevicePath \"\""
Nov 29 07:37:27 crc kubenswrapper[4731]: I1129 07:37:27.382523 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-47scr"
event={"ID":"00ca821e-c39a-48c3-8318-2a09e190bdcf","Type":"ContainerDied","Data":"ef405e7c4a89b15bcec286f5d1ddc6486ab0aba580e9a586db88502c053f9bee"} Nov 29 07:37:27 crc kubenswrapper[4731]: I1129 07:37:27.382612 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef405e7c4a89b15bcec286f5d1ddc6486ab0aba580e9a586db88502c053f9bee" Nov 29 07:37:27 crc kubenswrapper[4731]: I1129 07:37:27.382870 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-47scr" Nov 29 07:37:27 crc kubenswrapper[4731]: I1129 07:37:27.493171 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9n48n"] Nov 29 07:37:27 crc kubenswrapper[4731]: E1129 07:37:27.493642 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00ca821e-c39a-48c3-8318-2a09e190bdcf" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Nov 29 07:37:27 crc kubenswrapper[4731]: I1129 07:37:27.493663 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="00ca821e-c39a-48c3-8318-2a09e190bdcf" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Nov 29 07:37:27 crc kubenswrapper[4731]: I1129 07:37:27.493918 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="00ca821e-c39a-48c3-8318-2a09e190bdcf" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Nov 29 07:37:27 crc kubenswrapper[4731]: I1129 07:37:27.494645 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9n48n" Nov 29 07:37:27 crc kubenswrapper[4731]: I1129 07:37:27.498427 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nvl6q" Nov 29 07:37:27 crc kubenswrapper[4731]: I1129 07:37:27.499113 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 29 07:37:27 crc kubenswrapper[4731]: I1129 07:37:27.499458 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 29 07:37:27 crc kubenswrapper[4731]: I1129 07:37:27.499643 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 29 07:37:27 crc kubenswrapper[4731]: I1129 07:37:27.508488 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9n48n"] Nov 29 07:37:27 crc kubenswrapper[4731]: I1129 07:37:27.649256 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2ff0b9fa-bd65-4c7a-af24-2e4bd4ce5045-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-9n48n\" (UID: \"2ff0b9fa-bd65-4c7a-af24-2e4bd4ce5045\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9n48n" Nov 29 07:37:27 crc kubenswrapper[4731]: I1129 07:37:27.649847 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2ff0b9fa-bd65-4c7a-af24-2e4bd4ce5045-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-9n48n\" (UID: \"2ff0b9fa-bd65-4c7a-af24-2e4bd4ce5045\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9n48n" Nov 29 07:37:27 crc kubenswrapper[4731]: I1129 07:37:27.650039 4731 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27lbf\" (UniqueName: \"kubernetes.io/projected/2ff0b9fa-bd65-4c7a-af24-2e4bd4ce5045-kube-api-access-27lbf\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-9n48n\" (UID: \"2ff0b9fa-bd65-4c7a-af24-2e4bd4ce5045\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9n48n" Nov 29 07:37:27 crc kubenswrapper[4731]: I1129 07:37:27.752341 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2ff0b9fa-bd65-4c7a-af24-2e4bd4ce5045-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-9n48n\" (UID: \"2ff0b9fa-bd65-4c7a-af24-2e4bd4ce5045\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9n48n" Nov 29 07:37:27 crc kubenswrapper[4731]: I1129 07:37:27.752500 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2ff0b9fa-bd65-4c7a-af24-2e4bd4ce5045-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-9n48n\" (UID: \"2ff0b9fa-bd65-4c7a-af24-2e4bd4ce5045\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9n48n" Nov 29 07:37:27 crc kubenswrapper[4731]: I1129 07:37:27.752559 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27lbf\" (UniqueName: \"kubernetes.io/projected/2ff0b9fa-bd65-4c7a-af24-2e4bd4ce5045-kube-api-access-27lbf\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-9n48n\" (UID: \"2ff0b9fa-bd65-4c7a-af24-2e4bd4ce5045\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9n48n" Nov 29 07:37:27 crc kubenswrapper[4731]: I1129 07:37:27.758684 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2ff0b9fa-bd65-4c7a-af24-2e4bd4ce5045-inventory\") pod 
\"configure-network-edpm-deployment-openstack-edpm-ipam-9n48n\" (UID: \"2ff0b9fa-bd65-4c7a-af24-2e4bd4ce5045\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9n48n" Nov 29 07:37:27 crc kubenswrapper[4731]: I1129 07:37:27.760303 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2ff0b9fa-bd65-4c7a-af24-2e4bd4ce5045-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-9n48n\" (UID: \"2ff0b9fa-bd65-4c7a-af24-2e4bd4ce5045\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9n48n" Nov 29 07:37:27 crc kubenswrapper[4731]: I1129 07:37:27.795738 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27lbf\" (UniqueName: \"kubernetes.io/projected/2ff0b9fa-bd65-4c7a-af24-2e4bd4ce5045-kube-api-access-27lbf\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-9n48n\" (UID: \"2ff0b9fa-bd65-4c7a-af24-2e4bd4ce5045\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9n48n" Nov 29 07:37:27 crc kubenswrapper[4731]: I1129 07:37:27.816512 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9n48n" Nov 29 07:37:28 crc kubenswrapper[4731]: I1129 07:37:28.378827 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9n48n"] Nov 29 07:37:28 crc kubenswrapper[4731]: W1129 07:37:28.387436 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2ff0b9fa_bd65_4c7a_af24_2e4bd4ce5045.slice/crio-958923d7166e0c08fb49ed20c7aa6784bf22c33ae7bf0ec65070e8d5f8a0bf86 WatchSource:0}: Error finding container 958923d7166e0c08fb49ed20c7aa6784bf22c33ae7bf0ec65070e8d5f8a0bf86: Status 404 returned error can't find the container with id 958923d7166e0c08fb49ed20c7aa6784bf22c33ae7bf0ec65070e8d5f8a0bf86 Nov 29 07:37:29 crc kubenswrapper[4731]: I1129 07:37:29.409945 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9n48n" event={"ID":"2ff0b9fa-bd65-4c7a-af24-2e4bd4ce5045","Type":"ContainerStarted","Data":"958923d7166e0c08fb49ed20c7aa6784bf22c33ae7bf0ec65070e8d5f8a0bf86"} Nov 29 07:37:30 crc kubenswrapper[4731]: I1129 07:37:30.422468 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9n48n" event={"ID":"2ff0b9fa-bd65-4c7a-af24-2e4bd4ce5045","Type":"ContainerStarted","Data":"4b82b6b4d38a913071098ad705f4a1f83def2326d3b5a65ab234e7a13f9e5d81"} Nov 29 07:37:30 crc kubenswrapper[4731]: I1129 07:37:30.442116 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9n48n" podStartSLOduration=1.946938555 podStartE2EDuration="3.442091182s" podCreationTimestamp="2025-11-29 07:37:27 +0000 UTC" firstStartedPulling="2025-11-29 07:37:28.391156125 +0000 UTC m=+1887.281517228" lastFinishedPulling="2025-11-29 07:37:29.886308762 +0000 
UTC m=+1888.776669855" observedRunningTime="2025-11-29 07:37:30.438854719 +0000 UTC m=+1889.329215822" watchObservedRunningTime="2025-11-29 07:37:30.442091182 +0000 UTC m=+1889.332452295" Nov 29 07:37:47 crc kubenswrapper[4731]: I1129 07:37:47.074589 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-2711-account-create-update-b2w2h"] Nov 29 07:37:47 crc kubenswrapper[4731]: I1129 07:37:47.083936 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-6mk6z"] Nov 29 07:37:47 crc kubenswrapper[4731]: I1129 07:37:47.091450 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-2711-account-create-update-b2w2h"] Nov 29 07:37:47 crc kubenswrapper[4731]: I1129 07:37:47.098968 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-e0d1-account-create-update-7ppfv"] Nov 29 07:37:47 crc kubenswrapper[4731]: I1129 07:37:47.112471 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-6mk6z"] Nov 29 07:37:47 crc kubenswrapper[4731]: I1129 07:37:47.121649 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-e0d1-account-create-update-7ppfv"] Nov 29 07:37:47 crc kubenswrapper[4731]: I1129 07:37:47.818399 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7590e82-ed2d-42d4-ae30-b581dc4517b9" path="/var/lib/kubelet/pods/c7590e82-ed2d-42d4-ae30-b581dc4517b9/volumes" Nov 29 07:37:47 crc kubenswrapper[4731]: I1129 07:37:47.819159 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5720290-7d84-4d00-bf6a-8665ccc9cd09" path="/var/lib/kubelet/pods/e5720290-7d84-4d00-bf6a-8665ccc9cd09/volumes" Nov 29 07:37:47 crc kubenswrapper[4731]: I1129 07:37:47.819818 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec906d2b-9805-4e9b-8273-80a3488c76e5" path="/var/lib/kubelet/pods/ec906d2b-9805-4e9b-8273-80a3488c76e5/volumes" Nov 29 07:37:48 crc 
kubenswrapper[4731]: I1129 07:37:48.033105 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-76xc2"] Nov 29 07:37:48 crc kubenswrapper[4731]: I1129 07:37:48.044521 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-jtd4s"] Nov 29 07:37:48 crc kubenswrapper[4731]: I1129 07:37:48.057170 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-c9a4-account-create-update-dwcqz"] Nov 29 07:37:48 crc kubenswrapper[4731]: I1129 07:37:48.067150 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-jtd4s"] Nov 29 07:37:48 crc kubenswrapper[4731]: I1129 07:37:48.076971 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-c9a4-account-create-update-dwcqz"] Nov 29 07:37:48 crc kubenswrapper[4731]: I1129 07:37:48.088073 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-76xc2"] Nov 29 07:37:49 crc kubenswrapper[4731]: I1129 07:37:49.818722 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37f74f3d-e81b-445f-b4df-09f17e389b52" path="/var/lib/kubelet/pods/37f74f3d-e81b-445f-b4df-09f17e389b52/volumes" Nov 29 07:37:49 crc kubenswrapper[4731]: I1129 07:37:49.819955 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aea23e96-8b0c-413c-9240-80f8ecd2af01" path="/var/lib/kubelet/pods/aea23e96-8b0c-413c-9240-80f8ecd2af01/volumes" Nov 29 07:37:49 crc kubenswrapper[4731]: I1129 07:37:49.820645 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9d5f400-fe07-4f0f-ae45-b6055e4908fc" path="/var/lib/kubelet/pods/d9d5f400-fe07-4f0f-ae45-b6055e4908fc/volumes" Nov 29 07:38:20 crc kubenswrapper[4731]: I1129 07:38:20.364251 4731 scope.go:117] "RemoveContainer" containerID="1d7a432469f2d12aa10d06a0b82a91292de909947f98fcb3665a73bfbea52bf5" Nov 29 07:38:20 crc kubenswrapper[4731]: I1129 07:38:20.401168 4731 scope.go:117] 
"RemoveContainer" containerID="ef0ca46537a145f02f24cd242cabc45acaf029a80f5b8961b3fa4a112fe23a9d" Nov 29 07:38:20 crc kubenswrapper[4731]: I1129 07:38:20.462182 4731 scope.go:117] "RemoveContainer" containerID="674f7f4b2e738914d0a6f19b7026f8bfdf2616bd8b47ab5718a9e55b0f65f98d" Nov 29 07:38:20 crc kubenswrapper[4731]: I1129 07:38:20.507818 4731 scope.go:117] "RemoveContainer" containerID="26fe4eb63f9dca14c377d6ff5b4f8ccebd002db978d870dbce923a44b8d8f98e" Nov 29 07:38:20 crc kubenswrapper[4731]: I1129 07:38:20.583856 4731 scope.go:117] "RemoveContainer" containerID="c0ce0ef79c86515907bc3adb596fadd91c5c5e5faa7e2b35ef9594cebf198ff0" Nov 29 07:38:20 crc kubenswrapper[4731]: I1129 07:38:20.616379 4731 scope.go:117] "RemoveContainer" containerID="cbc59b9d53c41b00bb5058b86e04f88cdae10a258658edd5f403067b3beff8c6" Nov 29 07:38:24 crc kubenswrapper[4731]: I1129 07:38:24.049087 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-ncqzw"] Nov 29 07:38:24 crc kubenswrapper[4731]: I1129 07:38:24.061853 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-ncqzw"] Nov 29 07:38:25 crc kubenswrapper[4731]: I1129 07:38:25.821082 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8ff79d0-d925-4219-8603-c5af185585f4" path="/var/lib/kubelet/pods/b8ff79d0-d925-4219-8603-c5af185585f4/volumes" Nov 29 07:38:33 crc kubenswrapper[4731]: I1129 07:38:33.003153 4731 patch_prober.go:28] interesting pod/machine-config-daemon-rscr8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:38:33 crc kubenswrapper[4731]: I1129 07:38:33.003801 4731 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:38:43 crc kubenswrapper[4731]: I1129 07:38:43.416359 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mww8m"] Nov 29 07:38:43 crc kubenswrapper[4731]: I1129 07:38:43.421387 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mww8m" Nov 29 07:38:43 crc kubenswrapper[4731]: I1129 07:38:43.441008 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mww8m"] Nov 29 07:38:43 crc kubenswrapper[4731]: I1129 07:38:43.629092 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ce842e5-ced7-45bd-8322-00f9c8418aa4-catalog-content\") pod \"redhat-operators-mww8m\" (UID: \"1ce842e5-ced7-45bd-8322-00f9c8418aa4\") " pod="openshift-marketplace/redhat-operators-mww8m" Nov 29 07:38:43 crc kubenswrapper[4731]: I1129 07:38:43.629215 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ce842e5-ced7-45bd-8322-00f9c8418aa4-utilities\") pod \"redhat-operators-mww8m\" (UID: \"1ce842e5-ced7-45bd-8322-00f9c8418aa4\") " pod="openshift-marketplace/redhat-operators-mww8m" Nov 29 07:38:43 crc kubenswrapper[4731]: I1129 07:38:43.629450 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7x2f\" (UniqueName: \"kubernetes.io/projected/1ce842e5-ced7-45bd-8322-00f9c8418aa4-kube-api-access-t7x2f\") pod \"redhat-operators-mww8m\" (UID: \"1ce842e5-ced7-45bd-8322-00f9c8418aa4\") " pod="openshift-marketplace/redhat-operators-mww8m" Nov 29 07:38:43 crc kubenswrapper[4731]: I1129 07:38:43.731917 4731 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ce842e5-ced7-45bd-8322-00f9c8418aa4-utilities\") pod \"redhat-operators-mww8m\" (UID: \"1ce842e5-ced7-45bd-8322-00f9c8418aa4\") " pod="openshift-marketplace/redhat-operators-mww8m" Nov 29 07:38:43 crc kubenswrapper[4731]: I1129 07:38:43.732006 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7x2f\" (UniqueName: \"kubernetes.io/projected/1ce842e5-ced7-45bd-8322-00f9c8418aa4-kube-api-access-t7x2f\") pod \"redhat-operators-mww8m\" (UID: \"1ce842e5-ced7-45bd-8322-00f9c8418aa4\") " pod="openshift-marketplace/redhat-operators-mww8m" Nov 29 07:38:43 crc kubenswrapper[4731]: I1129 07:38:43.732106 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ce842e5-ced7-45bd-8322-00f9c8418aa4-catalog-content\") pod \"redhat-operators-mww8m\" (UID: \"1ce842e5-ced7-45bd-8322-00f9c8418aa4\") " pod="openshift-marketplace/redhat-operators-mww8m" Nov 29 07:38:43 crc kubenswrapper[4731]: I1129 07:38:43.733213 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ce842e5-ced7-45bd-8322-00f9c8418aa4-utilities\") pod \"redhat-operators-mww8m\" (UID: \"1ce842e5-ced7-45bd-8322-00f9c8418aa4\") " pod="openshift-marketplace/redhat-operators-mww8m" Nov 29 07:38:43 crc kubenswrapper[4731]: I1129 07:38:43.733647 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ce842e5-ced7-45bd-8322-00f9c8418aa4-catalog-content\") pod \"redhat-operators-mww8m\" (UID: \"1ce842e5-ced7-45bd-8322-00f9c8418aa4\") " pod="openshift-marketplace/redhat-operators-mww8m" Nov 29 07:38:43 crc kubenswrapper[4731]: I1129 07:38:43.788769 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-t7x2f\" (UniqueName: \"kubernetes.io/projected/1ce842e5-ced7-45bd-8322-00f9c8418aa4-kube-api-access-t7x2f\") pod \"redhat-operators-mww8m\" (UID: \"1ce842e5-ced7-45bd-8322-00f9c8418aa4\") " pod="openshift-marketplace/redhat-operators-mww8m" Nov 29 07:38:44 crc kubenswrapper[4731]: I1129 07:38:44.055892 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mww8m" Nov 29 07:38:44 crc kubenswrapper[4731]: I1129 07:38:44.597362 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mww8m"] Nov 29 07:38:45 crc kubenswrapper[4731]: I1129 07:38:45.329944 4731 generic.go:334] "Generic (PLEG): container finished" podID="1ce842e5-ced7-45bd-8322-00f9c8418aa4" containerID="ba10060aaa364c5295941d249319042cd3e42f5d40533f94501b60fa89875e28" exitCode=0 Nov 29 07:38:45 crc kubenswrapper[4731]: I1129 07:38:45.330076 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mww8m" event={"ID":"1ce842e5-ced7-45bd-8322-00f9c8418aa4","Type":"ContainerDied","Data":"ba10060aaa364c5295941d249319042cd3e42f5d40533f94501b60fa89875e28"} Nov 29 07:38:45 crc kubenswrapper[4731]: I1129 07:38:45.330234 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mww8m" event={"ID":"1ce842e5-ced7-45bd-8322-00f9c8418aa4","Type":"ContainerStarted","Data":"d3c2569b7f66cc8cf125f7646819fe02e1dcb6afa191453cc41ffc0063866de9"} Nov 29 07:38:46 crc kubenswrapper[4731]: I1129 07:38:46.345344 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mww8m" event={"ID":"1ce842e5-ced7-45bd-8322-00f9c8418aa4","Type":"ContainerStarted","Data":"5c1d833aa9b01232a4aab416f57840bd8467fb98e6ca9ae0eb078bd8389c58e6"} Nov 29 07:38:48 crc kubenswrapper[4731]: I1129 07:38:48.058111 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-v6lwg"] 
Nov 29 07:38:48 crc kubenswrapper[4731]: I1129 07:38:48.066839 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-v6lwg"]
Nov 29 07:38:48 crc kubenswrapper[4731]: I1129 07:38:48.370767 4731 generic.go:334] "Generic (PLEG): container finished" podID="1ce842e5-ced7-45bd-8322-00f9c8418aa4" containerID="5c1d833aa9b01232a4aab416f57840bd8467fb98e6ca9ae0eb078bd8389c58e6" exitCode=0
Nov 29 07:38:48 crc kubenswrapper[4731]: I1129 07:38:48.370848 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mww8m" event={"ID":"1ce842e5-ced7-45bd-8322-00f9c8418aa4","Type":"ContainerDied","Data":"5c1d833aa9b01232a4aab416f57840bd8467fb98e6ca9ae0eb078bd8389c58e6"}
Nov 29 07:38:49 crc kubenswrapper[4731]: I1129 07:38:49.387110 4731 generic.go:334] "Generic (PLEG): container finished" podID="2ff0b9fa-bd65-4c7a-af24-2e4bd4ce5045" containerID="4b82b6b4d38a913071098ad705f4a1f83def2326d3b5a65ab234e7a13f9e5d81" exitCode=0
Nov 29 07:38:49 crc kubenswrapper[4731]: I1129 07:38:49.387193 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9n48n" event={"ID":"2ff0b9fa-bd65-4c7a-af24-2e4bd4ce5045","Type":"ContainerDied","Data":"4b82b6b4d38a913071098ad705f4a1f83def2326d3b5a65ab234e7a13f9e5d81"}
Nov 29 07:38:49 crc kubenswrapper[4731]: I1129 07:38:49.822687 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abd5f3ab-575e-44b6-aa39-c3b5c44d85b8" path="/var/lib/kubelet/pods/abd5f3ab-575e-44b6-aa39-c3b5c44d85b8/volumes"
Nov 29 07:38:50 crc kubenswrapper[4731]: I1129 07:38:50.834120 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9n48n"
Nov 29 07:38:50 crc kubenswrapper[4731]: I1129 07:38:50.911085 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2ff0b9fa-bd65-4c7a-af24-2e4bd4ce5045-ssh-key\") pod \"2ff0b9fa-bd65-4c7a-af24-2e4bd4ce5045\" (UID: \"2ff0b9fa-bd65-4c7a-af24-2e4bd4ce5045\") "
Nov 29 07:38:50 crc kubenswrapper[4731]: I1129 07:38:50.911305 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-27lbf\" (UniqueName: \"kubernetes.io/projected/2ff0b9fa-bd65-4c7a-af24-2e4bd4ce5045-kube-api-access-27lbf\") pod \"2ff0b9fa-bd65-4c7a-af24-2e4bd4ce5045\" (UID: \"2ff0b9fa-bd65-4c7a-af24-2e4bd4ce5045\") "
Nov 29 07:38:50 crc kubenswrapper[4731]: I1129 07:38:50.911381 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2ff0b9fa-bd65-4c7a-af24-2e4bd4ce5045-inventory\") pod \"2ff0b9fa-bd65-4c7a-af24-2e4bd4ce5045\" (UID: \"2ff0b9fa-bd65-4c7a-af24-2e4bd4ce5045\") "
Nov 29 07:38:50 crc kubenswrapper[4731]: I1129 07:38:50.927905 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ff0b9fa-bd65-4c7a-af24-2e4bd4ce5045-kube-api-access-27lbf" (OuterVolumeSpecName: "kube-api-access-27lbf") pod "2ff0b9fa-bd65-4c7a-af24-2e4bd4ce5045" (UID: "2ff0b9fa-bd65-4c7a-af24-2e4bd4ce5045"). InnerVolumeSpecName "kube-api-access-27lbf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 29 07:38:50 crc kubenswrapper[4731]: I1129 07:38:50.945940 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ff0b9fa-bd65-4c7a-af24-2e4bd4ce5045-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "2ff0b9fa-bd65-4c7a-af24-2e4bd4ce5045" (UID: "2ff0b9fa-bd65-4c7a-af24-2e4bd4ce5045"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:38:50 crc kubenswrapper[4731]: I1129 07:38:50.956012 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ff0b9fa-bd65-4c7a-af24-2e4bd4ce5045-inventory" (OuterVolumeSpecName: "inventory") pod "2ff0b9fa-bd65-4c7a-af24-2e4bd4ce5045" (UID: "2ff0b9fa-bd65-4c7a-af24-2e4bd4ce5045"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:38:51 crc kubenswrapper[4731]: I1129 07:38:51.015867 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-27lbf\" (UniqueName: \"kubernetes.io/projected/2ff0b9fa-bd65-4c7a-af24-2e4bd4ce5045-kube-api-access-27lbf\") on node \"crc\" DevicePath \"\""
Nov 29 07:38:51 crc kubenswrapper[4731]: I1129 07:38:51.015955 4731 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2ff0b9fa-bd65-4c7a-af24-2e4bd4ce5045-inventory\") on node \"crc\" DevicePath \"\""
Nov 29 07:38:51 crc kubenswrapper[4731]: I1129 07:38:51.015970 4731 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2ff0b9fa-bd65-4c7a-af24-2e4bd4ce5045-ssh-key\") on node \"crc\" DevicePath \"\""
Nov 29 07:38:51 crc kubenswrapper[4731]: I1129 07:38:51.412378 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mww8m" event={"ID":"1ce842e5-ced7-45bd-8322-00f9c8418aa4","Type":"ContainerStarted","Data":"65c16b7e25a438e7104d32bdecd715b6c1a6c6ce8ef639c4cc97e9dea54cc61e"}
Nov 29 07:38:51 crc kubenswrapper[4731]: I1129 07:38:51.416482 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9n48n" event={"ID":"2ff0b9fa-bd65-4c7a-af24-2e4bd4ce5045","Type":"ContainerDied","Data":"958923d7166e0c08fb49ed20c7aa6784bf22c33ae7bf0ec65070e8d5f8a0bf86"}
Nov 29 07:38:51 crc kubenswrapper[4731]: I1129 07:38:51.416523 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="958923d7166e0c08fb49ed20c7aa6784bf22c33ae7bf0ec65070e8d5f8a0bf86"
Nov 29 07:38:51 crc kubenswrapper[4731]: I1129 07:38:51.416765 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9n48n"
Nov 29 07:38:51 crc kubenswrapper[4731]: I1129 07:38:51.437557 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mww8m" podStartSLOduration=2.831498978 podStartE2EDuration="8.437532926s" podCreationTimestamp="2025-11-29 07:38:43 +0000 UTC" firstStartedPulling="2025-11-29 07:38:45.3318871 +0000 UTC m=+1964.222248203" lastFinishedPulling="2025-11-29 07:38:50.937921048 +0000 UTC m=+1969.828282151" observedRunningTime="2025-11-29 07:38:51.434267732 +0000 UTC m=+1970.324628835" watchObservedRunningTime="2025-11-29 07:38:51.437532926 +0000 UTC m=+1970.327894029"
Nov 29 07:38:51 crc kubenswrapper[4731]: I1129 07:38:51.517195 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mztj7"]
Nov 29 07:38:51 crc kubenswrapper[4731]: E1129 07:38:51.517973 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ff0b9fa-bd65-4c7a-af24-2e4bd4ce5045" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Nov 29 07:38:51 crc kubenswrapper[4731]: I1129 07:38:51.518003 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ff0b9fa-bd65-4c7a-af24-2e4bd4ce5045" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Nov 29 07:38:51 crc kubenswrapper[4731]: I1129 07:38:51.518295 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ff0b9fa-bd65-4c7a-af24-2e4bd4ce5045" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Nov 29 07:38:51 crc kubenswrapper[4731]: I1129 07:38:51.520173 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mztj7"
Nov 29 07:38:51 crc kubenswrapper[4731]: I1129 07:38:51.523103 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Nov 29 07:38:51 crc kubenswrapper[4731]: I1129 07:38:51.523460 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Nov 29 07:38:51 crc kubenswrapper[4731]: I1129 07:38:51.531226 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 29 07:38:51 crc kubenswrapper[4731]: I1129 07:38:51.531726 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nvl6q"
Nov 29 07:38:51 crc kubenswrapper[4731]: I1129 07:38:51.536707 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mztj7"]
Nov 29 07:38:51 crc kubenswrapper[4731]: I1129 07:38:51.633646 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/75231e03-f059-43f8-8533-94035f23806f-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-mztj7\" (UID: \"75231e03-f059-43f8-8533-94035f23806f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mztj7"
Nov 29 07:38:51 crc kubenswrapper[4731]: I1129 07:38:51.634113 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/75231e03-f059-43f8-8533-94035f23806f-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-mztj7\" (UID: \"75231e03-f059-43f8-8533-94035f23806f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mztj7"
Nov 29 07:38:51 crc kubenswrapper[4731]: I1129 07:38:51.634177 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mn462\" (UniqueName: \"kubernetes.io/projected/75231e03-f059-43f8-8533-94035f23806f-kube-api-access-mn462\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-mztj7\" (UID: \"75231e03-f059-43f8-8533-94035f23806f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mztj7"
Nov 29 07:38:51 crc kubenswrapper[4731]: I1129 07:38:51.735998 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/75231e03-f059-43f8-8533-94035f23806f-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-mztj7\" (UID: \"75231e03-f059-43f8-8533-94035f23806f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mztj7"
Nov 29 07:38:51 crc kubenswrapper[4731]: I1129 07:38:51.736062 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/75231e03-f059-43f8-8533-94035f23806f-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-mztj7\" (UID: \"75231e03-f059-43f8-8533-94035f23806f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mztj7"
Nov 29 07:38:51 crc kubenswrapper[4731]: I1129 07:38:51.736108 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mn462\" (UniqueName: \"kubernetes.io/projected/75231e03-f059-43f8-8533-94035f23806f-kube-api-access-mn462\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-mztj7\" (UID: \"75231e03-f059-43f8-8533-94035f23806f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mztj7"
Nov 29 07:38:51 crc kubenswrapper[4731]: I1129 07:38:51.741143 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/75231e03-f059-43f8-8533-94035f23806f-ssh-key\")
pod \"validate-network-edpm-deployment-openstack-edpm-ipam-mztj7\" (UID: \"75231e03-f059-43f8-8533-94035f23806f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mztj7" Nov 29 07:38:51 crc kubenswrapper[4731]: I1129 07:38:51.741752 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/75231e03-f059-43f8-8533-94035f23806f-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-mztj7\" (UID: \"75231e03-f059-43f8-8533-94035f23806f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mztj7" Nov 29 07:38:51 crc kubenswrapper[4731]: I1129 07:38:51.756398 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mn462\" (UniqueName: \"kubernetes.io/projected/75231e03-f059-43f8-8533-94035f23806f-kube-api-access-mn462\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-mztj7\" (UID: \"75231e03-f059-43f8-8533-94035f23806f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mztj7" Nov 29 07:38:51 crc kubenswrapper[4731]: I1129 07:38:51.844958 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mztj7" Nov 29 07:38:52 crc kubenswrapper[4731]: I1129 07:38:52.514916 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mztj7"] Nov 29 07:38:52 crc kubenswrapper[4731]: W1129 07:38:52.518869 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75231e03_f059_43f8_8533_94035f23806f.slice/crio-716a737659e508b6093e36d5da7b35571dc685d8294d7a14e1e73aebdf23be6a WatchSource:0}: Error finding container 716a737659e508b6093e36d5da7b35571dc685d8294d7a14e1e73aebdf23be6a: Status 404 returned error can't find the container with id 716a737659e508b6093e36d5da7b35571dc685d8294d7a14e1e73aebdf23be6a Nov 29 07:38:53 crc kubenswrapper[4731]: I1129 07:38:53.438895 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mztj7" event={"ID":"75231e03-f059-43f8-8533-94035f23806f","Type":"ContainerStarted","Data":"76924f7ad8b7509129eac81654bb41bdcaae168d55ea5713a7a02e70be994f0b"} Nov 29 07:38:53 crc kubenswrapper[4731]: I1129 07:38:53.438974 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mztj7" event={"ID":"75231e03-f059-43f8-8533-94035f23806f","Type":"ContainerStarted","Data":"716a737659e508b6093e36d5da7b35571dc685d8294d7a14e1e73aebdf23be6a"} Nov 29 07:38:53 crc kubenswrapper[4731]: I1129 07:38:53.471105 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mztj7" podStartSLOduration=1.979113406 podStartE2EDuration="2.471078654s" podCreationTimestamp="2025-11-29 07:38:51 +0000 UTC" firstStartedPulling="2025-11-29 07:38:52.521636838 +0000 UTC m=+1971.411997941" lastFinishedPulling="2025-11-29 07:38:53.013602086 +0000 UTC 
m=+1971.903963189" observedRunningTime="2025-11-29 07:38:53.460030287 +0000 UTC m=+1972.350391390" watchObservedRunningTime="2025-11-29 07:38:53.471078654 +0000 UTC m=+1972.361439747" Nov 29 07:38:54 crc kubenswrapper[4731]: I1129 07:38:54.056492 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mww8m" Nov 29 07:38:54 crc kubenswrapper[4731]: I1129 07:38:54.057064 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mww8m" Nov 29 07:38:55 crc kubenswrapper[4731]: I1129 07:38:55.117055 4731 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mww8m" podUID="1ce842e5-ced7-45bd-8322-00f9c8418aa4" containerName="registry-server" probeResult="failure" output=< Nov 29 07:38:55 crc kubenswrapper[4731]: timeout: failed to connect service ":50051" within 1s Nov 29 07:38:55 crc kubenswrapper[4731]: > Nov 29 07:38:57 crc kubenswrapper[4731]: I1129 07:38:57.047341 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-f6jkj"] Nov 29 07:38:57 crc kubenswrapper[4731]: I1129 07:38:57.060297 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-f6jkj"] Nov 29 07:38:57 crc kubenswrapper[4731]: I1129 07:38:57.818915 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="737e38bd-78bb-41ef-acce-f65a427d5bd3" path="/var/lib/kubelet/pods/737e38bd-78bb-41ef-acce-f65a427d5bd3/volumes" Nov 29 07:38:59 crc kubenswrapper[4731]: I1129 07:38:59.500886 4731 generic.go:334] "Generic (PLEG): container finished" podID="75231e03-f059-43f8-8533-94035f23806f" containerID="76924f7ad8b7509129eac81654bb41bdcaae168d55ea5713a7a02e70be994f0b" exitCode=0 Nov 29 07:38:59 crc kubenswrapper[4731]: I1129 07:38:59.500987 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mztj7" event={"ID":"75231e03-f059-43f8-8533-94035f23806f","Type":"ContainerDied","Data":"76924f7ad8b7509129eac81654bb41bdcaae168d55ea5713a7a02e70be994f0b"} Nov 29 07:39:00 crc kubenswrapper[4731]: I1129 07:39:00.980140 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mztj7" Nov 29 07:39:01 crc kubenswrapper[4731]: I1129 07:39:01.064408 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/75231e03-f059-43f8-8533-94035f23806f-inventory\") pod \"75231e03-f059-43f8-8533-94035f23806f\" (UID: \"75231e03-f059-43f8-8533-94035f23806f\") " Nov 29 07:39:01 crc kubenswrapper[4731]: I1129 07:39:01.064742 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/75231e03-f059-43f8-8533-94035f23806f-ssh-key\") pod \"75231e03-f059-43f8-8533-94035f23806f\" (UID: \"75231e03-f059-43f8-8533-94035f23806f\") " Nov 29 07:39:01 crc kubenswrapper[4731]: I1129 07:39:01.064886 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mn462\" (UniqueName: \"kubernetes.io/projected/75231e03-f059-43f8-8533-94035f23806f-kube-api-access-mn462\") pod \"75231e03-f059-43f8-8533-94035f23806f\" (UID: \"75231e03-f059-43f8-8533-94035f23806f\") " Nov 29 07:39:01 crc kubenswrapper[4731]: I1129 07:39:01.078232 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75231e03-f059-43f8-8533-94035f23806f-kube-api-access-mn462" (OuterVolumeSpecName: "kube-api-access-mn462") pod "75231e03-f059-43f8-8533-94035f23806f" (UID: "75231e03-f059-43f8-8533-94035f23806f"). InnerVolumeSpecName "kube-api-access-mn462". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:39:01 crc kubenswrapper[4731]: I1129 07:39:01.107362 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75231e03-f059-43f8-8533-94035f23806f-inventory" (OuterVolumeSpecName: "inventory") pod "75231e03-f059-43f8-8533-94035f23806f" (UID: "75231e03-f059-43f8-8533-94035f23806f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:39:01 crc kubenswrapper[4731]: I1129 07:39:01.122110 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75231e03-f059-43f8-8533-94035f23806f-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "75231e03-f059-43f8-8533-94035f23806f" (UID: "75231e03-f059-43f8-8533-94035f23806f"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:39:01 crc kubenswrapper[4731]: I1129 07:39:01.168953 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mn462\" (UniqueName: \"kubernetes.io/projected/75231e03-f059-43f8-8533-94035f23806f-kube-api-access-mn462\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:01 crc kubenswrapper[4731]: I1129 07:39:01.169111 4731 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/75231e03-f059-43f8-8533-94035f23806f-inventory\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:01 crc kubenswrapper[4731]: I1129 07:39:01.169125 4731 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/75231e03-f059-43f8-8533-94035f23806f-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:01 crc kubenswrapper[4731]: I1129 07:39:01.523377 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mztj7" 
event={"ID":"75231e03-f059-43f8-8533-94035f23806f","Type":"ContainerDied","Data":"716a737659e508b6093e36d5da7b35571dc685d8294d7a14e1e73aebdf23be6a"} Nov 29 07:39:01 crc kubenswrapper[4731]: I1129 07:39:01.523757 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="716a737659e508b6093e36d5da7b35571dc685d8294d7a14e1e73aebdf23be6a" Nov 29 07:39:01 crc kubenswrapper[4731]: I1129 07:39:01.523473 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mztj7" Nov 29 07:39:01 crc kubenswrapper[4731]: I1129 07:39:01.622609 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-6lr56"] Nov 29 07:39:01 crc kubenswrapper[4731]: E1129 07:39:01.623104 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75231e03-f059-43f8-8533-94035f23806f" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 29 07:39:01 crc kubenswrapper[4731]: I1129 07:39:01.623132 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="75231e03-f059-43f8-8533-94035f23806f" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 29 07:39:01 crc kubenswrapper[4731]: I1129 07:39:01.623315 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="75231e03-f059-43f8-8533-94035f23806f" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 29 07:39:01 crc kubenswrapper[4731]: I1129 07:39:01.624169 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6lr56" Nov 29 07:39:01 crc kubenswrapper[4731]: I1129 07:39:01.630079 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 29 07:39:01 crc kubenswrapper[4731]: I1129 07:39:01.630185 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 29 07:39:01 crc kubenswrapper[4731]: I1129 07:39:01.630530 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nvl6q" Nov 29 07:39:01 crc kubenswrapper[4731]: I1129 07:39:01.630640 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 29 07:39:01 crc kubenswrapper[4731]: I1129 07:39:01.639304 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-6lr56"] Nov 29 07:39:01 crc kubenswrapper[4731]: I1129 07:39:01.679262 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54vdz\" (UniqueName: \"kubernetes.io/projected/3cf8ce99-01a6-4737-8f9f-c0cd0c47a8ae-kube-api-access-54vdz\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-6lr56\" (UID: \"3cf8ce99-01a6-4737-8f9f-c0cd0c47a8ae\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6lr56" Nov 29 07:39:01 crc kubenswrapper[4731]: I1129 07:39:01.679622 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3cf8ce99-01a6-4737-8f9f-c0cd0c47a8ae-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-6lr56\" (UID: \"3cf8ce99-01a6-4737-8f9f-c0cd0c47a8ae\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6lr56" Nov 29 07:39:01 crc kubenswrapper[4731]: I1129 07:39:01.679697 4731 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3cf8ce99-01a6-4737-8f9f-c0cd0c47a8ae-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-6lr56\" (UID: \"3cf8ce99-01a6-4737-8f9f-c0cd0c47a8ae\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6lr56" Nov 29 07:39:01 crc kubenswrapper[4731]: I1129 07:39:01.782951 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54vdz\" (UniqueName: \"kubernetes.io/projected/3cf8ce99-01a6-4737-8f9f-c0cd0c47a8ae-kube-api-access-54vdz\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-6lr56\" (UID: \"3cf8ce99-01a6-4737-8f9f-c0cd0c47a8ae\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6lr56" Nov 29 07:39:01 crc kubenswrapper[4731]: I1129 07:39:01.783128 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3cf8ce99-01a6-4737-8f9f-c0cd0c47a8ae-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-6lr56\" (UID: \"3cf8ce99-01a6-4737-8f9f-c0cd0c47a8ae\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6lr56" Nov 29 07:39:01 crc kubenswrapper[4731]: I1129 07:39:01.783159 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3cf8ce99-01a6-4737-8f9f-c0cd0c47a8ae-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-6lr56\" (UID: \"3cf8ce99-01a6-4737-8f9f-c0cd0c47a8ae\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6lr56" Nov 29 07:39:01 crc kubenswrapper[4731]: I1129 07:39:01.785637 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 29 07:39:01 crc kubenswrapper[4731]: I1129 07:39:01.785944 4731 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 29 07:39:01 crc kubenswrapper[4731]: I1129 07:39:01.800126 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3cf8ce99-01a6-4737-8f9f-c0cd0c47a8ae-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-6lr56\" (UID: \"3cf8ce99-01a6-4737-8f9f-c0cd0c47a8ae\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6lr56" Nov 29 07:39:01 crc kubenswrapper[4731]: I1129 07:39:01.805165 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3cf8ce99-01a6-4737-8f9f-c0cd0c47a8ae-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-6lr56\" (UID: \"3cf8ce99-01a6-4737-8f9f-c0cd0c47a8ae\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6lr56" Nov 29 07:39:01 crc kubenswrapper[4731]: I1129 07:39:01.806015 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54vdz\" (UniqueName: \"kubernetes.io/projected/3cf8ce99-01a6-4737-8f9f-c0cd0c47a8ae-kube-api-access-54vdz\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-6lr56\" (UID: \"3cf8ce99-01a6-4737-8f9f-c0cd0c47a8ae\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6lr56" Nov 29 07:39:01 crc kubenswrapper[4731]: I1129 07:39:01.986770 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nvl6q" Nov 29 07:39:01 crc kubenswrapper[4731]: I1129 07:39:01.994447 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6lr56" Nov 29 07:39:02 crc kubenswrapper[4731]: I1129 07:39:02.537986 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-6lr56"] Nov 29 07:39:02 crc kubenswrapper[4731]: W1129 07:39:02.548035 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3cf8ce99_01a6_4737_8f9f_c0cd0c47a8ae.slice/crio-b21d66ee8386aa88f1c67b2a0b3ccd8ce2e13c4ecc4570d94e5933a1bb17241c WatchSource:0}: Error finding container b21d66ee8386aa88f1c67b2a0b3ccd8ce2e13c4ecc4570d94e5933a1bb17241c: Status 404 returned error can't find the container with id b21d66ee8386aa88f1c67b2a0b3ccd8ce2e13c4ecc4570d94e5933a1bb17241c Nov 29 07:39:03 crc kubenswrapper[4731]: I1129 07:39:03.002770 4731 patch_prober.go:28] interesting pod/machine-config-daemon-rscr8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:39:03 crc kubenswrapper[4731]: I1129 07:39:03.003120 4731 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:39:03 crc kubenswrapper[4731]: I1129 07:39:03.042451 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 29 07:39:03 crc kubenswrapper[4731]: I1129 07:39:03.549664 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6lr56" 
event={"ID":"3cf8ce99-01a6-4737-8f9f-c0cd0c47a8ae","Type":"ContainerStarted","Data":"dd18b1ca3f5e7d7811be42b9e2f2c031c5a490a794e10155ac81df1e7ba0ca66"} Nov 29 07:39:03 crc kubenswrapper[4731]: I1129 07:39:03.550097 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6lr56" event={"ID":"3cf8ce99-01a6-4737-8f9f-c0cd0c47a8ae","Type":"ContainerStarted","Data":"b21d66ee8386aa88f1c67b2a0b3ccd8ce2e13c4ecc4570d94e5933a1bb17241c"} Nov 29 07:39:03 crc kubenswrapper[4731]: I1129 07:39:03.567991 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6lr56" podStartSLOduration=2.084167496 podStartE2EDuration="2.56797271s" podCreationTimestamp="2025-11-29 07:39:01 +0000 UTC" firstStartedPulling="2025-11-29 07:39:02.553221789 +0000 UTC m=+1981.443582892" lastFinishedPulling="2025-11-29 07:39:03.037027003 +0000 UTC m=+1981.927388106" observedRunningTime="2025-11-29 07:39:03.56587261 +0000 UTC m=+1982.456233713" watchObservedRunningTime="2025-11-29 07:39:03.56797271 +0000 UTC m=+1982.458333803" Nov 29 07:39:04 crc kubenswrapper[4731]: I1129 07:39:04.113347 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-mww8m" Nov 29 07:39:04 crc kubenswrapper[4731]: I1129 07:39:04.174847 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mww8m" Nov 29 07:39:04 crc kubenswrapper[4731]: I1129 07:39:04.355130 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mww8m"] Nov 29 07:39:05 crc kubenswrapper[4731]: I1129 07:39:05.566416 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-mww8m" podUID="1ce842e5-ced7-45bd-8322-00f9c8418aa4" containerName="registry-server" 
containerID="cri-o://65c16b7e25a438e7104d32bdecd715b6c1a6c6ce8ef639c4cc97e9dea54cc61e" gracePeriod=2 Nov 29 07:39:06 crc kubenswrapper[4731]: I1129 07:39:06.143254 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mww8m" Nov 29 07:39:06 crc kubenswrapper[4731]: I1129 07:39:06.187790 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ce842e5-ced7-45bd-8322-00f9c8418aa4-catalog-content\") pod \"1ce842e5-ced7-45bd-8322-00f9c8418aa4\" (UID: \"1ce842e5-ced7-45bd-8322-00f9c8418aa4\") " Nov 29 07:39:06 crc kubenswrapper[4731]: I1129 07:39:06.188141 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ce842e5-ced7-45bd-8322-00f9c8418aa4-utilities\") pod \"1ce842e5-ced7-45bd-8322-00f9c8418aa4\" (UID: \"1ce842e5-ced7-45bd-8322-00f9c8418aa4\") " Nov 29 07:39:06 crc kubenswrapper[4731]: I1129 07:39:06.188241 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t7x2f\" (UniqueName: \"kubernetes.io/projected/1ce842e5-ced7-45bd-8322-00f9c8418aa4-kube-api-access-t7x2f\") pod \"1ce842e5-ced7-45bd-8322-00f9c8418aa4\" (UID: \"1ce842e5-ced7-45bd-8322-00f9c8418aa4\") " Nov 29 07:39:06 crc kubenswrapper[4731]: I1129 07:39:06.189425 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ce842e5-ced7-45bd-8322-00f9c8418aa4-utilities" (OuterVolumeSpecName: "utilities") pod "1ce842e5-ced7-45bd-8322-00f9c8418aa4" (UID: "1ce842e5-ced7-45bd-8322-00f9c8418aa4"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:39:06 crc kubenswrapper[4731]: I1129 07:39:06.190516 4731 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ce842e5-ced7-45bd-8322-00f9c8418aa4-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:06 crc kubenswrapper[4731]: I1129 07:39:06.204480 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ce842e5-ced7-45bd-8322-00f9c8418aa4-kube-api-access-t7x2f" (OuterVolumeSpecName: "kube-api-access-t7x2f") pod "1ce842e5-ced7-45bd-8322-00f9c8418aa4" (UID: "1ce842e5-ced7-45bd-8322-00f9c8418aa4"). InnerVolumeSpecName "kube-api-access-t7x2f". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:39:06 crc kubenswrapper[4731]: I1129 07:39:06.291750 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t7x2f\" (UniqueName: \"kubernetes.io/projected/1ce842e5-ced7-45bd-8322-00f9c8418aa4-kube-api-access-t7x2f\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:06 crc kubenswrapper[4731]: I1129 07:39:06.427108 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ce842e5-ced7-45bd-8322-00f9c8418aa4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1ce842e5-ced7-45bd-8322-00f9c8418aa4" (UID: "1ce842e5-ced7-45bd-8322-00f9c8418aa4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:39:06 crc kubenswrapper[4731]: I1129 07:39:06.494829 4731 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ce842e5-ced7-45bd-8322-00f9c8418aa4-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:06 crc kubenswrapper[4731]: I1129 07:39:06.581610 4731 generic.go:334] "Generic (PLEG): container finished" podID="1ce842e5-ced7-45bd-8322-00f9c8418aa4" containerID="65c16b7e25a438e7104d32bdecd715b6c1a6c6ce8ef639c4cc97e9dea54cc61e" exitCode=0 Nov 29 07:39:06 crc kubenswrapper[4731]: I1129 07:39:06.581668 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mww8m" event={"ID":"1ce842e5-ced7-45bd-8322-00f9c8418aa4","Type":"ContainerDied","Data":"65c16b7e25a438e7104d32bdecd715b6c1a6c6ce8ef639c4cc97e9dea54cc61e"} Nov 29 07:39:06 crc kubenswrapper[4731]: I1129 07:39:06.581694 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mww8m" Nov 29 07:39:06 crc kubenswrapper[4731]: I1129 07:39:06.581724 4731 scope.go:117] "RemoveContainer" containerID="65c16b7e25a438e7104d32bdecd715b6c1a6c6ce8ef639c4cc97e9dea54cc61e" Nov 29 07:39:06 crc kubenswrapper[4731]: I1129 07:39:06.581706 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mww8m" event={"ID":"1ce842e5-ced7-45bd-8322-00f9c8418aa4","Type":"ContainerDied","Data":"d3c2569b7f66cc8cf125f7646819fe02e1dcb6afa191453cc41ffc0063866de9"} Nov 29 07:39:06 crc kubenswrapper[4731]: I1129 07:39:06.615237 4731 scope.go:117] "RemoveContainer" containerID="5c1d833aa9b01232a4aab416f57840bd8467fb98e6ca9ae0eb078bd8389c58e6" Nov 29 07:39:06 crc kubenswrapper[4731]: I1129 07:39:06.617831 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mww8m"] Nov 29 07:39:06 crc kubenswrapper[4731]: I1129 07:39:06.626365 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-mww8m"] Nov 29 07:39:06 crc kubenswrapper[4731]: I1129 07:39:06.640625 4731 scope.go:117] "RemoveContainer" containerID="ba10060aaa364c5295941d249319042cd3e42f5d40533f94501b60fa89875e28" Nov 29 07:39:06 crc kubenswrapper[4731]: I1129 07:39:06.682818 4731 scope.go:117] "RemoveContainer" containerID="65c16b7e25a438e7104d32bdecd715b6c1a6c6ce8ef639c4cc97e9dea54cc61e" Nov 29 07:39:06 crc kubenswrapper[4731]: E1129 07:39:06.683358 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65c16b7e25a438e7104d32bdecd715b6c1a6c6ce8ef639c4cc97e9dea54cc61e\": container with ID starting with 65c16b7e25a438e7104d32bdecd715b6c1a6c6ce8ef639c4cc97e9dea54cc61e not found: ID does not exist" containerID="65c16b7e25a438e7104d32bdecd715b6c1a6c6ce8ef639c4cc97e9dea54cc61e" Nov 29 07:39:06 crc kubenswrapper[4731]: I1129 07:39:06.683391 4731 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65c16b7e25a438e7104d32bdecd715b6c1a6c6ce8ef639c4cc97e9dea54cc61e"} err="failed to get container status \"65c16b7e25a438e7104d32bdecd715b6c1a6c6ce8ef639c4cc97e9dea54cc61e\": rpc error: code = NotFound desc = could not find container \"65c16b7e25a438e7104d32bdecd715b6c1a6c6ce8ef639c4cc97e9dea54cc61e\": container with ID starting with 65c16b7e25a438e7104d32bdecd715b6c1a6c6ce8ef639c4cc97e9dea54cc61e not found: ID does not exist" Nov 29 07:39:06 crc kubenswrapper[4731]: I1129 07:39:06.683416 4731 scope.go:117] "RemoveContainer" containerID="5c1d833aa9b01232a4aab416f57840bd8467fb98e6ca9ae0eb078bd8389c58e6" Nov 29 07:39:06 crc kubenswrapper[4731]: E1129 07:39:06.683857 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c1d833aa9b01232a4aab416f57840bd8467fb98e6ca9ae0eb078bd8389c58e6\": container with ID starting with 5c1d833aa9b01232a4aab416f57840bd8467fb98e6ca9ae0eb078bd8389c58e6 not found: ID does not exist" containerID="5c1d833aa9b01232a4aab416f57840bd8467fb98e6ca9ae0eb078bd8389c58e6" Nov 29 07:39:06 crc kubenswrapper[4731]: I1129 07:39:06.683877 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c1d833aa9b01232a4aab416f57840bd8467fb98e6ca9ae0eb078bd8389c58e6"} err="failed to get container status \"5c1d833aa9b01232a4aab416f57840bd8467fb98e6ca9ae0eb078bd8389c58e6\": rpc error: code = NotFound desc = could not find container \"5c1d833aa9b01232a4aab416f57840bd8467fb98e6ca9ae0eb078bd8389c58e6\": container with ID starting with 5c1d833aa9b01232a4aab416f57840bd8467fb98e6ca9ae0eb078bd8389c58e6 not found: ID does not exist" Nov 29 07:39:06 crc kubenswrapper[4731]: I1129 07:39:06.683889 4731 scope.go:117] "RemoveContainer" containerID="ba10060aaa364c5295941d249319042cd3e42f5d40533f94501b60fa89875e28" Nov 29 07:39:06 crc kubenswrapper[4731]: E1129 
07:39:06.684147 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba10060aaa364c5295941d249319042cd3e42f5d40533f94501b60fa89875e28\": container with ID starting with ba10060aaa364c5295941d249319042cd3e42f5d40533f94501b60fa89875e28 not found: ID does not exist" containerID="ba10060aaa364c5295941d249319042cd3e42f5d40533f94501b60fa89875e28" Nov 29 07:39:06 crc kubenswrapper[4731]: I1129 07:39:06.684178 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba10060aaa364c5295941d249319042cd3e42f5d40533f94501b60fa89875e28"} err="failed to get container status \"ba10060aaa364c5295941d249319042cd3e42f5d40533f94501b60fa89875e28\": rpc error: code = NotFound desc = could not find container \"ba10060aaa364c5295941d249319042cd3e42f5d40533f94501b60fa89875e28\": container with ID starting with ba10060aaa364c5295941d249319042cd3e42f5d40533f94501b60fa89875e28 not found: ID does not exist" Nov 29 07:39:07 crc kubenswrapper[4731]: I1129 07:39:07.818598 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ce842e5-ced7-45bd-8322-00f9c8418aa4" path="/var/lib/kubelet/pods/1ce842e5-ced7-45bd-8322-00f9c8418aa4/volumes" Nov 29 07:39:20 crc kubenswrapper[4731]: I1129 07:39:20.793288 4731 scope.go:117] "RemoveContainer" containerID="c20e25c36d08973cfe616db90263e03602a800b953e23d7d34ea60864b79220d" Nov 29 07:39:20 crc kubenswrapper[4731]: I1129 07:39:20.844431 4731 scope.go:117] "RemoveContainer" containerID="ffb1b05858464ce19de4cf58d7c628a91cf5c4e1cf012ad715006a4a03dd8fde" Nov 29 07:39:20 crc kubenswrapper[4731]: I1129 07:39:20.904069 4731 scope.go:117] "RemoveContainer" containerID="daca30395d264d7b56a34706f762993415c291552b24659c942c7c81696796b9" Nov 29 07:39:33 crc kubenswrapper[4731]: I1129 07:39:33.002932 4731 patch_prober.go:28] interesting pod/machine-config-daemon-rscr8 container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:39:33 crc kubenswrapper[4731]: I1129 07:39:33.003761 4731 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:39:33 crc kubenswrapper[4731]: I1129 07:39:33.003833 4731 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" Nov 29 07:39:33 crc kubenswrapper[4731]: I1129 07:39:33.005055 4731 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fea8fd5b340206f2b38d570102ab425e9491bb5208055282d97c11b2fcd67d4e"} pod="openshift-machine-config-operator/machine-config-daemon-rscr8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 29 07:39:33 crc kubenswrapper[4731]: I1129 07:39:33.005145 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" containerName="machine-config-daemon" containerID="cri-o://fea8fd5b340206f2b38d570102ab425e9491bb5208055282d97c11b2fcd67d4e" gracePeriod=600 Nov 29 07:39:34 crc kubenswrapper[4731]: I1129 07:39:34.048355 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-ngzgp"] Nov 29 07:39:34 crc kubenswrapper[4731]: I1129 07:39:34.060788 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-ngzgp"] Nov 29 07:39:34 crc kubenswrapper[4731]: I1129 
07:39:34.903386 4731 generic.go:334] "Generic (PLEG): container finished" podID="2302dbb7-38db-4752-a5d0-2d055da3aec3" containerID="fea8fd5b340206f2b38d570102ab425e9491bb5208055282d97c11b2fcd67d4e" exitCode=0 Nov 29 07:39:34 crc kubenswrapper[4731]: I1129 07:39:34.903464 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" event={"ID":"2302dbb7-38db-4752-a5d0-2d055da3aec3","Type":"ContainerDied","Data":"fea8fd5b340206f2b38d570102ab425e9491bb5208055282d97c11b2fcd67d4e"} Nov 29 07:39:34 crc kubenswrapper[4731]: I1129 07:39:34.904345 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" event={"ID":"2302dbb7-38db-4752-a5d0-2d055da3aec3","Type":"ContainerStarted","Data":"c07780461e13a10787c87bd71e6695815f5734928eb2b4fd862416cf85f74e8b"} Nov 29 07:39:34 crc kubenswrapper[4731]: I1129 07:39:34.904385 4731 scope.go:117] "RemoveContainer" containerID="d40688246a689bbb5c446994fa1770405fb9959f1244e12a906aea1c3d8f3b92" Nov 29 07:39:35 crc kubenswrapper[4731]: I1129 07:39:35.822288 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f635248-2bce-4e96-8d9f-3afd345c442b" path="/var/lib/kubelet/pods/3f635248-2bce-4e96-8d9f-3afd345c442b/volumes" Nov 29 07:39:45 crc kubenswrapper[4731]: I1129 07:39:45.013628 4731 generic.go:334] "Generic (PLEG): container finished" podID="3cf8ce99-01a6-4737-8f9f-c0cd0c47a8ae" containerID="dd18b1ca3f5e7d7811be42b9e2f2c031c5a490a794e10155ac81df1e7ba0ca66" exitCode=0 Nov 29 07:39:45 crc kubenswrapper[4731]: I1129 07:39:45.013715 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6lr56" event={"ID":"3cf8ce99-01a6-4737-8f9f-c0cd0c47a8ae","Type":"ContainerDied","Data":"dd18b1ca3f5e7d7811be42b9e2f2c031c5a490a794e10155ac81df1e7ba0ca66"} Nov 29 07:39:46 crc kubenswrapper[4731]: I1129 07:39:46.501805 4731 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6lr56" Nov 29 07:39:46 crc kubenswrapper[4731]: I1129 07:39:46.564130 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3cf8ce99-01a6-4737-8f9f-c0cd0c47a8ae-inventory\") pod \"3cf8ce99-01a6-4737-8f9f-c0cd0c47a8ae\" (UID: \"3cf8ce99-01a6-4737-8f9f-c0cd0c47a8ae\") " Nov 29 07:39:46 crc kubenswrapper[4731]: I1129 07:39:46.564452 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3cf8ce99-01a6-4737-8f9f-c0cd0c47a8ae-ssh-key\") pod \"3cf8ce99-01a6-4737-8f9f-c0cd0c47a8ae\" (UID: \"3cf8ce99-01a6-4737-8f9f-c0cd0c47a8ae\") " Nov 29 07:39:46 crc kubenswrapper[4731]: I1129 07:39:46.564600 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-54vdz\" (UniqueName: \"kubernetes.io/projected/3cf8ce99-01a6-4737-8f9f-c0cd0c47a8ae-kube-api-access-54vdz\") pod \"3cf8ce99-01a6-4737-8f9f-c0cd0c47a8ae\" (UID: \"3cf8ce99-01a6-4737-8f9f-c0cd0c47a8ae\") " Nov 29 07:39:46 crc kubenswrapper[4731]: I1129 07:39:46.575201 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cf8ce99-01a6-4737-8f9f-c0cd0c47a8ae-kube-api-access-54vdz" (OuterVolumeSpecName: "kube-api-access-54vdz") pod "3cf8ce99-01a6-4737-8f9f-c0cd0c47a8ae" (UID: "3cf8ce99-01a6-4737-8f9f-c0cd0c47a8ae"). InnerVolumeSpecName "kube-api-access-54vdz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:39:46 crc kubenswrapper[4731]: I1129 07:39:46.594874 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cf8ce99-01a6-4737-8f9f-c0cd0c47a8ae-inventory" (OuterVolumeSpecName: "inventory") pod "3cf8ce99-01a6-4737-8f9f-c0cd0c47a8ae" (UID: "3cf8ce99-01a6-4737-8f9f-c0cd0c47a8ae"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:39:46 crc kubenswrapper[4731]: I1129 07:39:46.602780 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cf8ce99-01a6-4737-8f9f-c0cd0c47a8ae-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "3cf8ce99-01a6-4737-8f9f-c0cd0c47a8ae" (UID: "3cf8ce99-01a6-4737-8f9f-c0cd0c47a8ae"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:39:46 crc kubenswrapper[4731]: I1129 07:39:46.667028 4731 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3cf8ce99-01a6-4737-8f9f-c0cd0c47a8ae-inventory\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:46 crc kubenswrapper[4731]: I1129 07:39:46.667103 4731 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3cf8ce99-01a6-4737-8f9f-c0cd0c47a8ae-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:46 crc kubenswrapper[4731]: I1129 07:39:46.667116 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-54vdz\" (UniqueName: \"kubernetes.io/projected/3cf8ce99-01a6-4737-8f9f-c0cd0c47a8ae-kube-api-access-54vdz\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:47 crc kubenswrapper[4731]: I1129 07:39:47.040150 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6lr56" event={"ID":"3cf8ce99-01a6-4737-8f9f-c0cd0c47a8ae","Type":"ContainerDied","Data":"b21d66ee8386aa88f1c67b2a0b3ccd8ce2e13c4ecc4570d94e5933a1bb17241c"} Nov 29 07:39:47 crc kubenswrapper[4731]: I1129 07:39:47.040463 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b21d66ee8386aa88f1c67b2a0b3ccd8ce2e13c4ecc4570d94e5933a1bb17241c" Nov 29 07:39:47 crc kubenswrapper[4731]: I1129 07:39:47.040263 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6lr56" Nov 29 07:39:47 crc kubenswrapper[4731]: I1129 07:39:47.231395 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-q9r4z"] Nov 29 07:39:47 crc kubenswrapper[4731]: E1129 07:39:47.232260 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ce842e5-ced7-45bd-8322-00f9c8418aa4" containerName="extract-utilities" Nov 29 07:39:47 crc kubenswrapper[4731]: I1129 07:39:47.232286 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ce842e5-ced7-45bd-8322-00f9c8418aa4" containerName="extract-utilities" Nov 29 07:39:47 crc kubenswrapper[4731]: E1129 07:39:47.232299 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ce842e5-ced7-45bd-8322-00f9c8418aa4" containerName="registry-server" Nov 29 07:39:47 crc kubenswrapper[4731]: I1129 07:39:47.232307 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ce842e5-ced7-45bd-8322-00f9c8418aa4" containerName="registry-server" Nov 29 07:39:47 crc kubenswrapper[4731]: E1129 07:39:47.232325 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ce842e5-ced7-45bd-8322-00f9c8418aa4" containerName="extract-content" Nov 29 07:39:47 crc kubenswrapper[4731]: I1129 07:39:47.232332 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ce842e5-ced7-45bd-8322-00f9c8418aa4" containerName="extract-content" Nov 29 07:39:47 crc kubenswrapper[4731]: E1129 07:39:47.232369 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cf8ce99-01a6-4737-8f9f-c0cd0c47a8ae" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 29 07:39:47 crc kubenswrapper[4731]: I1129 07:39:47.232378 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cf8ce99-01a6-4737-8f9f-c0cd0c47a8ae" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 29 07:39:47 crc kubenswrapper[4731]: I1129 07:39:47.232619 
4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cf8ce99-01a6-4737-8f9f-c0cd0c47a8ae" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 29 07:39:47 crc kubenswrapper[4731]: I1129 07:39:47.232659 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ce842e5-ced7-45bd-8322-00f9c8418aa4" containerName="registry-server" Nov 29 07:39:47 crc kubenswrapper[4731]: I1129 07:39:47.233648 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-q9r4z" Nov 29 07:39:47 crc kubenswrapper[4731]: I1129 07:39:47.239770 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 29 07:39:47 crc kubenswrapper[4731]: I1129 07:39:47.240101 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 29 07:39:47 crc kubenswrapper[4731]: I1129 07:39:47.240964 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 29 07:39:47 crc kubenswrapper[4731]: I1129 07:39:47.272161 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-q9r4z"] Nov 29 07:39:47 crc kubenswrapper[4731]: I1129 07:39:47.295842 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nvl6q" Nov 29 07:39:47 crc kubenswrapper[4731]: I1129 07:39:47.401217 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/25edb0d1-a8a5-4577-9d0e-fb10ffc4bda5-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-q9r4z\" (UID: \"25edb0d1-a8a5-4577-9d0e-fb10ffc4bda5\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-q9r4z" Nov 29 07:39:47 crc kubenswrapper[4731]: I1129 07:39:47.401296 
4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/25edb0d1-a8a5-4577-9d0e-fb10ffc4bda5-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-q9r4z\" (UID: \"25edb0d1-a8a5-4577-9d0e-fb10ffc4bda5\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-q9r4z" Nov 29 07:39:47 crc kubenswrapper[4731]: I1129 07:39:47.401419 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfj5w\" (UniqueName: \"kubernetes.io/projected/25edb0d1-a8a5-4577-9d0e-fb10ffc4bda5-kube-api-access-sfj5w\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-q9r4z\" (UID: \"25edb0d1-a8a5-4577-9d0e-fb10ffc4bda5\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-q9r4z" Nov 29 07:39:47 crc kubenswrapper[4731]: I1129 07:39:47.503307 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/25edb0d1-a8a5-4577-9d0e-fb10ffc4bda5-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-q9r4z\" (UID: \"25edb0d1-a8a5-4577-9d0e-fb10ffc4bda5\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-q9r4z" Nov 29 07:39:47 crc kubenswrapper[4731]: I1129 07:39:47.503375 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/25edb0d1-a8a5-4577-9d0e-fb10ffc4bda5-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-q9r4z\" (UID: \"25edb0d1-a8a5-4577-9d0e-fb10ffc4bda5\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-q9r4z" Nov 29 07:39:47 crc kubenswrapper[4731]: I1129 07:39:47.504546 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfj5w\" (UniqueName: \"kubernetes.io/projected/25edb0d1-a8a5-4577-9d0e-fb10ffc4bda5-kube-api-access-sfj5w\") pod 
\"configure-os-edpm-deployment-openstack-edpm-ipam-q9r4z\" (UID: \"25edb0d1-a8a5-4577-9d0e-fb10ffc4bda5\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-q9r4z" Nov 29 07:39:47 crc kubenswrapper[4731]: I1129 07:39:47.509705 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/25edb0d1-a8a5-4577-9d0e-fb10ffc4bda5-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-q9r4z\" (UID: \"25edb0d1-a8a5-4577-9d0e-fb10ffc4bda5\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-q9r4z" Nov 29 07:39:47 crc kubenswrapper[4731]: I1129 07:39:47.512595 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/25edb0d1-a8a5-4577-9d0e-fb10ffc4bda5-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-q9r4z\" (UID: \"25edb0d1-a8a5-4577-9d0e-fb10ffc4bda5\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-q9r4z" Nov 29 07:39:47 crc kubenswrapper[4731]: I1129 07:39:47.525686 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfj5w\" (UniqueName: \"kubernetes.io/projected/25edb0d1-a8a5-4577-9d0e-fb10ffc4bda5-kube-api-access-sfj5w\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-q9r4z\" (UID: \"25edb0d1-a8a5-4577-9d0e-fb10ffc4bda5\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-q9r4z" Nov 29 07:39:47 crc kubenswrapper[4731]: I1129 07:39:47.625271 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-q9r4z" Nov 29 07:39:48 crc kubenswrapper[4731]: I1129 07:39:48.203449 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-q9r4z"] Nov 29 07:39:48 crc kubenswrapper[4731]: W1129 07:39:48.208972 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod25edb0d1_a8a5_4577_9d0e_fb10ffc4bda5.slice/crio-bfb8ad5378d0488e6f6c85349cc401b530e24cb582fce90f4a8d722f448bd8ed WatchSource:0}: Error finding container bfb8ad5378d0488e6f6c85349cc401b530e24cb582fce90f4a8d722f448bd8ed: Status 404 returned error can't find the container with id bfb8ad5378d0488e6f6c85349cc401b530e24cb582fce90f4a8d722f448bd8ed Nov 29 07:39:49 crc kubenswrapper[4731]: I1129 07:39:49.064287 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-q9r4z" event={"ID":"25edb0d1-a8a5-4577-9d0e-fb10ffc4bda5","Type":"ContainerStarted","Data":"9406419df1d51f130ac72a854ac37eca823aae9b3c1b6d8d2df27feed4280a94"} Nov 29 07:39:49 crc kubenswrapper[4731]: I1129 07:39:49.064750 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-q9r4z" event={"ID":"25edb0d1-a8a5-4577-9d0e-fb10ffc4bda5","Type":"ContainerStarted","Data":"bfb8ad5378d0488e6f6c85349cc401b530e24cb582fce90f4a8d722f448bd8ed"} Nov 29 07:40:21 crc kubenswrapper[4731]: I1129 07:40:21.078004 4731 scope.go:117] "RemoveContainer" containerID="d42cf0d674eec20dc17238aae814a510f2d61257f4d9a174010608163b951451" Nov 29 07:40:44 crc kubenswrapper[4731]: I1129 07:40:44.738979 4731 generic.go:334] "Generic (PLEG): container finished" podID="25edb0d1-a8a5-4577-9d0e-fb10ffc4bda5" containerID="9406419df1d51f130ac72a854ac37eca823aae9b3c1b6d8d2df27feed4280a94" exitCode=0 Nov 29 07:40:44 crc kubenswrapper[4731]: I1129 
07:40:44.739069 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-q9r4z" event={"ID":"25edb0d1-a8a5-4577-9d0e-fb10ffc4bda5","Type":"ContainerDied","Data":"9406419df1d51f130ac72a854ac37eca823aae9b3c1b6d8d2df27feed4280a94"} Nov 29 07:40:46 crc kubenswrapper[4731]: I1129 07:40:46.319742 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-q9r4z" Nov 29 07:40:46 crc kubenswrapper[4731]: I1129 07:40:46.491679 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/25edb0d1-a8a5-4577-9d0e-fb10ffc4bda5-ssh-key\") pod \"25edb0d1-a8a5-4577-9d0e-fb10ffc4bda5\" (UID: \"25edb0d1-a8a5-4577-9d0e-fb10ffc4bda5\") " Nov 29 07:40:46 crc kubenswrapper[4731]: I1129 07:40:46.491823 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/25edb0d1-a8a5-4577-9d0e-fb10ffc4bda5-inventory\") pod \"25edb0d1-a8a5-4577-9d0e-fb10ffc4bda5\" (UID: \"25edb0d1-a8a5-4577-9d0e-fb10ffc4bda5\") " Nov 29 07:40:46 crc kubenswrapper[4731]: I1129 07:40:46.492064 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sfj5w\" (UniqueName: \"kubernetes.io/projected/25edb0d1-a8a5-4577-9d0e-fb10ffc4bda5-kube-api-access-sfj5w\") pod \"25edb0d1-a8a5-4577-9d0e-fb10ffc4bda5\" (UID: \"25edb0d1-a8a5-4577-9d0e-fb10ffc4bda5\") " Nov 29 07:40:46 crc kubenswrapper[4731]: I1129 07:40:46.499748 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25edb0d1-a8a5-4577-9d0e-fb10ffc4bda5-kube-api-access-sfj5w" (OuterVolumeSpecName: "kube-api-access-sfj5w") pod "25edb0d1-a8a5-4577-9d0e-fb10ffc4bda5" (UID: "25edb0d1-a8a5-4577-9d0e-fb10ffc4bda5"). InnerVolumeSpecName "kube-api-access-sfj5w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:40:46 crc kubenswrapper[4731]: I1129 07:40:46.528252 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25edb0d1-a8a5-4577-9d0e-fb10ffc4bda5-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "25edb0d1-a8a5-4577-9d0e-fb10ffc4bda5" (UID: "25edb0d1-a8a5-4577-9d0e-fb10ffc4bda5"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:40:46 crc kubenswrapper[4731]: I1129 07:40:46.530612 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25edb0d1-a8a5-4577-9d0e-fb10ffc4bda5-inventory" (OuterVolumeSpecName: "inventory") pod "25edb0d1-a8a5-4577-9d0e-fb10ffc4bda5" (UID: "25edb0d1-a8a5-4577-9d0e-fb10ffc4bda5"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:40:46 crc kubenswrapper[4731]: I1129 07:40:46.594643 4731 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/25edb0d1-a8a5-4577-9d0e-fb10ffc4bda5-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:46 crc kubenswrapper[4731]: I1129 07:40:46.594687 4731 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/25edb0d1-a8a5-4577-9d0e-fb10ffc4bda5-inventory\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:46 crc kubenswrapper[4731]: I1129 07:40:46.594700 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sfj5w\" (UniqueName: \"kubernetes.io/projected/25edb0d1-a8a5-4577-9d0e-fb10ffc4bda5-kube-api-access-sfj5w\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:46 crc kubenswrapper[4731]: I1129 07:40:46.765918 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-q9r4z" 
event={"ID":"25edb0d1-a8a5-4577-9d0e-fb10ffc4bda5","Type":"ContainerDied","Data":"bfb8ad5378d0488e6f6c85349cc401b530e24cb582fce90f4a8d722f448bd8ed"} Nov 29 07:40:46 crc kubenswrapper[4731]: I1129 07:40:46.765970 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bfb8ad5378d0488e6f6c85349cc401b530e24cb582fce90f4a8d722f448bd8ed" Nov 29 07:40:46 crc kubenswrapper[4731]: I1129 07:40:46.766020 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-q9r4z" Nov 29 07:40:46 crc kubenswrapper[4731]: I1129 07:40:46.943204 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-lmbs9"] Nov 29 07:40:46 crc kubenswrapper[4731]: E1129 07:40:46.943969 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25edb0d1-a8a5-4577-9d0e-fb10ffc4bda5" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 29 07:40:46 crc kubenswrapper[4731]: I1129 07:40:46.943995 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="25edb0d1-a8a5-4577-9d0e-fb10ffc4bda5" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 29 07:40:46 crc kubenswrapper[4731]: I1129 07:40:46.944271 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="25edb0d1-a8a5-4577-9d0e-fb10ffc4bda5" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 29 07:40:46 crc kubenswrapper[4731]: I1129 07:40:46.945248 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-lmbs9" Nov 29 07:40:46 crc kubenswrapper[4731]: I1129 07:40:46.948486 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nvl6q" Nov 29 07:40:46 crc kubenswrapper[4731]: I1129 07:40:46.948739 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 29 07:40:46 crc kubenswrapper[4731]: I1129 07:40:46.948953 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 29 07:40:46 crc kubenswrapper[4731]: I1129 07:40:46.949175 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 29 07:40:46 crc kubenswrapper[4731]: I1129 07:40:46.954326 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-lmbs9"] Nov 29 07:40:47 crc kubenswrapper[4731]: I1129 07:40:47.117262 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/6ade5882-f94b-4588-887c-5510346c10cc-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-lmbs9\" (UID: \"6ade5882-f94b-4588-887c-5510346c10cc\") " pod="openstack/ssh-known-hosts-edpm-deployment-lmbs9" Nov 29 07:40:47 crc kubenswrapper[4731]: I1129 07:40:47.117958 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9spv\" (UniqueName: \"kubernetes.io/projected/6ade5882-f94b-4588-887c-5510346c10cc-kube-api-access-n9spv\") pod \"ssh-known-hosts-edpm-deployment-lmbs9\" (UID: \"6ade5882-f94b-4588-887c-5510346c10cc\") " pod="openstack/ssh-known-hosts-edpm-deployment-lmbs9" Nov 29 07:40:47 crc kubenswrapper[4731]: I1129 07:40:47.118247 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6ade5882-f94b-4588-887c-5510346c10cc-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-lmbs9\" (UID: \"6ade5882-f94b-4588-887c-5510346c10cc\") " pod="openstack/ssh-known-hosts-edpm-deployment-lmbs9" Nov 29 07:40:47 crc kubenswrapper[4731]: I1129 07:40:47.222596 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9spv\" (UniqueName: \"kubernetes.io/projected/6ade5882-f94b-4588-887c-5510346c10cc-kube-api-access-n9spv\") pod \"ssh-known-hosts-edpm-deployment-lmbs9\" (UID: \"6ade5882-f94b-4588-887c-5510346c10cc\") " pod="openstack/ssh-known-hosts-edpm-deployment-lmbs9" Nov 29 07:40:47 crc kubenswrapper[4731]: I1129 07:40:47.222781 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6ade5882-f94b-4588-887c-5510346c10cc-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-lmbs9\" (UID: \"6ade5882-f94b-4588-887c-5510346c10cc\") " pod="openstack/ssh-known-hosts-edpm-deployment-lmbs9" Nov 29 07:40:47 crc kubenswrapper[4731]: I1129 07:40:47.223463 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/6ade5882-f94b-4588-887c-5510346c10cc-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-lmbs9\" (UID: \"6ade5882-f94b-4588-887c-5510346c10cc\") " pod="openstack/ssh-known-hosts-edpm-deployment-lmbs9" Nov 29 07:40:47 crc kubenswrapper[4731]: I1129 07:40:47.230200 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/6ade5882-f94b-4588-887c-5510346c10cc-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-lmbs9\" (UID: \"6ade5882-f94b-4588-887c-5510346c10cc\") " pod="openstack/ssh-known-hosts-edpm-deployment-lmbs9" Nov 29 07:40:47 crc kubenswrapper[4731]: I1129 07:40:47.240371 4731 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6ade5882-f94b-4588-887c-5510346c10cc-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-lmbs9\" (UID: \"6ade5882-f94b-4588-887c-5510346c10cc\") " pod="openstack/ssh-known-hosts-edpm-deployment-lmbs9" Nov 29 07:40:47 crc kubenswrapper[4731]: I1129 07:40:47.242989 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9spv\" (UniqueName: \"kubernetes.io/projected/6ade5882-f94b-4588-887c-5510346c10cc-kube-api-access-n9spv\") pod \"ssh-known-hosts-edpm-deployment-lmbs9\" (UID: \"6ade5882-f94b-4588-887c-5510346c10cc\") " pod="openstack/ssh-known-hosts-edpm-deployment-lmbs9" Nov 29 07:40:47 crc kubenswrapper[4731]: I1129 07:40:47.267890 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-lmbs9" Nov 29 07:40:47 crc kubenswrapper[4731]: I1129 07:40:47.902575 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-lmbs9"] Nov 29 07:40:47 crc kubenswrapper[4731]: I1129 07:40:47.907253 4731 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 29 07:40:48 crc kubenswrapper[4731]: I1129 07:40:48.787666 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-lmbs9" event={"ID":"6ade5882-f94b-4588-887c-5510346c10cc","Type":"ContainerStarted","Data":"8b3998186db65fef6b2d683383d35dc38a53c67469751ed35647fa630a17a9c4"} Nov 29 07:40:48 crc kubenswrapper[4731]: I1129 07:40:48.788250 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-lmbs9" event={"ID":"6ade5882-f94b-4588-887c-5510346c10cc","Type":"ContainerStarted","Data":"595f47aabb50f5c2ba7f3e714ca01cf7956595e7ca7fce4b43cab53c445cbb48"} Nov 29 07:40:48 crc kubenswrapper[4731]: I1129 
07:40:48.820741 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-lmbs9" podStartSLOduration=2.308049835 podStartE2EDuration="2.820710287s" podCreationTimestamp="2025-11-29 07:40:46 +0000 UTC" firstStartedPulling="2025-11-29 07:40:47.906987836 +0000 UTC m=+2086.797348939" lastFinishedPulling="2025-11-29 07:40:48.419648288 +0000 UTC m=+2087.310009391" observedRunningTime="2025-11-29 07:40:48.81211918 +0000 UTC m=+2087.702480283" watchObservedRunningTime="2025-11-29 07:40:48.820710287 +0000 UTC m=+2087.711071390" Nov 29 07:40:56 crc kubenswrapper[4731]: I1129 07:40:56.879032 4731 generic.go:334] "Generic (PLEG): container finished" podID="6ade5882-f94b-4588-887c-5510346c10cc" containerID="8b3998186db65fef6b2d683383d35dc38a53c67469751ed35647fa630a17a9c4" exitCode=0 Nov 29 07:40:56 crc kubenswrapper[4731]: I1129 07:40:56.879789 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-lmbs9" event={"ID":"6ade5882-f94b-4588-887c-5510346c10cc","Type":"ContainerDied","Data":"8b3998186db65fef6b2d683383d35dc38a53c67469751ed35647fa630a17a9c4"} Nov 29 07:40:58 crc kubenswrapper[4731]: I1129 07:40:58.378313 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-lmbs9" Nov 29 07:40:58 crc kubenswrapper[4731]: I1129 07:40:58.508092 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n9spv\" (UniqueName: \"kubernetes.io/projected/6ade5882-f94b-4588-887c-5510346c10cc-kube-api-access-n9spv\") pod \"6ade5882-f94b-4588-887c-5510346c10cc\" (UID: \"6ade5882-f94b-4588-887c-5510346c10cc\") " Nov 29 07:40:58 crc kubenswrapper[4731]: I1129 07:40:58.508232 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/6ade5882-f94b-4588-887c-5510346c10cc-inventory-0\") pod \"6ade5882-f94b-4588-887c-5510346c10cc\" (UID: \"6ade5882-f94b-4588-887c-5510346c10cc\") " Nov 29 07:40:58 crc kubenswrapper[4731]: I1129 07:40:58.508348 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6ade5882-f94b-4588-887c-5510346c10cc-ssh-key-openstack-edpm-ipam\") pod \"6ade5882-f94b-4588-887c-5510346c10cc\" (UID: \"6ade5882-f94b-4588-887c-5510346c10cc\") " Nov 29 07:40:58 crc kubenswrapper[4731]: I1129 07:40:58.517941 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ade5882-f94b-4588-887c-5510346c10cc-kube-api-access-n9spv" (OuterVolumeSpecName: "kube-api-access-n9spv") pod "6ade5882-f94b-4588-887c-5510346c10cc" (UID: "6ade5882-f94b-4588-887c-5510346c10cc"). InnerVolumeSpecName "kube-api-access-n9spv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:40:58 crc kubenswrapper[4731]: I1129 07:40:58.542966 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ade5882-f94b-4588-887c-5510346c10cc-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "6ade5882-f94b-4588-887c-5510346c10cc" (UID: "6ade5882-f94b-4588-887c-5510346c10cc"). 
InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:40:58 crc kubenswrapper[4731]: I1129 07:40:58.543312 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ade5882-f94b-4588-887c-5510346c10cc-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "6ade5882-f94b-4588-887c-5510346c10cc" (UID: "6ade5882-f94b-4588-887c-5510346c10cc"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:40:58 crc kubenswrapper[4731]: I1129 07:40:58.611752 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n9spv\" (UniqueName: \"kubernetes.io/projected/6ade5882-f94b-4588-887c-5510346c10cc-kube-api-access-n9spv\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:58 crc kubenswrapper[4731]: I1129 07:40:58.612037 4731 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/6ade5882-f94b-4588-887c-5510346c10cc-inventory-0\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:58 crc kubenswrapper[4731]: I1129 07:40:58.612117 4731 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6ade5882-f94b-4588-887c-5510346c10cc-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:58 crc kubenswrapper[4731]: I1129 07:40:58.901398 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-lmbs9" event={"ID":"6ade5882-f94b-4588-887c-5510346c10cc","Type":"ContainerDied","Data":"595f47aabb50f5c2ba7f3e714ca01cf7956595e7ca7fce4b43cab53c445cbb48"} Nov 29 07:40:58 crc kubenswrapper[4731]: I1129 07:40:58.902277 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="595f47aabb50f5c2ba7f3e714ca01cf7956595e7ca7fce4b43cab53c445cbb48" Nov 29 07:40:58 crc kubenswrapper[4731]: I1129 07:40:58.901486 
4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-lmbs9" Nov 29 07:40:59 crc kubenswrapper[4731]: I1129 07:40:59.007692 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-c2dpb"] Nov 29 07:40:59 crc kubenswrapper[4731]: E1129 07:40:59.013073 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ade5882-f94b-4588-887c-5510346c10cc" containerName="ssh-known-hosts-edpm-deployment" Nov 29 07:40:59 crc kubenswrapper[4731]: I1129 07:40:59.013132 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ade5882-f94b-4588-887c-5510346c10cc" containerName="ssh-known-hosts-edpm-deployment" Nov 29 07:40:59 crc kubenswrapper[4731]: I1129 07:40:59.013558 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ade5882-f94b-4588-887c-5510346c10cc" containerName="ssh-known-hosts-edpm-deployment" Nov 29 07:40:59 crc kubenswrapper[4731]: I1129 07:40:59.014614 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-c2dpb" Nov 29 07:40:59 crc kubenswrapper[4731]: I1129 07:40:59.017868 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 29 07:40:59 crc kubenswrapper[4731]: I1129 07:40:59.018485 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 29 07:40:59 crc kubenswrapper[4731]: I1129 07:40:59.018714 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nvl6q" Nov 29 07:40:59 crc kubenswrapper[4731]: I1129 07:40:59.019623 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 29 07:40:59 crc kubenswrapper[4731]: I1129 07:40:59.020891 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-c2dpb"] Nov 29 07:40:59 crc kubenswrapper[4731]: I1129 07:40:59.125967 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/33ad76bc-c3c7-47e2-9c32-77dd670cf832-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-c2dpb\" (UID: \"33ad76bc-c3c7-47e2-9c32-77dd670cf832\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-c2dpb" Nov 29 07:40:59 crc kubenswrapper[4731]: I1129 07:40:59.126807 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tksb2\" (UniqueName: \"kubernetes.io/projected/33ad76bc-c3c7-47e2-9c32-77dd670cf832-kube-api-access-tksb2\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-c2dpb\" (UID: \"33ad76bc-c3c7-47e2-9c32-77dd670cf832\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-c2dpb" Nov 29 07:40:59 crc kubenswrapper[4731]: I1129 07:40:59.127037 4731 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/33ad76bc-c3c7-47e2-9c32-77dd670cf832-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-c2dpb\" (UID: \"33ad76bc-c3c7-47e2-9c32-77dd670cf832\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-c2dpb" Nov 29 07:40:59 crc kubenswrapper[4731]: I1129 07:40:59.229438 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/33ad76bc-c3c7-47e2-9c32-77dd670cf832-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-c2dpb\" (UID: \"33ad76bc-c3c7-47e2-9c32-77dd670cf832\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-c2dpb" Nov 29 07:40:59 crc kubenswrapper[4731]: I1129 07:40:59.230244 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tksb2\" (UniqueName: \"kubernetes.io/projected/33ad76bc-c3c7-47e2-9c32-77dd670cf832-kube-api-access-tksb2\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-c2dpb\" (UID: \"33ad76bc-c3c7-47e2-9c32-77dd670cf832\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-c2dpb" Nov 29 07:40:59 crc kubenswrapper[4731]: I1129 07:40:59.230521 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/33ad76bc-c3c7-47e2-9c32-77dd670cf832-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-c2dpb\" (UID: \"33ad76bc-c3c7-47e2-9c32-77dd670cf832\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-c2dpb" Nov 29 07:40:59 crc kubenswrapper[4731]: I1129 07:40:59.237136 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/33ad76bc-c3c7-47e2-9c32-77dd670cf832-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-c2dpb\" (UID: \"33ad76bc-c3c7-47e2-9c32-77dd670cf832\") " 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-c2dpb" Nov 29 07:40:59 crc kubenswrapper[4731]: I1129 07:40:59.237365 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/33ad76bc-c3c7-47e2-9c32-77dd670cf832-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-c2dpb\" (UID: \"33ad76bc-c3c7-47e2-9c32-77dd670cf832\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-c2dpb" Nov 29 07:40:59 crc kubenswrapper[4731]: I1129 07:40:59.255590 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tksb2\" (UniqueName: \"kubernetes.io/projected/33ad76bc-c3c7-47e2-9c32-77dd670cf832-kube-api-access-tksb2\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-c2dpb\" (UID: \"33ad76bc-c3c7-47e2-9c32-77dd670cf832\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-c2dpb" Nov 29 07:40:59 crc kubenswrapper[4731]: I1129 07:40:59.335798 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-c2dpb" Nov 29 07:40:59 crc kubenswrapper[4731]: I1129 07:40:59.913308 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-c2dpb"] Nov 29 07:40:59 crc kubenswrapper[4731]: W1129 07:40:59.919992 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod33ad76bc_c3c7_47e2_9c32_77dd670cf832.slice/crio-f56ddc63607192686abacfde7954bf0f169141cadfab34785df42b300f599423 WatchSource:0}: Error finding container f56ddc63607192686abacfde7954bf0f169141cadfab34785df42b300f599423: Status 404 returned error can't find the container with id f56ddc63607192686abacfde7954bf0f169141cadfab34785df42b300f599423 Nov 29 07:41:00 crc kubenswrapper[4731]: I1129 07:41:00.926809 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-c2dpb" event={"ID":"33ad76bc-c3c7-47e2-9c32-77dd670cf832","Type":"ContainerStarted","Data":"cc5a5c229e31f53288b1fc0e5bd0fd28c5e81f13b0e6b01100755d4491c863bf"} Nov 29 07:41:00 crc kubenswrapper[4731]: I1129 07:41:00.927348 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-c2dpb" event={"ID":"33ad76bc-c3c7-47e2-9c32-77dd670cf832","Type":"ContainerStarted","Data":"f56ddc63607192686abacfde7954bf0f169141cadfab34785df42b300f599423"} Nov 29 07:41:00 crc kubenswrapper[4731]: I1129 07:41:00.958028 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-c2dpb" podStartSLOduration=2.373990348 podStartE2EDuration="2.958003468s" podCreationTimestamp="2025-11-29 07:40:58 +0000 UTC" firstStartedPulling="2025-11-29 07:40:59.927226797 +0000 UTC m=+2098.817587900" lastFinishedPulling="2025-11-29 07:41:00.511239917 +0000 UTC m=+2099.401601020" observedRunningTime="2025-11-29 
07:41:00.949151424 +0000 UTC m=+2099.839512527" watchObservedRunningTime="2025-11-29 07:41:00.958003468 +0000 UTC m=+2099.848364571" Nov 29 07:41:10 crc kubenswrapper[4731]: I1129 07:41:10.033613 4731 generic.go:334] "Generic (PLEG): container finished" podID="33ad76bc-c3c7-47e2-9c32-77dd670cf832" containerID="cc5a5c229e31f53288b1fc0e5bd0fd28c5e81f13b0e6b01100755d4491c863bf" exitCode=0 Nov 29 07:41:10 crc kubenswrapper[4731]: I1129 07:41:10.033695 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-c2dpb" event={"ID":"33ad76bc-c3c7-47e2-9c32-77dd670cf832","Type":"ContainerDied","Data":"cc5a5c229e31f53288b1fc0e5bd0fd28c5e81f13b0e6b01100755d4491c863bf"} Nov 29 07:41:11 crc kubenswrapper[4731]: I1129 07:41:11.486964 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-c2dpb" Nov 29 07:41:11 crc kubenswrapper[4731]: I1129 07:41:11.601010 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/33ad76bc-c3c7-47e2-9c32-77dd670cf832-ssh-key\") pod \"33ad76bc-c3c7-47e2-9c32-77dd670cf832\" (UID: \"33ad76bc-c3c7-47e2-9c32-77dd670cf832\") " Nov 29 07:41:11 crc kubenswrapper[4731]: I1129 07:41:11.601190 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/33ad76bc-c3c7-47e2-9c32-77dd670cf832-inventory\") pod \"33ad76bc-c3c7-47e2-9c32-77dd670cf832\" (UID: \"33ad76bc-c3c7-47e2-9c32-77dd670cf832\") " Nov 29 07:41:11 crc kubenswrapper[4731]: I1129 07:41:11.601410 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tksb2\" (UniqueName: \"kubernetes.io/projected/33ad76bc-c3c7-47e2-9c32-77dd670cf832-kube-api-access-tksb2\") pod \"33ad76bc-c3c7-47e2-9c32-77dd670cf832\" (UID: \"33ad76bc-c3c7-47e2-9c32-77dd670cf832\") " Nov 29 07:41:11 crc 
kubenswrapper[4731]: I1129 07:41:11.608500 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33ad76bc-c3c7-47e2-9c32-77dd670cf832-kube-api-access-tksb2" (OuterVolumeSpecName: "kube-api-access-tksb2") pod "33ad76bc-c3c7-47e2-9c32-77dd670cf832" (UID: "33ad76bc-c3c7-47e2-9c32-77dd670cf832"). InnerVolumeSpecName "kube-api-access-tksb2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:41:11 crc kubenswrapper[4731]: I1129 07:41:11.634178 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33ad76bc-c3c7-47e2-9c32-77dd670cf832-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "33ad76bc-c3c7-47e2-9c32-77dd670cf832" (UID: "33ad76bc-c3c7-47e2-9c32-77dd670cf832"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:41:11 crc kubenswrapper[4731]: I1129 07:41:11.646776 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33ad76bc-c3c7-47e2-9c32-77dd670cf832-inventory" (OuterVolumeSpecName: "inventory") pod "33ad76bc-c3c7-47e2-9c32-77dd670cf832" (UID: "33ad76bc-c3c7-47e2-9c32-77dd670cf832"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:41:11 crc kubenswrapper[4731]: I1129 07:41:11.704415 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tksb2\" (UniqueName: \"kubernetes.io/projected/33ad76bc-c3c7-47e2-9c32-77dd670cf832-kube-api-access-tksb2\") on node \"crc\" DevicePath \"\"" Nov 29 07:41:11 crc kubenswrapper[4731]: I1129 07:41:11.704714 4731 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/33ad76bc-c3c7-47e2-9c32-77dd670cf832-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 29 07:41:11 crc kubenswrapper[4731]: I1129 07:41:11.704795 4731 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/33ad76bc-c3c7-47e2-9c32-77dd670cf832-inventory\") on node \"crc\" DevicePath \"\"" Nov 29 07:41:12 crc kubenswrapper[4731]: I1129 07:41:12.063483 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-c2dpb" event={"ID":"33ad76bc-c3c7-47e2-9c32-77dd670cf832","Type":"ContainerDied","Data":"f56ddc63607192686abacfde7954bf0f169141cadfab34785df42b300f599423"} Nov 29 07:41:12 crc kubenswrapper[4731]: I1129 07:41:12.063538 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f56ddc63607192686abacfde7954bf0f169141cadfab34785df42b300f599423" Nov 29 07:41:12 crc kubenswrapper[4731]: I1129 07:41:12.063616 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-c2dpb" Nov 29 07:41:12 crc kubenswrapper[4731]: I1129 07:41:12.140313 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rx8mb"] Nov 29 07:41:12 crc kubenswrapper[4731]: E1129 07:41:12.141012 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33ad76bc-c3c7-47e2-9c32-77dd670cf832" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 29 07:41:12 crc kubenswrapper[4731]: I1129 07:41:12.141038 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="33ad76bc-c3c7-47e2-9c32-77dd670cf832" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 29 07:41:12 crc kubenswrapper[4731]: I1129 07:41:12.141309 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="33ad76bc-c3c7-47e2-9c32-77dd670cf832" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 29 07:41:12 crc kubenswrapper[4731]: I1129 07:41:12.142281 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rx8mb" Nov 29 07:41:12 crc kubenswrapper[4731]: I1129 07:41:12.150852 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 29 07:41:12 crc kubenswrapper[4731]: I1129 07:41:12.151134 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nvl6q" Nov 29 07:41:12 crc kubenswrapper[4731]: I1129 07:41:12.151194 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 29 07:41:12 crc kubenswrapper[4731]: I1129 07:41:12.151438 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 29 07:41:12 crc kubenswrapper[4731]: I1129 07:41:12.151848 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rx8mb"] Nov 29 07:41:12 crc kubenswrapper[4731]: I1129 07:41:12.216970 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/350ab6c4-0e67-42b4-8f98-ee4c319198e6-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-rx8mb\" (UID: \"350ab6c4-0e67-42b4-8f98-ee4c319198e6\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rx8mb" Nov 29 07:41:12 crc kubenswrapper[4731]: I1129 07:41:12.217433 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5h6b2\" (UniqueName: \"kubernetes.io/projected/350ab6c4-0e67-42b4-8f98-ee4c319198e6-kube-api-access-5h6b2\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-rx8mb\" (UID: \"350ab6c4-0e67-42b4-8f98-ee4c319198e6\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rx8mb" Nov 29 07:41:12 crc kubenswrapper[4731]: I1129 07:41:12.217538 4731 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/350ab6c4-0e67-42b4-8f98-ee4c319198e6-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-rx8mb\" (UID: \"350ab6c4-0e67-42b4-8f98-ee4c319198e6\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rx8mb" Nov 29 07:41:12 crc kubenswrapper[4731]: I1129 07:41:12.320241 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/350ab6c4-0e67-42b4-8f98-ee4c319198e6-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-rx8mb\" (UID: \"350ab6c4-0e67-42b4-8f98-ee4c319198e6\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rx8mb" Nov 29 07:41:12 crc kubenswrapper[4731]: I1129 07:41:12.320460 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5h6b2\" (UniqueName: \"kubernetes.io/projected/350ab6c4-0e67-42b4-8f98-ee4c319198e6-kube-api-access-5h6b2\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-rx8mb\" (UID: \"350ab6c4-0e67-42b4-8f98-ee4c319198e6\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rx8mb" Nov 29 07:41:12 crc kubenswrapper[4731]: I1129 07:41:12.320513 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/350ab6c4-0e67-42b4-8f98-ee4c319198e6-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-rx8mb\" (UID: \"350ab6c4-0e67-42b4-8f98-ee4c319198e6\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rx8mb" Nov 29 07:41:12 crc kubenswrapper[4731]: I1129 07:41:12.325159 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/350ab6c4-0e67-42b4-8f98-ee4c319198e6-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-rx8mb\" (UID: \"350ab6c4-0e67-42b4-8f98-ee4c319198e6\") " 
pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rx8mb" Nov 29 07:41:12 crc kubenswrapper[4731]: I1129 07:41:12.330487 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/350ab6c4-0e67-42b4-8f98-ee4c319198e6-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-rx8mb\" (UID: \"350ab6c4-0e67-42b4-8f98-ee4c319198e6\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rx8mb" Nov 29 07:41:12 crc kubenswrapper[4731]: I1129 07:41:12.338440 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5h6b2\" (UniqueName: \"kubernetes.io/projected/350ab6c4-0e67-42b4-8f98-ee4c319198e6-kube-api-access-5h6b2\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-rx8mb\" (UID: \"350ab6c4-0e67-42b4-8f98-ee4c319198e6\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rx8mb" Nov 29 07:41:12 crc kubenswrapper[4731]: I1129 07:41:12.476861 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rx8mb" Nov 29 07:41:13 crc kubenswrapper[4731]: I1129 07:41:13.100700 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rx8mb"] Nov 29 07:41:14 crc kubenswrapper[4731]: I1129 07:41:14.100013 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rx8mb" event={"ID":"350ab6c4-0e67-42b4-8f98-ee4c319198e6","Type":"ContainerStarted","Data":"f4615e3cba2e80413936b66601485c1d8a78639a37eea411ea70ba55be39de2a"} Nov 29 07:41:16 crc kubenswrapper[4731]: I1129 07:41:16.124967 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rx8mb" event={"ID":"350ab6c4-0e67-42b4-8f98-ee4c319198e6","Type":"ContainerStarted","Data":"803391e0b170787e09658d6c0f55befd2fbca3237aeb1d25873a987694f50dca"} Nov 29 07:41:16 crc kubenswrapper[4731]: I1129 07:41:16.157921 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rx8mb" podStartSLOduration=2.349332205 podStartE2EDuration="4.157898886s" podCreationTimestamp="2025-11-29 07:41:12 +0000 UTC" firstStartedPulling="2025-11-29 07:41:13.108237248 +0000 UTC m=+2111.998598351" lastFinishedPulling="2025-11-29 07:41:14.916803929 +0000 UTC m=+2113.807165032" observedRunningTime="2025-11-29 07:41:16.154362764 +0000 UTC m=+2115.044723887" watchObservedRunningTime="2025-11-29 07:41:16.157898886 +0000 UTC m=+2115.048259979" Nov 29 07:41:27 crc kubenswrapper[4731]: I1129 07:41:27.247944 4731 generic.go:334] "Generic (PLEG): container finished" podID="350ab6c4-0e67-42b4-8f98-ee4c319198e6" containerID="803391e0b170787e09658d6c0f55befd2fbca3237aeb1d25873a987694f50dca" exitCode=0 Nov 29 07:41:27 crc kubenswrapper[4731]: I1129 07:41:27.248047 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rx8mb" event={"ID":"350ab6c4-0e67-42b4-8f98-ee4c319198e6","Type":"ContainerDied","Data":"803391e0b170787e09658d6c0f55befd2fbca3237aeb1d25873a987694f50dca"} Nov 29 07:41:28 crc kubenswrapper[4731]: I1129 07:41:28.725987 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rx8mb" Nov 29 07:41:28 crc kubenswrapper[4731]: I1129 07:41:28.852284 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5h6b2\" (UniqueName: \"kubernetes.io/projected/350ab6c4-0e67-42b4-8f98-ee4c319198e6-kube-api-access-5h6b2\") pod \"350ab6c4-0e67-42b4-8f98-ee4c319198e6\" (UID: \"350ab6c4-0e67-42b4-8f98-ee4c319198e6\") " Nov 29 07:41:28 crc kubenswrapper[4731]: I1129 07:41:28.853035 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/350ab6c4-0e67-42b4-8f98-ee4c319198e6-ssh-key\") pod \"350ab6c4-0e67-42b4-8f98-ee4c319198e6\" (UID: \"350ab6c4-0e67-42b4-8f98-ee4c319198e6\") " Nov 29 07:41:28 crc kubenswrapper[4731]: I1129 07:41:28.853157 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/350ab6c4-0e67-42b4-8f98-ee4c319198e6-inventory\") pod \"350ab6c4-0e67-42b4-8f98-ee4c319198e6\" (UID: \"350ab6c4-0e67-42b4-8f98-ee4c319198e6\") " Nov 29 07:41:28 crc kubenswrapper[4731]: I1129 07:41:28.876772 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/350ab6c4-0e67-42b4-8f98-ee4c319198e6-kube-api-access-5h6b2" (OuterVolumeSpecName: "kube-api-access-5h6b2") pod "350ab6c4-0e67-42b4-8f98-ee4c319198e6" (UID: "350ab6c4-0e67-42b4-8f98-ee4c319198e6"). InnerVolumeSpecName "kube-api-access-5h6b2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:41:28 crc kubenswrapper[4731]: I1129 07:41:28.892430 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/350ab6c4-0e67-42b4-8f98-ee4c319198e6-inventory" (OuterVolumeSpecName: "inventory") pod "350ab6c4-0e67-42b4-8f98-ee4c319198e6" (UID: "350ab6c4-0e67-42b4-8f98-ee4c319198e6"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:41:28 crc kubenswrapper[4731]: I1129 07:41:28.892476 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/350ab6c4-0e67-42b4-8f98-ee4c319198e6-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "350ab6c4-0e67-42b4-8f98-ee4c319198e6" (UID: "350ab6c4-0e67-42b4-8f98-ee4c319198e6"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:41:28 crc kubenswrapper[4731]: I1129 07:41:28.955389 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5h6b2\" (UniqueName: \"kubernetes.io/projected/350ab6c4-0e67-42b4-8f98-ee4c319198e6-kube-api-access-5h6b2\") on node \"crc\" DevicePath \"\"" Nov 29 07:41:28 crc kubenswrapper[4731]: I1129 07:41:28.955435 4731 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/350ab6c4-0e67-42b4-8f98-ee4c319198e6-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 29 07:41:28 crc kubenswrapper[4731]: I1129 07:41:28.955450 4731 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/350ab6c4-0e67-42b4-8f98-ee4c319198e6-inventory\") on node \"crc\" DevicePath \"\"" Nov 29 07:41:29 crc kubenswrapper[4731]: I1129 07:41:29.271266 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rx8mb" 
event={"ID":"350ab6c4-0e67-42b4-8f98-ee4c319198e6","Type":"ContainerDied","Data":"f4615e3cba2e80413936b66601485c1d8a78639a37eea411ea70ba55be39de2a"} Nov 29 07:41:29 crc kubenswrapper[4731]: I1129 07:41:29.271322 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f4615e3cba2e80413936b66601485c1d8a78639a37eea411ea70ba55be39de2a" Nov 29 07:41:29 crc kubenswrapper[4731]: I1129 07:41:29.271731 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rx8mb" Nov 29 07:41:29 crc kubenswrapper[4731]: I1129 07:41:29.413649 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-r9zqx"] Nov 29 07:41:29 crc kubenswrapper[4731]: E1129 07:41:29.414319 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="350ab6c4-0e67-42b4-8f98-ee4c319198e6" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 29 07:41:29 crc kubenswrapper[4731]: I1129 07:41:29.414344 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="350ab6c4-0e67-42b4-8f98-ee4c319198e6" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 29 07:41:29 crc kubenswrapper[4731]: I1129 07:41:29.415702 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="350ab6c4-0e67-42b4-8f98-ee4c319198e6" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 29 07:41:29 crc kubenswrapper[4731]: I1129 07:41:29.417487 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-r9zqx" Nov 29 07:41:29 crc kubenswrapper[4731]: I1129 07:41:29.421795 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Nov 29 07:41:29 crc kubenswrapper[4731]: I1129 07:41:29.422146 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nvl6q" Nov 29 07:41:29 crc kubenswrapper[4731]: I1129 07:41:29.422493 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 29 07:41:29 crc kubenswrapper[4731]: I1129 07:41:29.422942 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 29 07:41:29 crc kubenswrapper[4731]: I1129 07:41:29.423144 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Nov 29 07:41:29 crc kubenswrapper[4731]: I1129 07:41:29.423313 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 29 07:41:29 crc kubenswrapper[4731]: I1129 07:41:29.423483 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Nov 29 07:41:29 crc kubenswrapper[4731]: I1129 07:41:29.423692 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Nov 29 07:41:29 crc kubenswrapper[4731]: I1129 07:41:29.429454 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-r9zqx"] Nov 29 07:41:29 crc kubenswrapper[4731]: I1129 07:41:29.573100 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/5e08a4ae-50ef-4cf9-97a8-bc09c1896afb-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-r9zqx\" (UID: \"5e08a4ae-50ef-4cf9-97a8-bc09c1896afb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-r9zqx" Nov 29 07:41:29 crc kubenswrapper[4731]: I1129 07:41:29.573185 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rvhd\" (UniqueName: \"kubernetes.io/projected/5e08a4ae-50ef-4cf9-97a8-bc09c1896afb-kube-api-access-4rvhd\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-r9zqx\" (UID: \"5e08a4ae-50ef-4cf9-97a8-bc09c1896afb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-r9zqx" Nov 29 07:41:29 crc kubenswrapper[4731]: I1129 07:41:29.573257 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e08a4ae-50ef-4cf9-97a8-bc09c1896afb-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-r9zqx\" (UID: \"5e08a4ae-50ef-4cf9-97a8-bc09c1896afb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-r9zqx" Nov 29 07:41:29 crc kubenswrapper[4731]: I1129 07:41:29.573298 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5e08a4ae-50ef-4cf9-97a8-bc09c1896afb-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-r9zqx\" (UID: \"5e08a4ae-50ef-4cf9-97a8-bc09c1896afb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-r9zqx" Nov 29 07:41:29 crc kubenswrapper[4731]: I1129 07:41:29.573367 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/5e08a4ae-50ef-4cf9-97a8-bc09c1896afb-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-r9zqx\" (UID: \"5e08a4ae-50ef-4cf9-97a8-bc09c1896afb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-r9zqx" Nov 29 07:41:29 crc kubenswrapper[4731]: I1129 07:41:29.573394 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5e08a4ae-50ef-4cf9-97a8-bc09c1896afb-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-r9zqx\" (UID: \"5e08a4ae-50ef-4cf9-97a8-bc09c1896afb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-r9zqx" Nov 29 07:41:29 crc kubenswrapper[4731]: I1129 07:41:29.573425 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5e08a4ae-50ef-4cf9-97a8-bc09c1896afb-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-r9zqx\" (UID: \"5e08a4ae-50ef-4cf9-97a8-bc09c1896afb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-r9zqx" Nov 29 07:41:29 crc kubenswrapper[4731]: I1129 07:41:29.573458 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e08a4ae-50ef-4cf9-97a8-bc09c1896afb-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-r9zqx\" (UID: \"5e08a4ae-50ef-4cf9-97a8-bc09c1896afb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-r9zqx" Nov 29 07:41:29 crc kubenswrapper[4731]: I1129 07:41:29.573492 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/5e08a4ae-50ef-4cf9-97a8-bc09c1896afb-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-r9zqx\" (UID: \"5e08a4ae-50ef-4cf9-97a8-bc09c1896afb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-r9zqx" Nov 29 07:41:29 crc kubenswrapper[4731]: I1129 07:41:29.573536 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e08a4ae-50ef-4cf9-97a8-bc09c1896afb-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-r9zqx\" (UID: \"5e08a4ae-50ef-4cf9-97a8-bc09c1896afb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-r9zqx" Nov 29 07:41:29 crc kubenswrapper[4731]: I1129 07:41:29.573591 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e08a4ae-50ef-4cf9-97a8-bc09c1896afb-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-r9zqx\" (UID: \"5e08a4ae-50ef-4cf9-97a8-bc09c1896afb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-r9zqx" Nov 29 07:41:29 crc kubenswrapper[4731]: I1129 07:41:29.574824 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5e08a4ae-50ef-4cf9-97a8-bc09c1896afb-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-r9zqx\" (UID: \"5e08a4ae-50ef-4cf9-97a8-bc09c1896afb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-r9zqx" Nov 29 07:41:29 crc kubenswrapper[4731]: I1129 07:41:29.574907 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/5e08a4ae-50ef-4cf9-97a8-bc09c1896afb-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-r9zqx\" (UID: \"5e08a4ae-50ef-4cf9-97a8-bc09c1896afb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-r9zqx" Nov 29 07:41:29 crc kubenswrapper[4731]: I1129 07:41:29.575029 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e08a4ae-50ef-4cf9-97a8-bc09c1896afb-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-r9zqx\" (UID: \"5e08a4ae-50ef-4cf9-97a8-bc09c1896afb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-r9zqx" Nov 29 07:41:29 crc kubenswrapper[4731]: I1129 07:41:29.676708 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5e08a4ae-50ef-4cf9-97a8-bc09c1896afb-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-r9zqx\" (UID: \"5e08a4ae-50ef-4cf9-97a8-bc09c1896afb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-r9zqx" Nov 29 07:41:29 crc kubenswrapper[4731]: I1129 07:41:29.676785 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5e08a4ae-50ef-4cf9-97a8-bc09c1896afb-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-r9zqx\" (UID: \"5e08a4ae-50ef-4cf9-97a8-bc09c1896afb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-r9zqx" Nov 29 07:41:29 crc kubenswrapper[4731]: I1129 07:41:29.676827 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e08a4ae-50ef-4cf9-97a8-bc09c1896afb-ovn-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-r9zqx\" (UID: \"5e08a4ae-50ef-4cf9-97a8-bc09c1896afb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-r9zqx" Nov 29 07:41:29 crc kubenswrapper[4731]: I1129 07:41:29.676873 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5e08a4ae-50ef-4cf9-97a8-bc09c1896afb-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-r9zqx\" (UID: \"5e08a4ae-50ef-4cf9-97a8-bc09c1896afb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-r9zqx" Nov 29 07:41:29 crc kubenswrapper[4731]: I1129 07:41:29.676907 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rvhd\" (UniqueName: \"kubernetes.io/projected/5e08a4ae-50ef-4cf9-97a8-bc09c1896afb-kube-api-access-4rvhd\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-r9zqx\" (UID: \"5e08a4ae-50ef-4cf9-97a8-bc09c1896afb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-r9zqx" Nov 29 07:41:29 crc kubenswrapper[4731]: I1129 07:41:29.676950 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e08a4ae-50ef-4cf9-97a8-bc09c1896afb-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-r9zqx\" (UID: \"5e08a4ae-50ef-4cf9-97a8-bc09c1896afb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-r9zqx" Nov 29 07:41:29 crc kubenswrapper[4731]: I1129 07:41:29.677023 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5e08a4ae-50ef-4cf9-97a8-bc09c1896afb-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-r9zqx\" 
(UID: \"5e08a4ae-50ef-4cf9-97a8-bc09c1896afb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-r9zqx" Nov 29 07:41:29 crc kubenswrapper[4731]: I1129 07:41:29.677069 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e08a4ae-50ef-4cf9-97a8-bc09c1896afb-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-r9zqx\" (UID: \"5e08a4ae-50ef-4cf9-97a8-bc09c1896afb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-r9zqx" Nov 29 07:41:29 crc kubenswrapper[4731]: I1129 07:41:29.677098 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5e08a4ae-50ef-4cf9-97a8-bc09c1896afb-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-r9zqx\" (UID: \"5e08a4ae-50ef-4cf9-97a8-bc09c1896afb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-r9zqx" Nov 29 07:41:29 crc kubenswrapper[4731]: I1129 07:41:29.677127 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5e08a4ae-50ef-4cf9-97a8-bc09c1896afb-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-r9zqx\" (UID: \"5e08a4ae-50ef-4cf9-97a8-bc09c1896afb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-r9zqx" Nov 29 07:41:29 crc kubenswrapper[4731]: I1129 07:41:29.677158 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e08a4ae-50ef-4cf9-97a8-bc09c1896afb-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-r9zqx\" (UID: \"5e08a4ae-50ef-4cf9-97a8-bc09c1896afb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-r9zqx" Nov 29 07:41:29 crc 
kubenswrapper[4731]: I1129 07:41:29.677185 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e08a4ae-50ef-4cf9-97a8-bc09c1896afb-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-r9zqx\" (UID: \"5e08a4ae-50ef-4cf9-97a8-bc09c1896afb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-r9zqx" Nov 29 07:41:29 crc kubenswrapper[4731]: I1129 07:41:29.677212 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e08a4ae-50ef-4cf9-97a8-bc09c1896afb-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-r9zqx\" (UID: \"5e08a4ae-50ef-4cf9-97a8-bc09c1896afb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-r9zqx" Nov 29 07:41:29 crc kubenswrapper[4731]: I1129 07:41:29.677233 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e08a4ae-50ef-4cf9-97a8-bc09c1896afb-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-r9zqx\" (UID: \"5e08a4ae-50ef-4cf9-97a8-bc09c1896afb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-r9zqx" Nov 29 07:41:29 crc kubenswrapper[4731]: I1129 07:41:29.684775 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5e08a4ae-50ef-4cf9-97a8-bc09c1896afb-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-r9zqx\" (UID: \"5e08a4ae-50ef-4cf9-97a8-bc09c1896afb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-r9zqx" Nov 29 07:41:29 crc kubenswrapper[4731]: I1129 07:41:29.684805 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5e08a4ae-50ef-4cf9-97a8-bc09c1896afb-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-r9zqx\" (UID: \"5e08a4ae-50ef-4cf9-97a8-bc09c1896afb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-r9zqx" Nov 29 07:41:29 crc kubenswrapper[4731]: I1129 07:41:29.684696 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5e08a4ae-50ef-4cf9-97a8-bc09c1896afb-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-r9zqx\" (UID: \"5e08a4ae-50ef-4cf9-97a8-bc09c1896afb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-r9zqx" Nov 29 07:41:29 crc kubenswrapper[4731]: I1129 07:41:29.685842 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5e08a4ae-50ef-4cf9-97a8-bc09c1896afb-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-r9zqx\" (UID: \"5e08a4ae-50ef-4cf9-97a8-bc09c1896afb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-r9zqx" Nov 29 07:41:29 crc kubenswrapper[4731]: I1129 07:41:29.686450 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e08a4ae-50ef-4cf9-97a8-bc09c1896afb-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-r9zqx\" (UID: \"5e08a4ae-50ef-4cf9-97a8-bc09c1896afb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-r9zqx" Nov 29 07:41:29 crc kubenswrapper[4731]: I1129 07:41:29.687061 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/5e08a4ae-50ef-4cf9-97a8-bc09c1896afb-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-r9zqx\" (UID: \"5e08a4ae-50ef-4cf9-97a8-bc09c1896afb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-r9zqx" Nov 29 07:41:29 crc kubenswrapper[4731]: I1129 07:41:29.689288 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5e08a4ae-50ef-4cf9-97a8-bc09c1896afb-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-r9zqx\" (UID: \"5e08a4ae-50ef-4cf9-97a8-bc09c1896afb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-r9zqx" Nov 29 07:41:29 crc kubenswrapper[4731]: I1129 07:41:29.690133 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e08a4ae-50ef-4cf9-97a8-bc09c1896afb-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-r9zqx\" (UID: \"5e08a4ae-50ef-4cf9-97a8-bc09c1896afb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-r9zqx" Nov 29 07:41:29 crc kubenswrapper[4731]: I1129 07:41:29.691775 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5e08a4ae-50ef-4cf9-97a8-bc09c1896afb-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-r9zqx\" (UID: \"5e08a4ae-50ef-4cf9-97a8-bc09c1896afb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-r9zqx" Nov 29 07:41:29 crc kubenswrapper[4731]: I1129 07:41:29.693128 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e08a4ae-50ef-4cf9-97a8-bc09c1896afb-repo-setup-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-r9zqx\" (UID: \"5e08a4ae-50ef-4cf9-97a8-bc09c1896afb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-r9zqx" Nov 29 07:41:29 crc kubenswrapper[4731]: I1129 07:41:29.695200 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e08a4ae-50ef-4cf9-97a8-bc09c1896afb-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-r9zqx\" (UID: \"5e08a4ae-50ef-4cf9-97a8-bc09c1896afb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-r9zqx" Nov 29 07:41:29 crc kubenswrapper[4731]: I1129 07:41:29.696911 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e08a4ae-50ef-4cf9-97a8-bc09c1896afb-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-r9zqx\" (UID: \"5e08a4ae-50ef-4cf9-97a8-bc09c1896afb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-r9zqx" Nov 29 07:41:29 crc kubenswrapper[4731]: I1129 07:41:29.698261 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e08a4ae-50ef-4cf9-97a8-bc09c1896afb-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-r9zqx\" (UID: \"5e08a4ae-50ef-4cf9-97a8-bc09c1896afb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-r9zqx" Nov 29 07:41:29 crc kubenswrapper[4731]: I1129 07:41:29.700923 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rvhd\" (UniqueName: \"kubernetes.io/projected/5e08a4ae-50ef-4cf9-97a8-bc09c1896afb-kube-api-access-4rvhd\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-r9zqx\" (UID: \"5e08a4ae-50ef-4cf9-97a8-bc09c1896afb\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-r9zqx" Nov 29 07:41:29 crc kubenswrapper[4731]: I1129 07:41:29.749467 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-r9zqx" Nov 29 07:41:30 crc kubenswrapper[4731]: I1129 07:41:30.376141 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-r9zqx"] Nov 29 07:41:31 crc kubenswrapper[4731]: I1129 07:41:31.294423 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-r9zqx" event={"ID":"5e08a4ae-50ef-4cf9-97a8-bc09c1896afb","Type":"ContainerStarted","Data":"63902da735981e864223c81900f15c8fa0068ded566e3e5f9a4bdb13b06c147f"} Nov 29 07:41:34 crc kubenswrapper[4731]: I1129 07:41:34.324591 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-r9zqx" event={"ID":"5e08a4ae-50ef-4cf9-97a8-bc09c1896afb","Type":"ContainerStarted","Data":"681cf6aa20015891581c4457073fb4f49a1ddcab8ab1c368ece8e3c734ee3bbf"} Nov 29 07:41:34 crc kubenswrapper[4731]: I1129 07:41:34.358711 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-r9zqx" podStartSLOduration=2.862434777 podStartE2EDuration="5.358689843s" podCreationTimestamp="2025-11-29 07:41:29 +0000 UTC" firstStartedPulling="2025-11-29 07:41:30.371504921 +0000 UTC m=+2129.261866024" lastFinishedPulling="2025-11-29 07:41:32.867759977 +0000 UTC m=+2131.758121090" observedRunningTime="2025-11-29 07:41:34.350098097 +0000 UTC m=+2133.240459200" watchObservedRunningTime="2025-11-29 07:41:34.358689843 +0000 UTC m=+2133.249050946" Nov 29 07:42:03 crc kubenswrapper[4731]: I1129 07:42:03.002121 4731 patch_prober.go:28] interesting pod/machine-config-daemon-rscr8 container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:42:03 crc kubenswrapper[4731]: I1129 07:42:03.002605 4731 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:42:15 crc kubenswrapper[4731]: I1129 07:42:15.748106 4731 generic.go:334] "Generic (PLEG): container finished" podID="5e08a4ae-50ef-4cf9-97a8-bc09c1896afb" containerID="681cf6aa20015891581c4457073fb4f49a1ddcab8ab1c368ece8e3c734ee3bbf" exitCode=0 Nov 29 07:42:15 crc kubenswrapper[4731]: I1129 07:42:15.748208 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-r9zqx" event={"ID":"5e08a4ae-50ef-4cf9-97a8-bc09c1896afb","Type":"ContainerDied","Data":"681cf6aa20015891581c4457073fb4f49a1ddcab8ab1c368ece8e3c734ee3bbf"} Nov 29 07:42:17 crc kubenswrapper[4731]: I1129 07:42:17.225600 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-r9zqx" Nov 29 07:42:17 crc kubenswrapper[4731]: I1129 07:42:17.285071 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e08a4ae-50ef-4cf9-97a8-bc09c1896afb-bootstrap-combined-ca-bundle\") pod \"5e08a4ae-50ef-4cf9-97a8-bc09c1896afb\" (UID: \"5e08a4ae-50ef-4cf9-97a8-bc09c1896afb\") " Nov 29 07:42:17 crc kubenswrapper[4731]: I1129 07:42:17.285132 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5e08a4ae-50ef-4cf9-97a8-bc09c1896afb-openstack-edpm-ipam-ovn-default-certs-0\") pod \"5e08a4ae-50ef-4cf9-97a8-bc09c1896afb\" (UID: \"5e08a4ae-50ef-4cf9-97a8-bc09c1896afb\") " Nov 29 07:42:17 crc kubenswrapper[4731]: I1129 07:42:17.285197 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e08a4ae-50ef-4cf9-97a8-bc09c1896afb-repo-setup-combined-ca-bundle\") pod \"5e08a4ae-50ef-4cf9-97a8-bc09c1896afb\" (UID: \"5e08a4ae-50ef-4cf9-97a8-bc09c1896afb\") " Nov 29 07:42:17 crc kubenswrapper[4731]: I1129 07:42:17.285230 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e08a4ae-50ef-4cf9-97a8-bc09c1896afb-neutron-metadata-combined-ca-bundle\") pod \"5e08a4ae-50ef-4cf9-97a8-bc09c1896afb\" (UID: \"5e08a4ae-50ef-4cf9-97a8-bc09c1896afb\") " Nov 29 07:42:17 crc kubenswrapper[4731]: I1129 07:42:17.285280 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5e08a4ae-50ef-4cf9-97a8-bc09c1896afb-ssh-key\") pod \"5e08a4ae-50ef-4cf9-97a8-bc09c1896afb\" (UID: \"5e08a4ae-50ef-4cf9-97a8-bc09c1896afb\") " Nov 29 
07:42:17 crc kubenswrapper[4731]: I1129 07:42:17.285374 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5e08a4ae-50ef-4cf9-97a8-bc09c1896afb-inventory\") pod \"5e08a4ae-50ef-4cf9-97a8-bc09c1896afb\" (UID: \"5e08a4ae-50ef-4cf9-97a8-bc09c1896afb\") " Nov 29 07:42:17 crc kubenswrapper[4731]: I1129 07:42:17.285403 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5e08a4ae-50ef-4cf9-97a8-bc09c1896afb-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"5e08a4ae-50ef-4cf9-97a8-bc09c1896afb\" (UID: \"5e08a4ae-50ef-4cf9-97a8-bc09c1896afb\") " Nov 29 07:42:17 crc kubenswrapper[4731]: I1129 07:42:17.285445 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e08a4ae-50ef-4cf9-97a8-bc09c1896afb-ovn-combined-ca-bundle\") pod \"5e08a4ae-50ef-4cf9-97a8-bc09c1896afb\" (UID: \"5e08a4ae-50ef-4cf9-97a8-bc09c1896afb\") " Nov 29 07:42:17 crc kubenswrapper[4731]: I1129 07:42:17.285462 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e08a4ae-50ef-4cf9-97a8-bc09c1896afb-libvirt-combined-ca-bundle\") pod \"5e08a4ae-50ef-4cf9-97a8-bc09c1896afb\" (UID: \"5e08a4ae-50ef-4cf9-97a8-bc09c1896afb\") " Nov 29 07:42:17 crc kubenswrapper[4731]: I1129 07:42:17.285495 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4rvhd\" (UniqueName: \"kubernetes.io/projected/5e08a4ae-50ef-4cf9-97a8-bc09c1896afb-kube-api-access-4rvhd\") pod \"5e08a4ae-50ef-4cf9-97a8-bc09c1896afb\" (UID: \"5e08a4ae-50ef-4cf9-97a8-bc09c1896afb\") " Nov 29 07:42:17 crc kubenswrapper[4731]: I1129 07:42:17.285557 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5e08a4ae-50ef-4cf9-97a8-bc09c1896afb-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"5e08a4ae-50ef-4cf9-97a8-bc09c1896afb\" (UID: \"5e08a4ae-50ef-4cf9-97a8-bc09c1896afb\") " Nov 29 07:42:17 crc kubenswrapper[4731]: I1129 07:42:17.285594 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5e08a4ae-50ef-4cf9-97a8-bc09c1896afb-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"5e08a4ae-50ef-4cf9-97a8-bc09c1896afb\" (UID: \"5e08a4ae-50ef-4cf9-97a8-bc09c1896afb\") " Nov 29 07:42:17 crc kubenswrapper[4731]: I1129 07:42:17.285644 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e08a4ae-50ef-4cf9-97a8-bc09c1896afb-telemetry-combined-ca-bundle\") pod \"5e08a4ae-50ef-4cf9-97a8-bc09c1896afb\" (UID: \"5e08a4ae-50ef-4cf9-97a8-bc09c1896afb\") " Nov 29 07:42:17 crc kubenswrapper[4731]: I1129 07:42:17.285667 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e08a4ae-50ef-4cf9-97a8-bc09c1896afb-nova-combined-ca-bundle\") pod \"5e08a4ae-50ef-4cf9-97a8-bc09c1896afb\" (UID: \"5e08a4ae-50ef-4cf9-97a8-bc09c1896afb\") " Nov 29 07:42:17 crc kubenswrapper[4731]: I1129 07:42:17.292508 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e08a4ae-50ef-4cf9-97a8-bc09c1896afb-kube-api-access-4rvhd" (OuterVolumeSpecName: "kube-api-access-4rvhd") pod "5e08a4ae-50ef-4cf9-97a8-bc09c1896afb" (UID: "5e08a4ae-50ef-4cf9-97a8-bc09c1896afb"). InnerVolumeSpecName "kube-api-access-4rvhd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:42:17 crc kubenswrapper[4731]: I1129 07:42:17.293039 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e08a4ae-50ef-4cf9-97a8-bc09c1896afb-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "5e08a4ae-50ef-4cf9-97a8-bc09c1896afb" (UID: "5e08a4ae-50ef-4cf9-97a8-bc09c1896afb"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:42:17 crc kubenswrapper[4731]: I1129 07:42:17.297012 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e08a4ae-50ef-4cf9-97a8-bc09c1896afb-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "5e08a4ae-50ef-4cf9-97a8-bc09c1896afb" (UID: "5e08a4ae-50ef-4cf9-97a8-bc09c1896afb"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:42:17 crc kubenswrapper[4731]: I1129 07:42:17.297694 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e08a4ae-50ef-4cf9-97a8-bc09c1896afb-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "5e08a4ae-50ef-4cf9-97a8-bc09c1896afb" (UID: "5e08a4ae-50ef-4cf9-97a8-bc09c1896afb"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:42:17 crc kubenswrapper[4731]: I1129 07:42:17.298471 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e08a4ae-50ef-4cf9-97a8-bc09c1896afb-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "5e08a4ae-50ef-4cf9-97a8-bc09c1896afb" (UID: "5e08a4ae-50ef-4cf9-97a8-bc09c1896afb"). InnerVolumeSpecName "libvirt-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:42:17 crc kubenswrapper[4731]: I1129 07:42:17.298930 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e08a4ae-50ef-4cf9-97a8-bc09c1896afb-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "5e08a4ae-50ef-4cf9-97a8-bc09c1896afb" (UID: "5e08a4ae-50ef-4cf9-97a8-bc09c1896afb"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:42:17 crc kubenswrapper[4731]: I1129 07:42:17.298933 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e08a4ae-50ef-4cf9-97a8-bc09c1896afb-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "5e08a4ae-50ef-4cf9-97a8-bc09c1896afb" (UID: "5e08a4ae-50ef-4cf9-97a8-bc09c1896afb"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:42:17 crc kubenswrapper[4731]: I1129 07:42:17.300164 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e08a4ae-50ef-4cf9-97a8-bc09c1896afb-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "5e08a4ae-50ef-4cf9-97a8-bc09c1896afb" (UID: "5e08a4ae-50ef-4cf9-97a8-bc09c1896afb"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:42:17 crc kubenswrapper[4731]: I1129 07:42:17.300263 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e08a4ae-50ef-4cf9-97a8-bc09c1896afb-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "5e08a4ae-50ef-4cf9-97a8-bc09c1896afb" (UID: "5e08a4ae-50ef-4cf9-97a8-bc09c1896afb"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:42:17 crc kubenswrapper[4731]: I1129 07:42:17.307624 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e08a4ae-50ef-4cf9-97a8-bc09c1896afb-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "5e08a4ae-50ef-4cf9-97a8-bc09c1896afb" (UID: "5e08a4ae-50ef-4cf9-97a8-bc09c1896afb"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:42:17 crc kubenswrapper[4731]: I1129 07:42:17.307757 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e08a4ae-50ef-4cf9-97a8-bc09c1896afb-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "5e08a4ae-50ef-4cf9-97a8-bc09c1896afb" (UID: "5e08a4ae-50ef-4cf9-97a8-bc09c1896afb"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:42:17 crc kubenswrapper[4731]: I1129 07:42:17.308535 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e08a4ae-50ef-4cf9-97a8-bc09c1896afb-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "5e08a4ae-50ef-4cf9-97a8-bc09c1896afb" (UID: "5e08a4ae-50ef-4cf9-97a8-bc09c1896afb"). InnerVolumeSpecName "ovn-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:42:17 crc kubenswrapper[4731]: I1129 07:42:17.328020 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e08a4ae-50ef-4cf9-97a8-bc09c1896afb-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "5e08a4ae-50ef-4cf9-97a8-bc09c1896afb" (UID: "5e08a4ae-50ef-4cf9-97a8-bc09c1896afb"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:42:17 crc kubenswrapper[4731]: I1129 07:42:17.335341 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e08a4ae-50ef-4cf9-97a8-bc09c1896afb-inventory" (OuterVolumeSpecName: "inventory") pod "5e08a4ae-50ef-4cf9-97a8-bc09c1896afb" (UID: "5e08a4ae-50ef-4cf9-97a8-bc09c1896afb"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:42:17 crc kubenswrapper[4731]: I1129 07:42:17.388614 4731 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5e08a4ae-50ef-4cf9-97a8-bc09c1896afb-inventory\") on node \"crc\" DevicePath \"\"" Nov 29 07:42:17 crc kubenswrapper[4731]: I1129 07:42:17.388651 4731 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5e08a4ae-50ef-4cf9-97a8-bc09c1896afb-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 29 07:42:17 crc kubenswrapper[4731]: I1129 07:42:17.388670 4731 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e08a4ae-50ef-4cf9-97a8-bc09c1896afb-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:42:17 crc kubenswrapper[4731]: I1129 07:42:17.388687 4731 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/5e08a4ae-50ef-4cf9-97a8-bc09c1896afb-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:42:17 crc kubenswrapper[4731]: I1129 07:42:17.388698 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4rvhd\" (UniqueName: \"kubernetes.io/projected/5e08a4ae-50ef-4cf9-97a8-bc09c1896afb-kube-api-access-4rvhd\") on node \"crc\" DevicePath \"\"" Nov 29 07:42:17 crc kubenswrapper[4731]: I1129 07:42:17.388740 4731 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5e08a4ae-50ef-4cf9-97a8-bc09c1896afb-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 29 07:42:17 crc kubenswrapper[4731]: I1129 07:42:17.388756 4731 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5e08a4ae-50ef-4cf9-97a8-bc09c1896afb-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 29 07:42:17 crc kubenswrapper[4731]: I1129 07:42:17.388768 4731 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e08a4ae-50ef-4cf9-97a8-bc09c1896afb-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:42:17 crc kubenswrapper[4731]: I1129 07:42:17.388783 4731 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e08a4ae-50ef-4cf9-97a8-bc09c1896afb-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:42:17 crc kubenswrapper[4731]: I1129 07:42:17.388819 4731 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e08a4ae-50ef-4cf9-97a8-bc09c1896afb-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:42:17 crc kubenswrapper[4731]: I1129 
07:42:17.388831 4731 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5e08a4ae-50ef-4cf9-97a8-bc09c1896afb-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 29 07:42:17 crc kubenswrapper[4731]: I1129 07:42:17.388845 4731 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e08a4ae-50ef-4cf9-97a8-bc09c1896afb-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:42:17 crc kubenswrapper[4731]: I1129 07:42:17.388857 4731 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e08a4ae-50ef-4cf9-97a8-bc09c1896afb-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:42:17 crc kubenswrapper[4731]: I1129 07:42:17.388869 4731 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5e08a4ae-50ef-4cf9-97a8-bc09c1896afb-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 29 07:42:17 crc kubenswrapper[4731]: I1129 07:42:17.772782 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-r9zqx" event={"ID":"5e08a4ae-50ef-4cf9-97a8-bc09c1896afb","Type":"ContainerDied","Data":"63902da735981e864223c81900f15c8fa0068ded566e3e5f9a4bdb13b06c147f"} Nov 29 07:42:17 crc kubenswrapper[4731]: I1129 07:42:17.772842 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63902da735981e864223c81900f15c8fa0068ded566e3e5f9a4bdb13b06c147f" Nov 29 07:42:17 crc kubenswrapper[4731]: I1129 07:42:17.772906 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-r9zqx" Nov 29 07:42:17 crc kubenswrapper[4731]: I1129 07:42:17.894683 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-9dtdg"] Nov 29 07:42:17 crc kubenswrapper[4731]: E1129 07:42:17.895263 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e08a4ae-50ef-4cf9-97a8-bc09c1896afb" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 29 07:42:17 crc kubenswrapper[4731]: I1129 07:42:17.895293 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e08a4ae-50ef-4cf9-97a8-bc09c1896afb" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 29 07:42:17 crc kubenswrapper[4731]: I1129 07:42:17.896320 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e08a4ae-50ef-4cf9-97a8-bc09c1896afb" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 29 07:42:17 crc kubenswrapper[4731]: I1129 07:42:17.897516 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9dtdg" Nov 29 07:42:17 crc kubenswrapper[4731]: I1129 07:42:17.910420 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 29 07:42:17 crc kubenswrapper[4731]: I1129 07:42:17.910818 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 29 07:42:17 crc kubenswrapper[4731]: I1129 07:42:17.910941 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Nov 29 07:42:17 crc kubenswrapper[4731]: I1129 07:42:17.911063 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 29 07:42:17 crc kubenswrapper[4731]: I1129 07:42:17.911260 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nvl6q" Nov 29 07:42:17 crc kubenswrapper[4731]: I1129 07:42:17.923814 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-9dtdg"] Nov 29 07:42:18 crc kubenswrapper[4731]: I1129 07:42:18.001515 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/8999dce1-1af7-47d6-95cc-a19af53ce54a-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9dtdg\" (UID: \"8999dce1-1af7-47d6-95cc-a19af53ce54a\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9dtdg" Nov 29 07:42:18 crc kubenswrapper[4731]: I1129 07:42:18.002107 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8999dce1-1af7-47d6-95cc-a19af53ce54a-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9dtdg\" (UID: 
\"8999dce1-1af7-47d6-95cc-a19af53ce54a\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9dtdg" Nov 29 07:42:18 crc kubenswrapper[4731]: I1129 07:42:18.002206 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8999dce1-1af7-47d6-95cc-a19af53ce54a-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9dtdg\" (UID: \"8999dce1-1af7-47d6-95cc-a19af53ce54a\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9dtdg" Nov 29 07:42:18 crc kubenswrapper[4731]: I1129 07:42:18.002245 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8999dce1-1af7-47d6-95cc-a19af53ce54a-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9dtdg\" (UID: \"8999dce1-1af7-47d6-95cc-a19af53ce54a\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9dtdg" Nov 29 07:42:18 crc kubenswrapper[4731]: I1129 07:42:18.002306 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnmpv\" (UniqueName: \"kubernetes.io/projected/8999dce1-1af7-47d6-95cc-a19af53ce54a-kube-api-access-lnmpv\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9dtdg\" (UID: \"8999dce1-1af7-47d6-95cc-a19af53ce54a\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9dtdg" Nov 29 07:42:18 crc kubenswrapper[4731]: I1129 07:42:18.104320 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnmpv\" (UniqueName: \"kubernetes.io/projected/8999dce1-1af7-47d6-95cc-a19af53ce54a-kube-api-access-lnmpv\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9dtdg\" (UID: \"8999dce1-1af7-47d6-95cc-a19af53ce54a\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9dtdg" Nov 29 07:42:18 crc kubenswrapper[4731]: I1129 07:42:18.104425 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/8999dce1-1af7-47d6-95cc-a19af53ce54a-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9dtdg\" (UID: \"8999dce1-1af7-47d6-95cc-a19af53ce54a\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9dtdg" Nov 29 07:42:18 crc kubenswrapper[4731]: I1129 07:42:18.104577 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8999dce1-1af7-47d6-95cc-a19af53ce54a-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9dtdg\" (UID: \"8999dce1-1af7-47d6-95cc-a19af53ce54a\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9dtdg" Nov 29 07:42:18 crc kubenswrapper[4731]: I1129 07:42:18.104659 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8999dce1-1af7-47d6-95cc-a19af53ce54a-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9dtdg\" (UID: \"8999dce1-1af7-47d6-95cc-a19af53ce54a\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9dtdg" Nov 29 07:42:18 crc kubenswrapper[4731]: I1129 07:42:18.104699 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8999dce1-1af7-47d6-95cc-a19af53ce54a-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9dtdg\" (UID: \"8999dce1-1af7-47d6-95cc-a19af53ce54a\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9dtdg" Nov 29 07:42:18 crc kubenswrapper[4731]: I1129 07:42:18.105722 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/8999dce1-1af7-47d6-95cc-a19af53ce54a-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9dtdg\" (UID: \"8999dce1-1af7-47d6-95cc-a19af53ce54a\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9dtdg" Nov 29 07:42:18 crc 
kubenswrapper[4731]: I1129 07:42:18.110155 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8999dce1-1af7-47d6-95cc-a19af53ce54a-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9dtdg\" (UID: \"8999dce1-1af7-47d6-95cc-a19af53ce54a\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9dtdg" Nov 29 07:42:18 crc kubenswrapper[4731]: I1129 07:42:18.110169 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8999dce1-1af7-47d6-95cc-a19af53ce54a-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9dtdg\" (UID: \"8999dce1-1af7-47d6-95cc-a19af53ce54a\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9dtdg" Nov 29 07:42:18 crc kubenswrapper[4731]: I1129 07:42:18.110210 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8999dce1-1af7-47d6-95cc-a19af53ce54a-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9dtdg\" (UID: \"8999dce1-1af7-47d6-95cc-a19af53ce54a\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9dtdg" Nov 29 07:42:18 crc kubenswrapper[4731]: I1129 07:42:18.124946 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnmpv\" (UniqueName: \"kubernetes.io/projected/8999dce1-1af7-47d6-95cc-a19af53ce54a-kube-api-access-lnmpv\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9dtdg\" (UID: \"8999dce1-1af7-47d6-95cc-a19af53ce54a\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9dtdg" Nov 29 07:42:18 crc kubenswrapper[4731]: I1129 07:42:18.221693 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9dtdg" Nov 29 07:42:18 crc kubenswrapper[4731]: I1129 07:42:18.724917 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-9dtdg"] Nov 29 07:42:18 crc kubenswrapper[4731]: I1129 07:42:18.783467 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9dtdg" event={"ID":"8999dce1-1af7-47d6-95cc-a19af53ce54a","Type":"ContainerStarted","Data":"dcd61451772fa063a36c311f1f7c71e02fb3aab51a1ad01e6fb936455b24229a"} Nov 29 07:42:19 crc kubenswrapper[4731]: I1129 07:42:19.796251 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9dtdg" event={"ID":"8999dce1-1af7-47d6-95cc-a19af53ce54a","Type":"ContainerStarted","Data":"451502f07027c4733389e91bc3c8e42854ce9cbeb381e4dfec5791186666b7d3"} Nov 29 07:42:19 crc kubenswrapper[4731]: I1129 07:42:19.847230 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9dtdg" podStartSLOduration=2.29396611 podStartE2EDuration="2.847198845s" podCreationTimestamp="2025-11-29 07:42:17 +0000 UTC" firstStartedPulling="2025-11-29 07:42:18.734725202 +0000 UTC m=+2177.625086295" lastFinishedPulling="2025-11-29 07:42:19.287957927 +0000 UTC m=+2178.178319030" observedRunningTime="2025-11-29 07:42:19.820279444 +0000 UTC m=+2178.710640537" watchObservedRunningTime="2025-11-29 07:42:19.847198845 +0000 UTC m=+2178.737559948" Nov 29 07:42:33 crc kubenswrapper[4731]: I1129 07:42:33.004775 4731 patch_prober.go:28] interesting pod/machine-config-daemon-rscr8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:42:33 crc kubenswrapper[4731]: I1129 07:42:33.005698 4731 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:43:03 crc kubenswrapper[4731]: I1129 07:43:03.003182 4731 patch_prober.go:28] interesting pod/machine-config-daemon-rscr8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:43:03 crc kubenswrapper[4731]: I1129 07:43:03.004124 4731 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:43:03 crc kubenswrapper[4731]: I1129 07:43:03.004203 4731 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" Nov 29 07:43:03 crc kubenswrapper[4731]: I1129 07:43:03.005561 4731 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c07780461e13a10787c87bd71e6695815f5734928eb2b4fd862416cf85f74e8b"} pod="openshift-machine-config-operator/machine-config-daemon-rscr8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 29 07:43:03 crc kubenswrapper[4731]: I1129 07:43:03.005668 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" 
containerName="machine-config-daemon" containerID="cri-o://c07780461e13a10787c87bd71e6695815f5734928eb2b4fd862416cf85f74e8b" gracePeriod=600 Nov 29 07:43:03 crc kubenswrapper[4731]: E1129 07:43:03.137265 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" Nov 29 07:43:03 crc kubenswrapper[4731]: I1129 07:43:03.272544 4731 generic.go:334] "Generic (PLEG): container finished" podID="2302dbb7-38db-4752-a5d0-2d055da3aec3" containerID="c07780461e13a10787c87bd71e6695815f5734928eb2b4fd862416cf85f74e8b" exitCode=0 Nov 29 07:43:03 crc kubenswrapper[4731]: I1129 07:43:03.272615 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" event={"ID":"2302dbb7-38db-4752-a5d0-2d055da3aec3","Type":"ContainerDied","Data":"c07780461e13a10787c87bd71e6695815f5734928eb2b4fd862416cf85f74e8b"} Nov 29 07:43:03 crc kubenswrapper[4731]: I1129 07:43:03.272738 4731 scope.go:117] "RemoveContainer" containerID="fea8fd5b340206f2b38d570102ab425e9491bb5208055282d97c11b2fcd67d4e" Nov 29 07:43:03 crc kubenswrapper[4731]: I1129 07:43:03.273583 4731 scope.go:117] "RemoveContainer" containerID="c07780461e13a10787c87bd71e6695815f5734928eb2b4fd862416cf85f74e8b" Nov 29 07:43:03 crc kubenswrapper[4731]: E1129 07:43:03.274073 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" Nov 29 07:43:16 crc kubenswrapper[4731]: I1129 07:43:16.807493 4731 scope.go:117] "RemoveContainer" containerID="c07780461e13a10787c87bd71e6695815f5734928eb2b4fd862416cf85f74e8b" Nov 29 07:43:16 crc kubenswrapper[4731]: E1129 07:43:16.808336 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" Nov 29 07:43:29 crc kubenswrapper[4731]: I1129 07:43:29.807626 4731 scope.go:117] "RemoveContainer" containerID="c07780461e13a10787c87bd71e6695815f5734928eb2b4fd862416cf85f74e8b" Nov 29 07:43:29 crc kubenswrapper[4731]: E1129 07:43:29.808627 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" Nov 29 07:43:30 crc kubenswrapper[4731]: I1129 07:43:30.576314 4731 generic.go:334] "Generic (PLEG): container finished" podID="8999dce1-1af7-47d6-95cc-a19af53ce54a" containerID="451502f07027c4733389e91bc3c8e42854ce9cbeb381e4dfec5791186666b7d3" exitCode=0 Nov 29 07:43:30 crc kubenswrapper[4731]: I1129 07:43:30.576518 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9dtdg" 
event={"ID":"8999dce1-1af7-47d6-95cc-a19af53ce54a","Type":"ContainerDied","Data":"451502f07027c4733389e91bc3c8e42854ce9cbeb381e4dfec5791186666b7d3"} Nov 29 07:43:32 crc kubenswrapper[4731]: I1129 07:43:32.055855 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9dtdg" Nov 29 07:43:32 crc kubenswrapper[4731]: I1129 07:43:32.189260 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8999dce1-1af7-47d6-95cc-a19af53ce54a-inventory\") pod \"8999dce1-1af7-47d6-95cc-a19af53ce54a\" (UID: \"8999dce1-1af7-47d6-95cc-a19af53ce54a\") " Nov 29 07:43:32 crc kubenswrapper[4731]: I1129 07:43:32.189351 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lnmpv\" (UniqueName: \"kubernetes.io/projected/8999dce1-1af7-47d6-95cc-a19af53ce54a-kube-api-access-lnmpv\") pod \"8999dce1-1af7-47d6-95cc-a19af53ce54a\" (UID: \"8999dce1-1af7-47d6-95cc-a19af53ce54a\") " Nov 29 07:43:32 crc kubenswrapper[4731]: I1129 07:43:32.189405 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/8999dce1-1af7-47d6-95cc-a19af53ce54a-ovncontroller-config-0\") pod \"8999dce1-1af7-47d6-95cc-a19af53ce54a\" (UID: \"8999dce1-1af7-47d6-95cc-a19af53ce54a\") " Nov 29 07:43:32 crc kubenswrapper[4731]: I1129 07:43:32.189487 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8999dce1-1af7-47d6-95cc-a19af53ce54a-ovn-combined-ca-bundle\") pod \"8999dce1-1af7-47d6-95cc-a19af53ce54a\" (UID: \"8999dce1-1af7-47d6-95cc-a19af53ce54a\") " Nov 29 07:43:32 crc kubenswrapper[4731]: I1129 07:43:32.189641 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/8999dce1-1af7-47d6-95cc-a19af53ce54a-ssh-key\") pod \"8999dce1-1af7-47d6-95cc-a19af53ce54a\" (UID: \"8999dce1-1af7-47d6-95cc-a19af53ce54a\") " Nov 29 07:43:32 crc kubenswrapper[4731]: I1129 07:43:32.200863 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8999dce1-1af7-47d6-95cc-a19af53ce54a-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "8999dce1-1af7-47d6-95cc-a19af53ce54a" (UID: "8999dce1-1af7-47d6-95cc-a19af53ce54a"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:43:32 crc kubenswrapper[4731]: I1129 07:43:32.208819 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8999dce1-1af7-47d6-95cc-a19af53ce54a-kube-api-access-lnmpv" (OuterVolumeSpecName: "kube-api-access-lnmpv") pod "8999dce1-1af7-47d6-95cc-a19af53ce54a" (UID: "8999dce1-1af7-47d6-95cc-a19af53ce54a"). InnerVolumeSpecName "kube-api-access-lnmpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:43:32 crc kubenswrapper[4731]: I1129 07:43:32.245865 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8999dce1-1af7-47d6-95cc-a19af53ce54a-inventory" (OuterVolumeSpecName: "inventory") pod "8999dce1-1af7-47d6-95cc-a19af53ce54a" (UID: "8999dce1-1af7-47d6-95cc-a19af53ce54a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:43:32 crc kubenswrapper[4731]: I1129 07:43:32.266419 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8999dce1-1af7-47d6-95cc-a19af53ce54a-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "8999dce1-1af7-47d6-95cc-a19af53ce54a" (UID: "8999dce1-1af7-47d6-95cc-a19af53ce54a"). InnerVolumeSpecName "ovncontroller-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:43:32 crc kubenswrapper[4731]: I1129 07:43:32.278958 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8999dce1-1af7-47d6-95cc-a19af53ce54a-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "8999dce1-1af7-47d6-95cc-a19af53ce54a" (UID: "8999dce1-1af7-47d6-95cc-a19af53ce54a"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:43:32 crc kubenswrapper[4731]: I1129 07:43:32.292341 4731 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/8999dce1-1af7-47d6-95cc-a19af53ce54a-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Nov 29 07:43:32 crc kubenswrapper[4731]: I1129 07:43:32.292422 4731 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8999dce1-1af7-47d6-95cc-a19af53ce54a-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:43:32 crc kubenswrapper[4731]: I1129 07:43:32.292438 4731 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8999dce1-1af7-47d6-95cc-a19af53ce54a-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 29 07:43:32 crc kubenswrapper[4731]: I1129 07:43:32.292449 4731 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8999dce1-1af7-47d6-95cc-a19af53ce54a-inventory\") on node \"crc\" DevicePath \"\"" Nov 29 07:43:32 crc kubenswrapper[4731]: I1129 07:43:32.292500 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lnmpv\" (UniqueName: \"kubernetes.io/projected/8999dce1-1af7-47d6-95cc-a19af53ce54a-kube-api-access-lnmpv\") on node \"crc\" DevicePath \"\"" Nov 29 07:43:32 crc kubenswrapper[4731]: I1129 07:43:32.595144 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9dtdg" event={"ID":"8999dce1-1af7-47d6-95cc-a19af53ce54a","Type":"ContainerDied","Data":"dcd61451772fa063a36c311f1f7c71e02fb3aab51a1ad01e6fb936455b24229a"} Nov 29 07:43:32 crc kubenswrapper[4731]: I1129 07:43:32.595555 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dcd61451772fa063a36c311f1f7c71e02fb3aab51a1ad01e6fb936455b24229a" Nov 29 07:43:32 crc kubenswrapper[4731]: I1129 07:43:32.595198 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9dtdg" Nov 29 07:43:32 crc kubenswrapper[4731]: I1129 07:43:32.828404 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fbzsb"] Nov 29 07:43:32 crc kubenswrapper[4731]: E1129 07:43:32.828860 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8999dce1-1af7-47d6-95cc-a19af53ce54a" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Nov 29 07:43:32 crc kubenswrapper[4731]: I1129 07:43:32.828875 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="8999dce1-1af7-47d6-95cc-a19af53ce54a" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Nov 29 07:43:32 crc kubenswrapper[4731]: I1129 07:43:32.829058 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="8999dce1-1af7-47d6-95cc-a19af53ce54a" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Nov 29 07:43:32 crc kubenswrapper[4731]: I1129 07:43:32.829776 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fbzsb" Nov 29 07:43:32 crc kubenswrapper[4731]: I1129 07:43:32.832124 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 29 07:43:32 crc kubenswrapper[4731]: I1129 07:43:32.833209 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nvl6q" Nov 29 07:43:32 crc kubenswrapper[4731]: I1129 07:43:32.833423 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 29 07:43:32 crc kubenswrapper[4731]: I1129 07:43:32.833728 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Nov 29 07:43:32 crc kubenswrapper[4731]: I1129 07:43:32.834155 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 29 07:43:32 crc kubenswrapper[4731]: I1129 07:43:32.838098 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Nov 29 07:43:32 crc kubenswrapper[4731]: I1129 07:43:32.848285 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fbzsb"] Nov 29 07:43:32 crc kubenswrapper[4731]: I1129 07:43:32.904133 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5d5a510b-d31d-4cf3-91dd-5e8c0066d6ed-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fbzsb\" (UID: \"5d5a510b-d31d-4cf3-91dd-5e8c0066d6ed\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fbzsb" Nov 29 07:43:32 crc kubenswrapper[4731]: I1129 07:43:32.904193 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/5d5a510b-d31d-4cf3-91dd-5e8c0066d6ed-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fbzsb\" (UID: \"5d5a510b-d31d-4cf3-91dd-5e8c0066d6ed\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fbzsb" Nov 29 07:43:32 crc kubenswrapper[4731]: I1129 07:43:32.904230 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqgqs\" (UniqueName: \"kubernetes.io/projected/5d5a510b-d31d-4cf3-91dd-5e8c0066d6ed-kube-api-access-xqgqs\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fbzsb\" (UID: \"5d5a510b-d31d-4cf3-91dd-5e8c0066d6ed\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fbzsb" Nov 29 07:43:32 crc kubenswrapper[4731]: I1129 07:43:32.904873 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5d5a510b-d31d-4cf3-91dd-5e8c0066d6ed-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fbzsb\" (UID: \"5d5a510b-d31d-4cf3-91dd-5e8c0066d6ed\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fbzsb" Nov 29 07:43:32 crc kubenswrapper[4731]: I1129 07:43:32.904963 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d5a510b-d31d-4cf3-91dd-5e8c0066d6ed-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fbzsb\" (UID: \"5d5a510b-d31d-4cf3-91dd-5e8c0066d6ed\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fbzsb" Nov 29 07:43:32 crc kubenswrapper[4731]: I1129 07:43:32.905005 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: 
\"kubernetes.io/secret/5d5a510b-d31d-4cf3-91dd-5e8c0066d6ed-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fbzsb\" (UID: \"5d5a510b-d31d-4cf3-91dd-5e8c0066d6ed\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fbzsb" Nov 29 07:43:33 crc kubenswrapper[4731]: I1129 07:43:33.007423 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5d5a510b-d31d-4cf3-91dd-5e8c0066d6ed-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fbzsb\" (UID: \"5d5a510b-d31d-4cf3-91dd-5e8c0066d6ed\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fbzsb" Nov 29 07:43:33 crc kubenswrapper[4731]: I1129 07:43:33.007490 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d5a510b-d31d-4cf3-91dd-5e8c0066d6ed-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fbzsb\" (UID: \"5d5a510b-d31d-4cf3-91dd-5e8c0066d6ed\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fbzsb" Nov 29 07:43:33 crc kubenswrapper[4731]: I1129 07:43:33.007528 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/5d5a510b-d31d-4cf3-91dd-5e8c0066d6ed-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fbzsb\" (UID: \"5d5a510b-d31d-4cf3-91dd-5e8c0066d6ed\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fbzsb" Nov 29 07:43:33 crc kubenswrapper[4731]: I1129 07:43:33.007678 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5d5a510b-d31d-4cf3-91dd-5e8c0066d6ed-inventory\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fbzsb\" (UID: \"5d5a510b-d31d-4cf3-91dd-5e8c0066d6ed\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fbzsb" Nov 29 07:43:33 crc kubenswrapper[4731]: I1129 07:43:33.007723 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/5d5a510b-d31d-4cf3-91dd-5e8c0066d6ed-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fbzsb\" (UID: \"5d5a510b-d31d-4cf3-91dd-5e8c0066d6ed\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fbzsb" Nov 29 07:43:33 crc kubenswrapper[4731]: I1129 07:43:33.007763 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqgqs\" (UniqueName: \"kubernetes.io/projected/5d5a510b-d31d-4cf3-91dd-5e8c0066d6ed-kube-api-access-xqgqs\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fbzsb\" (UID: \"5d5a510b-d31d-4cf3-91dd-5e8c0066d6ed\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fbzsb" Nov 29 07:43:33 crc kubenswrapper[4731]: I1129 07:43:33.014598 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/5d5a510b-d31d-4cf3-91dd-5e8c0066d6ed-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fbzsb\" (UID: \"5d5a510b-d31d-4cf3-91dd-5e8c0066d6ed\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fbzsb" Nov 29 07:43:33 crc kubenswrapper[4731]: I1129 07:43:33.014881 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d5a510b-d31d-4cf3-91dd-5e8c0066d6ed-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fbzsb\" (UID: 
\"5d5a510b-d31d-4cf3-91dd-5e8c0066d6ed\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fbzsb" Nov 29 07:43:33 crc kubenswrapper[4731]: I1129 07:43:33.015627 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/5d5a510b-d31d-4cf3-91dd-5e8c0066d6ed-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fbzsb\" (UID: \"5d5a510b-d31d-4cf3-91dd-5e8c0066d6ed\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fbzsb" Nov 29 07:43:33 crc kubenswrapper[4731]: I1129 07:43:33.015667 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5d5a510b-d31d-4cf3-91dd-5e8c0066d6ed-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fbzsb\" (UID: \"5d5a510b-d31d-4cf3-91dd-5e8c0066d6ed\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fbzsb" Nov 29 07:43:33 crc kubenswrapper[4731]: I1129 07:43:33.027296 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5d5a510b-d31d-4cf3-91dd-5e8c0066d6ed-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fbzsb\" (UID: \"5d5a510b-d31d-4cf3-91dd-5e8c0066d6ed\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fbzsb" Nov 29 07:43:33 crc kubenswrapper[4731]: I1129 07:43:33.042585 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqgqs\" (UniqueName: \"kubernetes.io/projected/5d5a510b-d31d-4cf3-91dd-5e8c0066d6ed-kube-api-access-xqgqs\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fbzsb\" (UID: \"5d5a510b-d31d-4cf3-91dd-5e8c0066d6ed\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fbzsb" Nov 29 07:43:33 crc kubenswrapper[4731]: I1129 07:43:33.156688 4731 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fbzsb" Nov 29 07:43:33 crc kubenswrapper[4731]: I1129 07:43:33.515431 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fbzsb"] Nov 29 07:43:33 crc kubenswrapper[4731]: I1129 07:43:33.604813 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fbzsb" event={"ID":"5d5a510b-d31d-4cf3-91dd-5e8c0066d6ed","Type":"ContainerStarted","Data":"8fdb8d737900cd59586189843f4baaeef84b2d9944bd8875b3072b8372d81753"} Nov 29 07:43:34 crc kubenswrapper[4731]: I1129 07:43:34.632646 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fbzsb" event={"ID":"5d5a510b-d31d-4cf3-91dd-5e8c0066d6ed","Type":"ContainerStarted","Data":"6f4fce238a9e108c85a1da08f7d8e28741f8336f3ac24ad938482e43760a16e6"} Nov 29 07:43:34 crc kubenswrapper[4731]: I1129 07:43:34.659395 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fbzsb" podStartSLOduration=2.188554762 podStartE2EDuration="2.659368458s" podCreationTimestamp="2025-11-29 07:43:32 +0000 UTC" firstStartedPulling="2025-11-29 07:43:33.522900528 +0000 UTC m=+2252.413261631" lastFinishedPulling="2025-11-29 07:43:33.993714224 +0000 UTC m=+2252.884075327" observedRunningTime="2025-11-29 07:43:34.651781281 +0000 UTC m=+2253.542142384" watchObservedRunningTime="2025-11-29 07:43:34.659368458 +0000 UTC m=+2253.549729561" Nov 29 07:43:41 crc kubenswrapper[4731]: I1129 07:43:41.815201 4731 scope.go:117] "RemoveContainer" containerID="c07780461e13a10787c87bd71e6695815f5734928eb2b4fd862416cf85f74e8b" Nov 29 07:43:41 crc kubenswrapper[4731]: E1129 07:43:41.816137 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" Nov 29 07:43:47 crc kubenswrapper[4731]: I1129 07:43:47.820739 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-cfcf7"] Nov 29 07:43:47 crc kubenswrapper[4731]: I1129 07:43:47.824821 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cfcf7" Nov 29 07:43:47 crc kubenswrapper[4731]: I1129 07:43:47.840641 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cfcf7"] Nov 29 07:43:47 crc kubenswrapper[4731]: I1129 07:43:47.947371 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f98h7\" (UniqueName: \"kubernetes.io/projected/db713681-3e6e-42f3-83ce-fbf516f84df5-kube-api-access-f98h7\") pod \"certified-operators-cfcf7\" (UID: \"db713681-3e6e-42f3-83ce-fbf516f84df5\") " pod="openshift-marketplace/certified-operators-cfcf7" Nov 29 07:43:47 crc kubenswrapper[4731]: I1129 07:43:47.947531 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db713681-3e6e-42f3-83ce-fbf516f84df5-catalog-content\") pod \"certified-operators-cfcf7\" (UID: \"db713681-3e6e-42f3-83ce-fbf516f84df5\") " pod="openshift-marketplace/certified-operators-cfcf7" Nov 29 07:43:47 crc kubenswrapper[4731]: I1129 07:43:47.947690 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/db713681-3e6e-42f3-83ce-fbf516f84df5-utilities\") pod \"certified-operators-cfcf7\" (UID: \"db713681-3e6e-42f3-83ce-fbf516f84df5\") " pod="openshift-marketplace/certified-operators-cfcf7" Nov 29 07:43:48 crc kubenswrapper[4731]: I1129 07:43:48.049346 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db713681-3e6e-42f3-83ce-fbf516f84df5-utilities\") pod \"certified-operators-cfcf7\" (UID: \"db713681-3e6e-42f3-83ce-fbf516f84df5\") " pod="openshift-marketplace/certified-operators-cfcf7" Nov 29 07:43:48 crc kubenswrapper[4731]: I1129 07:43:48.049482 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f98h7\" (UniqueName: \"kubernetes.io/projected/db713681-3e6e-42f3-83ce-fbf516f84df5-kube-api-access-f98h7\") pod \"certified-operators-cfcf7\" (UID: \"db713681-3e6e-42f3-83ce-fbf516f84df5\") " pod="openshift-marketplace/certified-operators-cfcf7" Nov 29 07:43:48 crc kubenswrapper[4731]: I1129 07:43:48.049626 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db713681-3e6e-42f3-83ce-fbf516f84df5-catalog-content\") pod \"certified-operators-cfcf7\" (UID: \"db713681-3e6e-42f3-83ce-fbf516f84df5\") " pod="openshift-marketplace/certified-operators-cfcf7" Nov 29 07:43:48 crc kubenswrapper[4731]: I1129 07:43:48.049957 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db713681-3e6e-42f3-83ce-fbf516f84df5-utilities\") pod \"certified-operators-cfcf7\" (UID: \"db713681-3e6e-42f3-83ce-fbf516f84df5\") " pod="openshift-marketplace/certified-operators-cfcf7" Nov 29 07:43:48 crc kubenswrapper[4731]: I1129 07:43:48.050680 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/db713681-3e6e-42f3-83ce-fbf516f84df5-catalog-content\") pod \"certified-operators-cfcf7\" (UID: \"db713681-3e6e-42f3-83ce-fbf516f84df5\") " pod="openshift-marketplace/certified-operators-cfcf7" Nov 29 07:43:48 crc kubenswrapper[4731]: I1129 07:43:48.070489 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f98h7\" (UniqueName: \"kubernetes.io/projected/db713681-3e6e-42f3-83ce-fbf516f84df5-kube-api-access-f98h7\") pod \"certified-operators-cfcf7\" (UID: \"db713681-3e6e-42f3-83ce-fbf516f84df5\") " pod="openshift-marketplace/certified-operators-cfcf7" Nov 29 07:43:48 crc kubenswrapper[4731]: I1129 07:43:48.152992 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cfcf7" Nov 29 07:43:48 crc kubenswrapper[4731]: I1129 07:43:48.695208 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cfcf7"] Nov 29 07:43:48 crc kubenswrapper[4731]: I1129 07:43:48.784615 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cfcf7" event={"ID":"db713681-3e6e-42f3-83ce-fbf516f84df5","Type":"ContainerStarted","Data":"ea53117d80498ab4a0d1a005a877277373d0a33128a7518be6a621b5221eb4cf"} Nov 29 07:43:49 crc kubenswrapper[4731]: I1129 07:43:49.799941 4731 generic.go:334] "Generic (PLEG): container finished" podID="db713681-3e6e-42f3-83ce-fbf516f84df5" containerID="87e7baa5bb1cdc411c1c5815a7b0cd761dcb6cb9a95929f15758245d6492ff49" exitCode=0 Nov 29 07:43:49 crc kubenswrapper[4731]: I1129 07:43:49.800083 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cfcf7" event={"ID":"db713681-3e6e-42f3-83ce-fbf516f84df5","Type":"ContainerDied","Data":"87e7baa5bb1cdc411c1c5815a7b0cd761dcb6cb9a95929f15758245d6492ff49"} Nov 29 07:43:50 crc kubenswrapper[4731]: I1129 07:43:50.810814 4731 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-cfcf7" event={"ID":"db713681-3e6e-42f3-83ce-fbf516f84df5","Type":"ContainerStarted","Data":"152a568866546f8105bd2e01912ccfd81a7d5ff8a78fc2f2d927ef612d635237"} Nov 29 07:43:51 crc kubenswrapper[4731]: I1129 07:43:51.828080 4731 generic.go:334] "Generic (PLEG): container finished" podID="db713681-3e6e-42f3-83ce-fbf516f84df5" containerID="152a568866546f8105bd2e01912ccfd81a7d5ff8a78fc2f2d927ef612d635237" exitCode=0 Nov 29 07:43:51 crc kubenswrapper[4731]: I1129 07:43:51.828278 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cfcf7" event={"ID":"db713681-3e6e-42f3-83ce-fbf516f84df5","Type":"ContainerDied","Data":"152a568866546f8105bd2e01912ccfd81a7d5ff8a78fc2f2d927ef612d635237"} Nov 29 07:43:52 crc kubenswrapper[4731]: I1129 07:43:52.807193 4731 scope.go:117] "RemoveContainer" containerID="c07780461e13a10787c87bd71e6695815f5734928eb2b4fd862416cf85f74e8b" Nov 29 07:43:52 crc kubenswrapper[4731]: E1129 07:43:52.808042 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" Nov 29 07:43:53 crc kubenswrapper[4731]: I1129 07:43:53.859048 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cfcf7" event={"ID":"db713681-3e6e-42f3-83ce-fbf516f84df5","Type":"ContainerStarted","Data":"70e8389e9c1d9eb00913e8a27fbfa4dd49babf8e9ccda55abf2adb4505cd9a92"} Nov 29 07:43:53 crc kubenswrapper[4731]: I1129 07:43:53.898829 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-cfcf7" 
podStartSLOduration=4.094892985 podStartE2EDuration="6.898803152s" podCreationTimestamp="2025-11-29 07:43:47 +0000 UTC" firstStartedPulling="2025-11-29 07:43:49.8020803 +0000 UTC m=+2268.692441413" lastFinishedPulling="2025-11-29 07:43:52.605990477 +0000 UTC m=+2271.496351580" observedRunningTime="2025-11-29 07:43:53.882187096 +0000 UTC m=+2272.772548239" watchObservedRunningTime="2025-11-29 07:43:53.898803152 +0000 UTC m=+2272.789164295" Nov 29 07:43:58 crc kubenswrapper[4731]: I1129 07:43:58.154172 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-cfcf7" Nov 29 07:43:58 crc kubenswrapper[4731]: I1129 07:43:58.154474 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-cfcf7" Nov 29 07:43:58 crc kubenswrapper[4731]: I1129 07:43:58.204827 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-cfcf7" Nov 29 07:43:58 crc kubenswrapper[4731]: I1129 07:43:58.963727 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-cfcf7" Nov 29 07:43:59 crc kubenswrapper[4731]: I1129 07:43:59.018301 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cfcf7"] Nov 29 07:44:00 crc kubenswrapper[4731]: I1129 07:44:00.929867 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-cfcf7" podUID="db713681-3e6e-42f3-83ce-fbf516f84df5" containerName="registry-server" containerID="cri-o://70e8389e9c1d9eb00913e8a27fbfa4dd49babf8e9ccda55abf2adb4505cd9a92" gracePeriod=2 Nov 29 07:44:02 crc kubenswrapper[4731]: I1129 07:44:02.495752 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cfcf7" Nov 29 07:44:02 crc kubenswrapper[4731]: I1129 07:44:02.597959 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f98h7\" (UniqueName: \"kubernetes.io/projected/db713681-3e6e-42f3-83ce-fbf516f84df5-kube-api-access-f98h7\") pod \"db713681-3e6e-42f3-83ce-fbf516f84df5\" (UID: \"db713681-3e6e-42f3-83ce-fbf516f84df5\") " Nov 29 07:44:02 crc kubenswrapper[4731]: I1129 07:44:02.598236 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db713681-3e6e-42f3-83ce-fbf516f84df5-utilities\") pod \"db713681-3e6e-42f3-83ce-fbf516f84df5\" (UID: \"db713681-3e6e-42f3-83ce-fbf516f84df5\") " Nov 29 07:44:02 crc kubenswrapper[4731]: I1129 07:44:02.598284 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db713681-3e6e-42f3-83ce-fbf516f84df5-catalog-content\") pod \"db713681-3e6e-42f3-83ce-fbf516f84df5\" (UID: \"db713681-3e6e-42f3-83ce-fbf516f84df5\") " Nov 29 07:44:02 crc kubenswrapper[4731]: I1129 07:44:02.599673 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db713681-3e6e-42f3-83ce-fbf516f84df5-utilities" (OuterVolumeSpecName: "utilities") pod "db713681-3e6e-42f3-83ce-fbf516f84df5" (UID: "db713681-3e6e-42f3-83ce-fbf516f84df5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:44:02 crc kubenswrapper[4731]: I1129 07:44:02.606105 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db713681-3e6e-42f3-83ce-fbf516f84df5-kube-api-access-f98h7" (OuterVolumeSpecName: "kube-api-access-f98h7") pod "db713681-3e6e-42f3-83ce-fbf516f84df5" (UID: "db713681-3e6e-42f3-83ce-fbf516f84df5"). InnerVolumeSpecName "kube-api-access-f98h7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:44:02 crc kubenswrapper[4731]: I1129 07:44:02.650386 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db713681-3e6e-42f3-83ce-fbf516f84df5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "db713681-3e6e-42f3-83ce-fbf516f84df5" (UID: "db713681-3e6e-42f3-83ce-fbf516f84df5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:44:02 crc kubenswrapper[4731]: I1129 07:44:02.700254 4731 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db713681-3e6e-42f3-83ce-fbf516f84df5-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 07:44:02 crc kubenswrapper[4731]: I1129 07:44:02.700306 4731 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db713681-3e6e-42f3-83ce-fbf516f84df5-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 07:44:02 crc kubenswrapper[4731]: I1129 07:44:02.700323 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f98h7\" (UniqueName: \"kubernetes.io/projected/db713681-3e6e-42f3-83ce-fbf516f84df5-kube-api-access-f98h7\") on node \"crc\" DevicePath \"\"" Nov 29 07:44:02 crc kubenswrapper[4731]: I1129 07:44:02.954965 4731 generic.go:334] "Generic (PLEG): container finished" podID="db713681-3e6e-42f3-83ce-fbf516f84df5" containerID="70e8389e9c1d9eb00913e8a27fbfa4dd49babf8e9ccda55abf2adb4505cd9a92" exitCode=0 Nov 29 07:44:02 crc kubenswrapper[4731]: I1129 07:44:02.955102 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cfcf7" event={"ID":"db713681-3e6e-42f3-83ce-fbf516f84df5","Type":"ContainerDied","Data":"70e8389e9c1d9eb00913e8a27fbfa4dd49babf8e9ccda55abf2adb4505cd9a92"} Nov 29 07:44:02 crc kubenswrapper[4731]: I1129 07:44:02.955131 4731 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cfcf7" Nov 29 07:44:02 crc kubenswrapper[4731]: I1129 07:44:02.955247 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cfcf7" event={"ID":"db713681-3e6e-42f3-83ce-fbf516f84df5","Type":"ContainerDied","Data":"ea53117d80498ab4a0d1a005a877277373d0a33128a7518be6a621b5221eb4cf"} Nov 29 07:44:02 crc kubenswrapper[4731]: I1129 07:44:02.955286 4731 scope.go:117] "RemoveContainer" containerID="70e8389e9c1d9eb00913e8a27fbfa4dd49babf8e9ccda55abf2adb4505cd9a92" Nov 29 07:44:02 crc kubenswrapper[4731]: I1129 07:44:02.988640 4731 scope.go:117] "RemoveContainer" containerID="152a568866546f8105bd2e01912ccfd81a7d5ff8a78fc2f2d927ef612d635237" Nov 29 07:44:03 crc kubenswrapper[4731]: I1129 07:44:03.016741 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cfcf7"] Nov 29 07:44:03 crc kubenswrapper[4731]: I1129 07:44:03.025275 4731 scope.go:117] "RemoveContainer" containerID="87e7baa5bb1cdc411c1c5815a7b0cd761dcb6cb9a95929f15758245d6492ff49" Nov 29 07:44:03 crc kubenswrapper[4731]: I1129 07:44:03.027751 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-cfcf7"] Nov 29 07:44:03 crc kubenswrapper[4731]: I1129 07:44:03.070518 4731 scope.go:117] "RemoveContainer" containerID="70e8389e9c1d9eb00913e8a27fbfa4dd49babf8e9ccda55abf2adb4505cd9a92" Nov 29 07:44:03 crc kubenswrapper[4731]: E1129 07:44:03.071297 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70e8389e9c1d9eb00913e8a27fbfa4dd49babf8e9ccda55abf2adb4505cd9a92\": container with ID starting with 70e8389e9c1d9eb00913e8a27fbfa4dd49babf8e9ccda55abf2adb4505cd9a92 not found: ID does not exist" containerID="70e8389e9c1d9eb00913e8a27fbfa4dd49babf8e9ccda55abf2adb4505cd9a92" Nov 29 07:44:03 crc kubenswrapper[4731]: I1129 07:44:03.071398 
4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70e8389e9c1d9eb00913e8a27fbfa4dd49babf8e9ccda55abf2adb4505cd9a92"} err="failed to get container status \"70e8389e9c1d9eb00913e8a27fbfa4dd49babf8e9ccda55abf2adb4505cd9a92\": rpc error: code = NotFound desc = could not find container \"70e8389e9c1d9eb00913e8a27fbfa4dd49babf8e9ccda55abf2adb4505cd9a92\": container with ID starting with 70e8389e9c1d9eb00913e8a27fbfa4dd49babf8e9ccda55abf2adb4505cd9a92 not found: ID does not exist" Nov 29 07:44:03 crc kubenswrapper[4731]: I1129 07:44:03.071442 4731 scope.go:117] "RemoveContainer" containerID="152a568866546f8105bd2e01912ccfd81a7d5ff8a78fc2f2d927ef612d635237" Nov 29 07:44:03 crc kubenswrapper[4731]: E1129 07:44:03.072231 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"152a568866546f8105bd2e01912ccfd81a7d5ff8a78fc2f2d927ef612d635237\": container with ID starting with 152a568866546f8105bd2e01912ccfd81a7d5ff8a78fc2f2d927ef612d635237 not found: ID does not exist" containerID="152a568866546f8105bd2e01912ccfd81a7d5ff8a78fc2f2d927ef612d635237" Nov 29 07:44:03 crc kubenswrapper[4731]: I1129 07:44:03.072274 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"152a568866546f8105bd2e01912ccfd81a7d5ff8a78fc2f2d927ef612d635237"} err="failed to get container status \"152a568866546f8105bd2e01912ccfd81a7d5ff8a78fc2f2d927ef612d635237\": rpc error: code = NotFound desc = could not find container \"152a568866546f8105bd2e01912ccfd81a7d5ff8a78fc2f2d927ef612d635237\": container with ID starting with 152a568866546f8105bd2e01912ccfd81a7d5ff8a78fc2f2d927ef612d635237 not found: ID does not exist" Nov 29 07:44:03 crc kubenswrapper[4731]: I1129 07:44:03.072303 4731 scope.go:117] "RemoveContainer" containerID="87e7baa5bb1cdc411c1c5815a7b0cd761dcb6cb9a95929f15758245d6492ff49" Nov 29 07:44:03 crc kubenswrapper[4731]: E1129 
07:44:03.072662 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87e7baa5bb1cdc411c1c5815a7b0cd761dcb6cb9a95929f15758245d6492ff49\": container with ID starting with 87e7baa5bb1cdc411c1c5815a7b0cd761dcb6cb9a95929f15758245d6492ff49 not found: ID does not exist" containerID="87e7baa5bb1cdc411c1c5815a7b0cd761dcb6cb9a95929f15758245d6492ff49" Nov 29 07:44:03 crc kubenswrapper[4731]: I1129 07:44:03.072711 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87e7baa5bb1cdc411c1c5815a7b0cd761dcb6cb9a95929f15758245d6492ff49"} err="failed to get container status \"87e7baa5bb1cdc411c1c5815a7b0cd761dcb6cb9a95929f15758245d6492ff49\": rpc error: code = NotFound desc = could not find container \"87e7baa5bb1cdc411c1c5815a7b0cd761dcb6cb9a95929f15758245d6492ff49\": container with ID starting with 87e7baa5bb1cdc411c1c5815a7b0cd761dcb6cb9a95929f15758245d6492ff49 not found: ID does not exist" Nov 29 07:44:03 crc kubenswrapper[4731]: I1129 07:44:03.806709 4731 scope.go:117] "RemoveContainer" containerID="c07780461e13a10787c87bd71e6695815f5734928eb2b4fd862416cf85f74e8b" Nov 29 07:44:03 crc kubenswrapper[4731]: E1129 07:44:03.807467 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" Nov 29 07:44:03 crc kubenswrapper[4731]: I1129 07:44:03.818927 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db713681-3e6e-42f3-83ce-fbf516f84df5" path="/var/lib/kubelet/pods/db713681-3e6e-42f3-83ce-fbf516f84df5/volumes" Nov 29 07:44:17 crc kubenswrapper[4731]: I1129 07:44:17.807578 
4731 scope.go:117] "RemoveContainer" containerID="c07780461e13a10787c87bd71e6695815f5734928eb2b4fd862416cf85f74e8b" Nov 29 07:44:17 crc kubenswrapper[4731]: E1129 07:44:17.808381 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" Nov 29 07:44:26 crc kubenswrapper[4731]: I1129 07:44:26.286897 4731 generic.go:334] "Generic (PLEG): container finished" podID="5d5a510b-d31d-4cf3-91dd-5e8c0066d6ed" containerID="6f4fce238a9e108c85a1da08f7d8e28741f8336f3ac24ad938482e43760a16e6" exitCode=0 Nov 29 07:44:26 crc kubenswrapper[4731]: I1129 07:44:26.287006 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fbzsb" event={"ID":"5d5a510b-d31d-4cf3-91dd-5e8c0066d6ed","Type":"ContainerDied","Data":"6f4fce238a9e108c85a1da08f7d8e28741f8336f3ac24ad938482e43760a16e6"} Nov 29 07:44:27 crc kubenswrapper[4731]: I1129 07:44:27.799716 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fbzsb" Nov 29 07:44:27 crc kubenswrapper[4731]: I1129 07:44:27.913355 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xqgqs\" (UniqueName: \"kubernetes.io/projected/5d5a510b-d31d-4cf3-91dd-5e8c0066d6ed-kube-api-access-xqgqs\") pod \"5d5a510b-d31d-4cf3-91dd-5e8c0066d6ed\" (UID: \"5d5a510b-d31d-4cf3-91dd-5e8c0066d6ed\") " Nov 29 07:44:27 crc kubenswrapper[4731]: I1129 07:44:27.913874 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/5d5a510b-d31d-4cf3-91dd-5e8c0066d6ed-nova-metadata-neutron-config-0\") pod \"5d5a510b-d31d-4cf3-91dd-5e8c0066d6ed\" (UID: \"5d5a510b-d31d-4cf3-91dd-5e8c0066d6ed\") " Nov 29 07:44:27 crc kubenswrapper[4731]: I1129 07:44:27.913930 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5d5a510b-d31d-4cf3-91dd-5e8c0066d6ed-ssh-key\") pod \"5d5a510b-d31d-4cf3-91dd-5e8c0066d6ed\" (UID: \"5d5a510b-d31d-4cf3-91dd-5e8c0066d6ed\") " Nov 29 07:44:27 crc kubenswrapper[4731]: I1129 07:44:27.913969 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/5d5a510b-d31d-4cf3-91dd-5e8c0066d6ed-neutron-ovn-metadata-agent-neutron-config-0\") pod \"5d5a510b-d31d-4cf3-91dd-5e8c0066d6ed\" (UID: \"5d5a510b-d31d-4cf3-91dd-5e8c0066d6ed\") " Nov 29 07:44:27 crc kubenswrapper[4731]: I1129 07:44:27.914317 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d5a510b-d31d-4cf3-91dd-5e8c0066d6ed-neutron-metadata-combined-ca-bundle\") pod \"5d5a510b-d31d-4cf3-91dd-5e8c0066d6ed\" (UID: \"5d5a510b-d31d-4cf3-91dd-5e8c0066d6ed\") " Nov 29 
07:44:27 crc kubenswrapper[4731]: I1129 07:44:27.914444 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5d5a510b-d31d-4cf3-91dd-5e8c0066d6ed-inventory\") pod \"5d5a510b-d31d-4cf3-91dd-5e8c0066d6ed\" (UID: \"5d5a510b-d31d-4cf3-91dd-5e8c0066d6ed\") " Nov 29 07:44:27 crc kubenswrapper[4731]: I1129 07:44:27.921290 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d5a510b-d31d-4cf3-91dd-5e8c0066d6ed-kube-api-access-xqgqs" (OuterVolumeSpecName: "kube-api-access-xqgqs") pod "5d5a510b-d31d-4cf3-91dd-5e8c0066d6ed" (UID: "5d5a510b-d31d-4cf3-91dd-5e8c0066d6ed"). InnerVolumeSpecName "kube-api-access-xqgqs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:44:27 crc kubenswrapper[4731]: I1129 07:44:27.926470 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d5a510b-d31d-4cf3-91dd-5e8c0066d6ed-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "5d5a510b-d31d-4cf3-91dd-5e8c0066d6ed" (UID: "5d5a510b-d31d-4cf3-91dd-5e8c0066d6ed"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:44:27 crc kubenswrapper[4731]: I1129 07:44:27.947909 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d5a510b-d31d-4cf3-91dd-5e8c0066d6ed-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "5d5a510b-d31d-4cf3-91dd-5e8c0066d6ed" (UID: "5d5a510b-d31d-4cf3-91dd-5e8c0066d6ed"). InnerVolumeSpecName "nova-metadata-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:44:27 crc kubenswrapper[4731]: I1129 07:44:27.951013 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d5a510b-d31d-4cf3-91dd-5e8c0066d6ed-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "5d5a510b-d31d-4cf3-91dd-5e8c0066d6ed" (UID: "5d5a510b-d31d-4cf3-91dd-5e8c0066d6ed"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:44:27 crc kubenswrapper[4731]: I1129 07:44:27.959867 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d5a510b-d31d-4cf3-91dd-5e8c0066d6ed-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "5d5a510b-d31d-4cf3-91dd-5e8c0066d6ed" (UID: "5d5a510b-d31d-4cf3-91dd-5e8c0066d6ed"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:44:27 crc kubenswrapper[4731]: I1129 07:44:27.962442 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d5a510b-d31d-4cf3-91dd-5e8c0066d6ed-inventory" (OuterVolumeSpecName: "inventory") pod "5d5a510b-d31d-4cf3-91dd-5e8c0066d6ed" (UID: "5d5a510b-d31d-4cf3-91dd-5e8c0066d6ed"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:44:28 crc kubenswrapper[4731]: I1129 07:44:28.017260 4731 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d5a510b-d31d-4cf3-91dd-5e8c0066d6ed-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:44:28 crc kubenswrapper[4731]: I1129 07:44:28.017306 4731 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5d5a510b-d31d-4cf3-91dd-5e8c0066d6ed-inventory\") on node \"crc\" DevicePath \"\"" Nov 29 07:44:28 crc kubenswrapper[4731]: I1129 07:44:28.017318 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xqgqs\" (UniqueName: \"kubernetes.io/projected/5d5a510b-d31d-4cf3-91dd-5e8c0066d6ed-kube-api-access-xqgqs\") on node \"crc\" DevicePath \"\"" Nov 29 07:44:28 crc kubenswrapper[4731]: I1129 07:44:28.017333 4731 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/5d5a510b-d31d-4cf3-91dd-5e8c0066d6ed-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 29 07:44:28 crc kubenswrapper[4731]: I1129 07:44:28.017342 4731 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5d5a510b-d31d-4cf3-91dd-5e8c0066d6ed-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 29 07:44:28 crc kubenswrapper[4731]: I1129 07:44:28.017352 4731 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/5d5a510b-d31d-4cf3-91dd-5e8c0066d6ed-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 29 07:44:28 crc kubenswrapper[4731]: I1129 07:44:28.308381 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fbzsb" 
event={"ID":"5d5a510b-d31d-4cf3-91dd-5e8c0066d6ed","Type":"ContainerDied","Data":"8fdb8d737900cd59586189843f4baaeef84b2d9944bd8875b3072b8372d81753"} Nov 29 07:44:28 crc kubenswrapper[4731]: I1129 07:44:28.308441 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8fdb8d737900cd59586189843f4baaeef84b2d9944bd8875b3072b8372d81753" Nov 29 07:44:28 crc kubenswrapper[4731]: I1129 07:44:28.308528 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fbzsb" Nov 29 07:44:28 crc kubenswrapper[4731]: I1129 07:44:28.541387 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-f2lhd"] Nov 29 07:44:28 crc kubenswrapper[4731]: E1129 07:44:28.543341 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db713681-3e6e-42f3-83ce-fbf516f84df5" containerName="registry-server" Nov 29 07:44:28 crc kubenswrapper[4731]: I1129 07:44:28.543383 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="db713681-3e6e-42f3-83ce-fbf516f84df5" containerName="registry-server" Nov 29 07:44:28 crc kubenswrapper[4731]: E1129 07:44:28.543393 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db713681-3e6e-42f3-83ce-fbf516f84df5" containerName="extract-content" Nov 29 07:44:28 crc kubenswrapper[4731]: I1129 07:44:28.543401 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="db713681-3e6e-42f3-83ce-fbf516f84df5" containerName="extract-content" Nov 29 07:44:28 crc kubenswrapper[4731]: E1129 07:44:28.543412 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d5a510b-d31d-4cf3-91dd-5e8c0066d6ed" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 29 07:44:28 crc kubenswrapper[4731]: I1129 07:44:28.543423 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d5a510b-d31d-4cf3-91dd-5e8c0066d6ed" 
containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 29 07:44:28 crc kubenswrapper[4731]: E1129 07:44:28.543447 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db713681-3e6e-42f3-83ce-fbf516f84df5" containerName="extract-utilities" Nov 29 07:44:28 crc kubenswrapper[4731]: I1129 07:44:28.543455 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="db713681-3e6e-42f3-83ce-fbf516f84df5" containerName="extract-utilities" Nov 29 07:44:28 crc kubenswrapper[4731]: I1129 07:44:28.543715 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d5a510b-d31d-4cf3-91dd-5e8c0066d6ed" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 29 07:44:28 crc kubenswrapper[4731]: I1129 07:44:28.543732 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="db713681-3e6e-42f3-83ce-fbf516f84df5" containerName="registry-server" Nov 29 07:44:28 crc kubenswrapper[4731]: I1129 07:44:28.544672 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-f2lhd" Nov 29 07:44:28 crc kubenswrapper[4731]: I1129 07:44:28.547668 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nvl6q" Nov 29 07:44:28 crc kubenswrapper[4731]: I1129 07:44:28.547774 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 29 07:44:28 crc kubenswrapper[4731]: I1129 07:44:28.547863 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Nov 29 07:44:28 crc kubenswrapper[4731]: I1129 07:44:28.552898 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 29 07:44:28 crc kubenswrapper[4731]: I1129 07:44:28.553855 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 29 07:44:28 crc kubenswrapper[4731]: I1129 07:44:28.565618 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-f2lhd"] Nov 29 07:44:28 crc kubenswrapper[4731]: I1129 07:44:28.747438 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dn7vz\" (UniqueName: \"kubernetes.io/projected/d2581ba6-0d37-40f0-b458-e9e1d1071485-kube-api-access-dn7vz\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-f2lhd\" (UID: \"d2581ba6-0d37-40f0-b458-e9e1d1071485\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-f2lhd" Nov 29 07:44:28 crc kubenswrapper[4731]: I1129 07:44:28.747513 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d2581ba6-0d37-40f0-b458-e9e1d1071485-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-f2lhd\" (UID: \"d2581ba6-0d37-40f0-b458-e9e1d1071485\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-f2lhd" Nov 29 07:44:28 crc kubenswrapper[4731]: I1129 07:44:28.747582 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d2581ba6-0d37-40f0-b458-e9e1d1071485-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-f2lhd\" (UID: \"d2581ba6-0d37-40f0-b458-e9e1d1071485\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-f2lhd" Nov 29 07:44:28 crc kubenswrapper[4731]: I1129 07:44:28.747652 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2581ba6-0d37-40f0-b458-e9e1d1071485-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-f2lhd\" (UID: \"d2581ba6-0d37-40f0-b458-e9e1d1071485\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-f2lhd" Nov 29 07:44:28 crc kubenswrapper[4731]: I1129 07:44:28.747689 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/d2581ba6-0d37-40f0-b458-e9e1d1071485-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-f2lhd\" (UID: \"d2581ba6-0d37-40f0-b458-e9e1d1071485\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-f2lhd" Nov 29 07:44:28 crc kubenswrapper[4731]: I1129 07:44:28.851300 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dn7vz\" (UniqueName: \"kubernetes.io/projected/d2581ba6-0d37-40f0-b458-e9e1d1071485-kube-api-access-dn7vz\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-f2lhd\" (UID: \"d2581ba6-0d37-40f0-b458-e9e1d1071485\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-f2lhd" Nov 29 07:44:28 crc kubenswrapper[4731]: I1129 07:44:28.851388 4731 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d2581ba6-0d37-40f0-b458-e9e1d1071485-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-f2lhd\" (UID: \"d2581ba6-0d37-40f0-b458-e9e1d1071485\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-f2lhd" Nov 29 07:44:28 crc kubenswrapper[4731]: I1129 07:44:28.851434 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d2581ba6-0d37-40f0-b458-e9e1d1071485-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-f2lhd\" (UID: \"d2581ba6-0d37-40f0-b458-e9e1d1071485\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-f2lhd" Nov 29 07:44:28 crc kubenswrapper[4731]: I1129 07:44:28.851484 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2581ba6-0d37-40f0-b458-e9e1d1071485-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-f2lhd\" (UID: \"d2581ba6-0d37-40f0-b458-e9e1d1071485\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-f2lhd" Nov 29 07:44:28 crc kubenswrapper[4731]: I1129 07:44:28.851527 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/d2581ba6-0d37-40f0-b458-e9e1d1071485-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-f2lhd\" (UID: \"d2581ba6-0d37-40f0-b458-e9e1d1071485\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-f2lhd" Nov 29 07:44:28 crc kubenswrapper[4731]: I1129 07:44:28.856413 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d2581ba6-0d37-40f0-b458-e9e1d1071485-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-f2lhd\" (UID: \"d2581ba6-0d37-40f0-b458-e9e1d1071485\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-f2lhd" 
Nov 29 07:44:28 crc kubenswrapper[4731]: I1129 07:44:28.856724 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/d2581ba6-0d37-40f0-b458-e9e1d1071485-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-f2lhd\" (UID: \"d2581ba6-0d37-40f0-b458-e9e1d1071485\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-f2lhd" Nov 29 07:44:28 crc kubenswrapper[4731]: I1129 07:44:28.857114 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d2581ba6-0d37-40f0-b458-e9e1d1071485-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-f2lhd\" (UID: \"d2581ba6-0d37-40f0-b458-e9e1d1071485\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-f2lhd" Nov 29 07:44:28 crc kubenswrapper[4731]: I1129 07:44:28.857990 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2581ba6-0d37-40f0-b458-e9e1d1071485-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-f2lhd\" (UID: \"d2581ba6-0d37-40f0-b458-e9e1d1071485\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-f2lhd" Nov 29 07:44:28 crc kubenswrapper[4731]: I1129 07:44:28.872665 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dn7vz\" (UniqueName: \"kubernetes.io/projected/d2581ba6-0d37-40f0-b458-e9e1d1071485-kube-api-access-dn7vz\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-f2lhd\" (UID: \"d2581ba6-0d37-40f0-b458-e9e1d1071485\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-f2lhd" Nov 29 07:44:28 crc kubenswrapper[4731]: I1129 07:44:28.877326 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-f2lhd" Nov 29 07:44:29 crc kubenswrapper[4731]: I1129 07:44:29.474977 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-f2lhd"] Nov 29 07:44:30 crc kubenswrapper[4731]: I1129 07:44:30.339850 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-f2lhd" event={"ID":"d2581ba6-0d37-40f0-b458-e9e1d1071485","Type":"ContainerStarted","Data":"2d0c96d7bae98dba7910d9660851073d186ef021a6c33127650c7b19316a11eb"} Nov 29 07:44:30 crc kubenswrapper[4731]: I1129 07:44:30.809045 4731 scope.go:117] "RemoveContainer" containerID="c07780461e13a10787c87bd71e6695815f5734928eb2b4fd862416cf85f74e8b" Nov 29 07:44:30 crc kubenswrapper[4731]: E1129 07:44:30.809372 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" Nov 29 07:44:31 crc kubenswrapper[4731]: I1129 07:44:31.349423 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-f2lhd" event={"ID":"d2581ba6-0d37-40f0-b458-e9e1d1071485","Type":"ContainerStarted","Data":"e2c809fc1f6567df45152f5e879c9024d3a163a1043e9830edbb6aa2f5b071d1"} Nov 29 07:44:31 crc kubenswrapper[4731]: I1129 07:44:31.373517 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-f2lhd" podStartSLOduration=2.693995397 podStartE2EDuration="3.373493117s" podCreationTimestamp="2025-11-29 07:44:28 +0000 UTC" firstStartedPulling="2025-11-29 07:44:29.481910634 +0000 
UTC m=+2308.372271737" lastFinishedPulling="2025-11-29 07:44:30.161408354 +0000 UTC m=+2309.051769457" observedRunningTime="2025-11-29 07:44:31.370699187 +0000 UTC m=+2310.261060290" watchObservedRunningTime="2025-11-29 07:44:31.373493117 +0000 UTC m=+2310.263854220" Nov 29 07:44:42 crc kubenswrapper[4731]: I1129 07:44:42.806977 4731 scope.go:117] "RemoveContainer" containerID="c07780461e13a10787c87bd71e6695815f5734928eb2b4fd862416cf85f74e8b" Nov 29 07:44:42 crc kubenswrapper[4731]: E1129 07:44:42.807974 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" Nov 29 07:44:56 crc kubenswrapper[4731]: I1129 07:44:56.807738 4731 scope.go:117] "RemoveContainer" containerID="c07780461e13a10787c87bd71e6695815f5734928eb2b4fd862416cf85f74e8b" Nov 29 07:44:56 crc kubenswrapper[4731]: E1129 07:44:56.808684 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" Nov 29 07:45:00 crc kubenswrapper[4731]: I1129 07:45:00.157080 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406705-rj5nd"] Nov 29 07:45:00 crc kubenswrapper[4731]: I1129 07:45:00.159521 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406705-rj5nd" Nov 29 07:45:00 crc kubenswrapper[4731]: I1129 07:45:00.162530 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 29 07:45:00 crc kubenswrapper[4731]: I1129 07:45:00.162829 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 29 07:45:00 crc kubenswrapper[4731]: I1129 07:45:00.171110 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406705-rj5nd"] Nov 29 07:45:00 crc kubenswrapper[4731]: I1129 07:45:00.296351 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/50d0e9ad-30dc-4c24-84d6-4d58cebd5ee4-secret-volume\") pod \"collect-profiles-29406705-rj5nd\" (UID: \"50d0e9ad-30dc-4c24-84d6-4d58cebd5ee4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406705-rj5nd" Nov 29 07:45:00 crc kubenswrapper[4731]: I1129 07:45:00.296922 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzzz9\" (UniqueName: \"kubernetes.io/projected/50d0e9ad-30dc-4c24-84d6-4d58cebd5ee4-kube-api-access-mzzz9\") pod \"collect-profiles-29406705-rj5nd\" (UID: \"50d0e9ad-30dc-4c24-84d6-4d58cebd5ee4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406705-rj5nd" Nov 29 07:45:00 crc kubenswrapper[4731]: I1129 07:45:00.297079 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/50d0e9ad-30dc-4c24-84d6-4d58cebd5ee4-config-volume\") pod \"collect-profiles-29406705-rj5nd\" (UID: \"50d0e9ad-30dc-4c24-84d6-4d58cebd5ee4\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29406705-rj5nd" Nov 29 07:45:00 crc kubenswrapper[4731]: I1129 07:45:00.399077 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/50d0e9ad-30dc-4c24-84d6-4d58cebd5ee4-secret-volume\") pod \"collect-profiles-29406705-rj5nd\" (UID: \"50d0e9ad-30dc-4c24-84d6-4d58cebd5ee4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406705-rj5nd" Nov 29 07:45:00 crc kubenswrapper[4731]: I1129 07:45:00.399228 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzzz9\" (UniqueName: \"kubernetes.io/projected/50d0e9ad-30dc-4c24-84d6-4d58cebd5ee4-kube-api-access-mzzz9\") pod \"collect-profiles-29406705-rj5nd\" (UID: \"50d0e9ad-30dc-4c24-84d6-4d58cebd5ee4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406705-rj5nd" Nov 29 07:45:00 crc kubenswrapper[4731]: I1129 07:45:00.399301 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/50d0e9ad-30dc-4c24-84d6-4d58cebd5ee4-config-volume\") pod \"collect-profiles-29406705-rj5nd\" (UID: \"50d0e9ad-30dc-4c24-84d6-4d58cebd5ee4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406705-rj5nd" Nov 29 07:45:00 crc kubenswrapper[4731]: I1129 07:45:00.400740 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/50d0e9ad-30dc-4c24-84d6-4d58cebd5ee4-config-volume\") pod \"collect-profiles-29406705-rj5nd\" (UID: \"50d0e9ad-30dc-4c24-84d6-4d58cebd5ee4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406705-rj5nd" Nov 29 07:45:00 crc kubenswrapper[4731]: I1129 07:45:00.410468 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/50d0e9ad-30dc-4c24-84d6-4d58cebd5ee4-secret-volume\") pod \"collect-profiles-29406705-rj5nd\" (UID: \"50d0e9ad-30dc-4c24-84d6-4d58cebd5ee4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406705-rj5nd" Nov 29 07:45:00 crc kubenswrapper[4731]: I1129 07:45:00.428291 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzzz9\" (UniqueName: \"kubernetes.io/projected/50d0e9ad-30dc-4c24-84d6-4d58cebd5ee4-kube-api-access-mzzz9\") pod \"collect-profiles-29406705-rj5nd\" (UID: \"50d0e9ad-30dc-4c24-84d6-4d58cebd5ee4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406705-rj5nd" Nov 29 07:45:00 crc kubenswrapper[4731]: I1129 07:45:00.493167 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406705-rj5nd" Nov 29 07:45:01 crc kubenswrapper[4731]: I1129 07:45:01.003265 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406705-rj5nd"] Nov 29 07:45:01 crc kubenswrapper[4731]: I1129 07:45:01.676749 4731 generic.go:334] "Generic (PLEG): container finished" podID="50d0e9ad-30dc-4c24-84d6-4d58cebd5ee4" containerID="b565b5f8fdce79e0a2387ef15bef372f8b4fe44a925e84ecad69a31802add288" exitCode=0 Nov 29 07:45:01 crc kubenswrapper[4731]: I1129 07:45:01.676794 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406705-rj5nd" event={"ID":"50d0e9ad-30dc-4c24-84d6-4d58cebd5ee4","Type":"ContainerDied","Data":"b565b5f8fdce79e0a2387ef15bef372f8b4fe44a925e84ecad69a31802add288"} Nov 29 07:45:01 crc kubenswrapper[4731]: I1129 07:45:01.677181 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406705-rj5nd" 
event={"ID":"50d0e9ad-30dc-4c24-84d6-4d58cebd5ee4","Type":"ContainerStarted","Data":"0559d4461469f61a644fa97818c148d97e5b3f8f4029a20905d44011eb412627"} Nov 29 07:45:03 crc kubenswrapper[4731]: I1129 07:45:03.049758 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406705-rj5nd" Nov 29 07:45:03 crc kubenswrapper[4731]: I1129 07:45:03.073553 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/50d0e9ad-30dc-4c24-84d6-4d58cebd5ee4-config-volume\") pod \"50d0e9ad-30dc-4c24-84d6-4d58cebd5ee4\" (UID: \"50d0e9ad-30dc-4c24-84d6-4d58cebd5ee4\") " Nov 29 07:45:03 crc kubenswrapper[4731]: I1129 07:45:03.073679 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mzzz9\" (UniqueName: \"kubernetes.io/projected/50d0e9ad-30dc-4c24-84d6-4d58cebd5ee4-kube-api-access-mzzz9\") pod \"50d0e9ad-30dc-4c24-84d6-4d58cebd5ee4\" (UID: \"50d0e9ad-30dc-4c24-84d6-4d58cebd5ee4\") " Nov 29 07:45:03 crc kubenswrapper[4731]: I1129 07:45:03.074024 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/50d0e9ad-30dc-4c24-84d6-4d58cebd5ee4-secret-volume\") pod \"50d0e9ad-30dc-4c24-84d6-4d58cebd5ee4\" (UID: \"50d0e9ad-30dc-4c24-84d6-4d58cebd5ee4\") " Nov 29 07:45:03 crc kubenswrapper[4731]: I1129 07:45:03.074224 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50d0e9ad-30dc-4c24-84d6-4d58cebd5ee4-config-volume" (OuterVolumeSpecName: "config-volume") pod "50d0e9ad-30dc-4c24-84d6-4d58cebd5ee4" (UID: "50d0e9ad-30dc-4c24-84d6-4d58cebd5ee4"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:45:03 crc kubenswrapper[4731]: I1129 07:45:03.074734 4731 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/50d0e9ad-30dc-4c24-84d6-4d58cebd5ee4-config-volume\") on node \"crc\" DevicePath \"\"" Nov 29 07:45:03 crc kubenswrapper[4731]: I1129 07:45:03.084437 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50d0e9ad-30dc-4c24-84d6-4d58cebd5ee4-kube-api-access-mzzz9" (OuterVolumeSpecName: "kube-api-access-mzzz9") pod "50d0e9ad-30dc-4c24-84d6-4d58cebd5ee4" (UID: "50d0e9ad-30dc-4c24-84d6-4d58cebd5ee4"). InnerVolumeSpecName "kube-api-access-mzzz9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:45:03 crc kubenswrapper[4731]: I1129 07:45:03.085615 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50d0e9ad-30dc-4c24-84d6-4d58cebd5ee4-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "50d0e9ad-30dc-4c24-84d6-4d58cebd5ee4" (UID: "50d0e9ad-30dc-4c24-84d6-4d58cebd5ee4"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:45:03 crc kubenswrapper[4731]: I1129 07:45:03.178599 4731 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/50d0e9ad-30dc-4c24-84d6-4d58cebd5ee4-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 29 07:45:03 crc kubenswrapper[4731]: I1129 07:45:03.178646 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mzzz9\" (UniqueName: \"kubernetes.io/projected/50d0e9ad-30dc-4c24-84d6-4d58cebd5ee4-kube-api-access-mzzz9\") on node \"crc\" DevicePath \"\"" Nov 29 07:45:03 crc kubenswrapper[4731]: I1129 07:45:03.783180 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406705-rj5nd" event={"ID":"50d0e9ad-30dc-4c24-84d6-4d58cebd5ee4","Type":"ContainerDied","Data":"0559d4461469f61a644fa97818c148d97e5b3f8f4029a20905d44011eb412627"} Nov 29 07:45:03 crc kubenswrapper[4731]: I1129 07:45:03.783814 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0559d4461469f61a644fa97818c148d97e5b3f8f4029a20905d44011eb412627" Nov 29 07:45:03 crc kubenswrapper[4731]: I1129 07:45:03.783917 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406705-rj5nd"
Nov 29 07:45:04 crc kubenswrapper[4731]: I1129 07:45:04.141329 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406660-2pc6s"]
Nov 29 07:45:04 crc kubenswrapper[4731]: I1129 07:45:04.153409 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406660-2pc6s"]
Nov 29 07:45:05 crc kubenswrapper[4731]: I1129 07:45:05.819292 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c47b7935-c3e7-4f98-b361-87ee3b481c3d" path="/var/lib/kubelet/pods/c47b7935-c3e7-4f98-b361-87ee3b481c3d/volumes"
Nov 29 07:45:09 crc kubenswrapper[4731]: I1129 07:45:09.808639 4731 scope.go:117] "RemoveContainer" containerID="c07780461e13a10787c87bd71e6695815f5734928eb2b4fd862416cf85f74e8b"
Nov 29 07:45:09 crc kubenswrapper[4731]: E1129 07:45:09.810858 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3"
Nov 29 07:45:20 crc kubenswrapper[4731]: I1129 07:45:20.900608 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wxmwd"]
Nov 29 07:45:20 crc kubenswrapper[4731]: E1129 07:45:20.902065 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50d0e9ad-30dc-4c24-84d6-4d58cebd5ee4" containerName="collect-profiles"
Nov 29 07:45:20 crc kubenswrapper[4731]: I1129 07:45:20.902085 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="50d0e9ad-30dc-4c24-84d6-4d58cebd5ee4" containerName="collect-profiles"
Nov 29 07:45:20 crc kubenswrapper[4731]: I1129 07:45:20.902436 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="50d0e9ad-30dc-4c24-84d6-4d58cebd5ee4" containerName="collect-profiles"
Nov 29 07:45:20 crc kubenswrapper[4731]: I1129 07:45:20.904342 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wxmwd"
Nov 29 07:45:20 crc kubenswrapper[4731]: I1129 07:45:20.921888 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wxmwd"]
Nov 29 07:45:21 crc kubenswrapper[4731]: I1129 07:45:21.027294 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5ndt\" (UniqueName: \"kubernetes.io/projected/866d1cf2-4beb-4856-9c64-4f90bd7e0b74-kube-api-access-x5ndt\") pod \"community-operators-wxmwd\" (UID: \"866d1cf2-4beb-4856-9c64-4f90bd7e0b74\") " pod="openshift-marketplace/community-operators-wxmwd"
Nov 29 07:45:21 crc kubenswrapper[4731]: I1129 07:45:21.027413 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/866d1cf2-4beb-4856-9c64-4f90bd7e0b74-catalog-content\") pod \"community-operators-wxmwd\" (UID: \"866d1cf2-4beb-4856-9c64-4f90bd7e0b74\") " pod="openshift-marketplace/community-operators-wxmwd"
Nov 29 07:45:21 crc kubenswrapper[4731]: I1129 07:45:21.027586 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/866d1cf2-4beb-4856-9c64-4f90bd7e0b74-utilities\") pod \"community-operators-wxmwd\" (UID: \"866d1cf2-4beb-4856-9c64-4f90bd7e0b74\") " pod="openshift-marketplace/community-operators-wxmwd"
Nov 29 07:45:21 crc kubenswrapper[4731]: I1129 07:45:21.130041 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/866d1cf2-4beb-4856-9c64-4f90bd7e0b74-utilities\") pod \"community-operators-wxmwd\" (UID: \"866d1cf2-4beb-4856-9c64-4f90bd7e0b74\") " pod="openshift-marketplace/community-operators-wxmwd"
Nov 29 07:45:21 crc kubenswrapper[4731]: I1129 07:45:21.130190 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5ndt\" (UniqueName: \"kubernetes.io/projected/866d1cf2-4beb-4856-9c64-4f90bd7e0b74-kube-api-access-x5ndt\") pod \"community-operators-wxmwd\" (UID: \"866d1cf2-4beb-4856-9c64-4f90bd7e0b74\") " pod="openshift-marketplace/community-operators-wxmwd"
Nov 29 07:45:21 crc kubenswrapper[4731]: I1129 07:45:21.130764 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/866d1cf2-4beb-4856-9c64-4f90bd7e0b74-catalog-content\") pod \"community-operators-wxmwd\" (UID: \"866d1cf2-4beb-4856-9c64-4f90bd7e0b74\") " pod="openshift-marketplace/community-operators-wxmwd"
Nov 29 07:45:21 crc kubenswrapper[4731]: I1129 07:45:21.130938 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/866d1cf2-4beb-4856-9c64-4f90bd7e0b74-utilities\") pod \"community-operators-wxmwd\" (UID: \"866d1cf2-4beb-4856-9c64-4f90bd7e0b74\") " pod="openshift-marketplace/community-operators-wxmwd"
Nov 29 07:45:21 crc kubenswrapper[4731]: I1129 07:45:21.131531 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/866d1cf2-4beb-4856-9c64-4f90bd7e0b74-catalog-content\") pod \"community-operators-wxmwd\" (UID: \"866d1cf2-4beb-4856-9c64-4f90bd7e0b74\") " pod="openshift-marketplace/community-operators-wxmwd"
Nov 29 07:45:21 crc kubenswrapper[4731]: I1129 07:45:21.155916 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5ndt\" (UniqueName: \"kubernetes.io/projected/866d1cf2-4beb-4856-9c64-4f90bd7e0b74-kube-api-access-x5ndt\") pod \"community-operators-wxmwd\" (UID: \"866d1cf2-4beb-4856-9c64-4f90bd7e0b74\") " pod="openshift-marketplace/community-operators-wxmwd"
Nov 29 07:45:21 crc kubenswrapper[4731]: I1129 07:45:21.241974 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wxmwd"
Nov 29 07:45:21 crc kubenswrapper[4731]: I1129 07:45:21.324698 4731 scope.go:117] "RemoveContainer" containerID="a4af8ae7a2f8e44ed74f30897484aae9bfb6907076b4bebcc5abca4e110996ce"
Nov 29 07:45:21 crc kubenswrapper[4731]: I1129 07:45:21.828536 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wxmwd"]
Nov 29 07:45:21 crc kubenswrapper[4731]: I1129 07:45:21.969712 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wxmwd" event={"ID":"866d1cf2-4beb-4856-9c64-4f90bd7e0b74","Type":"ContainerStarted","Data":"938e042a23d9f63c7458eeb8b73387ee4121214d61d35a06c80ec5e1f9c96dee"}
Nov 29 07:45:22 crc kubenswrapper[4731]: I1129 07:45:22.983734 4731 generic.go:334] "Generic (PLEG): container finished" podID="866d1cf2-4beb-4856-9c64-4f90bd7e0b74" containerID="e38e8a5918fc86cdd50b1faf5dc757f68700ab67833e80a34a9ca43d41ee0b4f" exitCode=0
Nov 29 07:45:22 crc kubenswrapper[4731]: I1129 07:45:22.983830 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wxmwd" event={"ID":"866d1cf2-4beb-4856-9c64-4f90bd7e0b74","Type":"ContainerDied","Data":"e38e8a5918fc86cdd50b1faf5dc757f68700ab67833e80a34a9ca43d41ee0b4f"}
Nov 29 07:45:23 crc kubenswrapper[4731]: I1129 07:45:23.813077 4731 scope.go:117] "RemoveContainer" containerID="c07780461e13a10787c87bd71e6695815f5734928eb2b4fd862416cf85f74e8b"
Nov 29 07:45:23 crc kubenswrapper[4731]: E1129 07:45:23.813816 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3"
Nov 29 07:45:25 crc kubenswrapper[4731]: I1129 07:45:25.007620 4731 generic.go:334] "Generic (PLEG): container finished" podID="866d1cf2-4beb-4856-9c64-4f90bd7e0b74" containerID="9590d864636c61f07d45f0868813ae82304d987e3a58df29533031cb371490d7" exitCode=0
Nov 29 07:45:25 crc kubenswrapper[4731]: I1129 07:45:25.007723 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wxmwd" event={"ID":"866d1cf2-4beb-4856-9c64-4f90bd7e0b74","Type":"ContainerDied","Data":"9590d864636c61f07d45f0868813ae82304d987e3a58df29533031cb371490d7"}
Nov 29 07:45:26 crc kubenswrapper[4731]: I1129 07:45:26.022239 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wxmwd" event={"ID":"866d1cf2-4beb-4856-9c64-4f90bd7e0b74","Type":"ContainerStarted","Data":"860e9c1000fb2a4ed4c67f55157b5a5521f88058f7f1e4108e62b2ca9bee7d9e"}
Nov 29 07:45:26 crc kubenswrapper[4731]: I1129 07:45:26.056209 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wxmwd" podStartSLOduration=3.553238328 podStartE2EDuration="6.05618244s" podCreationTimestamp="2025-11-29 07:45:20 +0000 UTC" firstStartedPulling="2025-11-29 07:45:22.987890116 +0000 UTC m=+2361.878251219" lastFinishedPulling="2025-11-29 07:45:25.490834228 +0000 UTC m=+2364.381195331" observedRunningTime="2025-11-29 07:45:26.052304139 +0000 UTC m=+2364.942665262" watchObservedRunningTime="2025-11-29 07:45:26.05618244 +0000 UTC m=+2364.946543543"
Nov 29 07:45:31 crc kubenswrapper[4731]: I1129 07:45:31.242191 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wxmwd"
Nov 29 07:45:31 crc kubenswrapper[4731]: I1129 07:45:31.242931 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wxmwd"
Nov 29 07:45:31 crc kubenswrapper[4731]: I1129 07:45:31.318965 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wxmwd"
Nov 29 07:45:32 crc kubenswrapper[4731]: I1129 07:45:32.143671 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wxmwd"
Nov 29 07:45:33 crc kubenswrapper[4731]: I1129 07:45:33.906381 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-48rq6"]
Nov 29 07:45:33 crc kubenswrapper[4731]: I1129 07:45:33.909729 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-48rq6"
Nov 29 07:45:33 crc kubenswrapper[4731]: I1129 07:45:33.935235 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-48rq6"]
Nov 29 07:45:34 crc kubenswrapper[4731]: I1129 07:45:34.062832 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0bcdf77-2d22-473c-823e-e16329f3f326-utilities\") pod \"redhat-marketplace-48rq6\" (UID: \"a0bcdf77-2d22-473c-823e-e16329f3f326\") " pod="openshift-marketplace/redhat-marketplace-48rq6"
Nov 29 07:45:34 crc kubenswrapper[4731]: I1129 07:45:34.062918 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0bcdf77-2d22-473c-823e-e16329f3f326-catalog-content\") pod \"redhat-marketplace-48rq6\" (UID: \"a0bcdf77-2d22-473c-823e-e16329f3f326\") " pod="openshift-marketplace/redhat-marketplace-48rq6"
Nov 29 07:45:34 crc kubenswrapper[4731]: I1129 07:45:34.062998 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmgqk\" (UniqueName: \"kubernetes.io/projected/a0bcdf77-2d22-473c-823e-e16329f3f326-kube-api-access-hmgqk\") pod \"redhat-marketplace-48rq6\" (UID: \"a0bcdf77-2d22-473c-823e-e16329f3f326\") " pod="openshift-marketplace/redhat-marketplace-48rq6"
Nov 29 07:45:34 crc kubenswrapper[4731]: I1129 07:45:34.165137 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0bcdf77-2d22-473c-823e-e16329f3f326-utilities\") pod \"redhat-marketplace-48rq6\" (UID: \"a0bcdf77-2d22-473c-823e-e16329f3f326\") " pod="openshift-marketplace/redhat-marketplace-48rq6"
Nov 29 07:45:34 crc kubenswrapper[4731]: I1129 07:45:34.165198 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0bcdf77-2d22-473c-823e-e16329f3f326-catalog-content\") pod \"redhat-marketplace-48rq6\" (UID: \"a0bcdf77-2d22-473c-823e-e16329f3f326\") " pod="openshift-marketplace/redhat-marketplace-48rq6"
Nov 29 07:45:34 crc kubenswrapper[4731]: I1129 07:45:34.165239 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmgqk\" (UniqueName: \"kubernetes.io/projected/a0bcdf77-2d22-473c-823e-e16329f3f326-kube-api-access-hmgqk\") pod \"redhat-marketplace-48rq6\" (UID: \"a0bcdf77-2d22-473c-823e-e16329f3f326\") " pod="openshift-marketplace/redhat-marketplace-48rq6"
Nov 29 07:45:34 crc kubenswrapper[4731]: I1129 07:45:34.166029 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0bcdf77-2d22-473c-823e-e16329f3f326-utilities\") pod \"redhat-marketplace-48rq6\" (UID: \"a0bcdf77-2d22-473c-823e-e16329f3f326\") " pod="openshift-marketplace/redhat-marketplace-48rq6"
Nov 29 07:45:34 crc kubenswrapper[4731]: I1129 07:45:34.166282 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0bcdf77-2d22-473c-823e-e16329f3f326-catalog-content\") pod \"redhat-marketplace-48rq6\" (UID: \"a0bcdf77-2d22-473c-823e-e16329f3f326\") " pod="openshift-marketplace/redhat-marketplace-48rq6"
Nov 29 07:45:34 crc kubenswrapper[4731]: I1129 07:45:34.197549 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmgqk\" (UniqueName: \"kubernetes.io/projected/a0bcdf77-2d22-473c-823e-e16329f3f326-kube-api-access-hmgqk\") pod \"redhat-marketplace-48rq6\" (UID: \"a0bcdf77-2d22-473c-823e-e16329f3f326\") " pod="openshift-marketplace/redhat-marketplace-48rq6"
Nov 29 07:45:34 crc kubenswrapper[4731]: I1129 07:45:34.242179 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-48rq6"
Nov 29 07:45:34 crc kubenswrapper[4731]: I1129 07:45:34.737679 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-48rq6"]
Nov 29 07:45:34 crc kubenswrapper[4731]: W1129 07:45:34.745577 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda0bcdf77_2d22_473c_823e_e16329f3f326.slice/crio-1b6db822a5996fbf08cfa1e295718ad97a874759e6e21d4076f360b93c7a8fd9 WatchSource:0}: Error finding container 1b6db822a5996fbf08cfa1e295718ad97a874759e6e21d4076f360b93c7a8fd9: Status 404 returned error can't find the container with id 1b6db822a5996fbf08cfa1e295718ad97a874759e6e21d4076f360b93c7a8fd9
Nov 29 07:45:34 crc kubenswrapper[4731]: I1129 07:45:34.807857 4731 scope.go:117] "RemoveContainer" containerID="c07780461e13a10787c87bd71e6695815f5734928eb2b4fd862416cf85f74e8b"
Nov 29 07:45:34 crc kubenswrapper[4731]: E1129 07:45:34.808593 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3"
Nov 29 07:45:35 crc kubenswrapper[4731]: I1129 07:45:35.116811 4731 generic.go:334] "Generic (PLEG): container finished" podID="a0bcdf77-2d22-473c-823e-e16329f3f326" containerID="b37bb08a1b5e55f098aea20fc3e2617f2937a02c92f167480e162d95f6ed60fc" exitCode=0
Nov 29 07:45:35 crc kubenswrapper[4731]: I1129 07:45:35.116906 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-48rq6" event={"ID":"a0bcdf77-2d22-473c-823e-e16329f3f326","Type":"ContainerDied","Data":"b37bb08a1b5e55f098aea20fc3e2617f2937a02c92f167480e162d95f6ed60fc"}
Nov 29 07:45:35 crc kubenswrapper[4731]: I1129 07:45:35.116971 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-48rq6" event={"ID":"a0bcdf77-2d22-473c-823e-e16329f3f326","Type":"ContainerStarted","Data":"1b6db822a5996fbf08cfa1e295718ad97a874759e6e21d4076f360b93c7a8fd9"}
Nov 29 07:45:36 crc kubenswrapper[4731]: I1129 07:45:36.296876 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wxmwd"]
Nov 29 07:45:36 crc kubenswrapper[4731]: I1129 07:45:36.297403 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wxmwd" podUID="866d1cf2-4beb-4856-9c64-4f90bd7e0b74" containerName="registry-server" containerID="cri-o://860e9c1000fb2a4ed4c67f55157b5a5521f88058f7f1e4108e62b2ca9bee7d9e" gracePeriod=2
Nov 29 07:45:36 crc kubenswrapper[4731]: I1129 07:45:36.809826 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wxmwd"
Nov 29 07:45:36 crc kubenswrapper[4731]: I1129 07:45:36.933395 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x5ndt\" (UniqueName: \"kubernetes.io/projected/866d1cf2-4beb-4856-9c64-4f90bd7e0b74-kube-api-access-x5ndt\") pod \"866d1cf2-4beb-4856-9c64-4f90bd7e0b74\" (UID: \"866d1cf2-4beb-4856-9c64-4f90bd7e0b74\") "
Nov 29 07:45:36 crc kubenswrapper[4731]: I1129 07:45:36.933543 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/866d1cf2-4beb-4856-9c64-4f90bd7e0b74-utilities\") pod \"866d1cf2-4beb-4856-9c64-4f90bd7e0b74\" (UID: \"866d1cf2-4beb-4856-9c64-4f90bd7e0b74\") "
Nov 29 07:45:36 crc kubenswrapper[4731]: I1129 07:45:36.933724 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/866d1cf2-4beb-4856-9c64-4f90bd7e0b74-catalog-content\") pod \"866d1cf2-4beb-4856-9c64-4f90bd7e0b74\" (UID: \"866d1cf2-4beb-4856-9c64-4f90bd7e0b74\") "
Nov 29 07:45:36 crc kubenswrapper[4731]: I1129 07:45:36.934507 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/866d1cf2-4beb-4856-9c64-4f90bd7e0b74-utilities" (OuterVolumeSpecName: "utilities") pod "866d1cf2-4beb-4856-9c64-4f90bd7e0b74" (UID: "866d1cf2-4beb-4856-9c64-4f90bd7e0b74"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 29 07:45:36 crc kubenswrapper[4731]: I1129 07:45:36.941424 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/866d1cf2-4beb-4856-9c64-4f90bd7e0b74-kube-api-access-x5ndt" (OuterVolumeSpecName: "kube-api-access-x5ndt") pod "866d1cf2-4beb-4856-9c64-4f90bd7e0b74" (UID: "866d1cf2-4beb-4856-9c64-4f90bd7e0b74"). InnerVolumeSpecName "kube-api-access-x5ndt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 29 07:45:37 crc kubenswrapper[4731]: I1129 07:45:37.002111 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/866d1cf2-4beb-4856-9c64-4f90bd7e0b74-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "866d1cf2-4beb-4856-9c64-4f90bd7e0b74" (UID: "866d1cf2-4beb-4856-9c64-4f90bd7e0b74"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 29 07:45:37 crc kubenswrapper[4731]: I1129 07:45:37.037206 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x5ndt\" (UniqueName: \"kubernetes.io/projected/866d1cf2-4beb-4856-9c64-4f90bd7e0b74-kube-api-access-x5ndt\") on node \"crc\" DevicePath \"\""
Nov 29 07:45:37 crc kubenswrapper[4731]: I1129 07:45:37.037250 4731 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/866d1cf2-4beb-4856-9c64-4f90bd7e0b74-utilities\") on node \"crc\" DevicePath \"\""
Nov 29 07:45:37 crc kubenswrapper[4731]: I1129 07:45:37.037261 4731 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/866d1cf2-4beb-4856-9c64-4f90bd7e0b74-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 29 07:45:37 crc kubenswrapper[4731]: I1129 07:45:37.147630 4731 generic.go:334] "Generic (PLEG): container finished" podID="a0bcdf77-2d22-473c-823e-e16329f3f326" containerID="c5760a52b09e44fed689da7ca2b38f587d72264e051037efe4193cf89b03d47b" exitCode=0
Nov 29 07:45:37 crc kubenswrapper[4731]: I1129 07:45:37.147734 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-48rq6" event={"ID":"a0bcdf77-2d22-473c-823e-e16329f3f326","Type":"ContainerDied","Data":"c5760a52b09e44fed689da7ca2b38f587d72264e051037efe4193cf89b03d47b"}
Nov 29 07:45:37 crc kubenswrapper[4731]: I1129 07:45:37.152046 4731 generic.go:334] "Generic (PLEG): container finished" podID="866d1cf2-4beb-4856-9c64-4f90bd7e0b74" containerID="860e9c1000fb2a4ed4c67f55157b5a5521f88058f7f1e4108e62b2ca9bee7d9e" exitCode=0
Nov 29 07:45:37 crc kubenswrapper[4731]: I1129 07:45:37.152151 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wxmwd" event={"ID":"866d1cf2-4beb-4856-9c64-4f90bd7e0b74","Type":"ContainerDied","Data":"860e9c1000fb2a4ed4c67f55157b5a5521f88058f7f1e4108e62b2ca9bee7d9e"}
Nov 29 07:45:37 crc kubenswrapper[4731]: I1129 07:45:37.152253 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wxmwd" event={"ID":"866d1cf2-4beb-4856-9c64-4f90bd7e0b74","Type":"ContainerDied","Data":"938e042a23d9f63c7458eeb8b73387ee4121214d61d35a06c80ec5e1f9c96dee"}
Nov 29 07:45:37 crc kubenswrapper[4731]: I1129 07:45:37.152289 4731 scope.go:117] "RemoveContainer" containerID="860e9c1000fb2a4ed4c67f55157b5a5521f88058f7f1e4108e62b2ca9bee7d9e"
Nov 29 07:45:37 crc kubenswrapper[4731]: I1129 07:45:37.152433 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wxmwd"
Nov 29 07:45:37 crc kubenswrapper[4731]: I1129 07:45:37.195875 4731 scope.go:117] "RemoveContainer" containerID="9590d864636c61f07d45f0868813ae82304d987e3a58df29533031cb371490d7"
Nov 29 07:45:37 crc kubenswrapper[4731]: I1129 07:45:37.211067 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wxmwd"]
Nov 29 07:45:37 crc kubenswrapper[4731]: I1129 07:45:37.219055 4731 scope.go:117] "RemoveContainer" containerID="e38e8a5918fc86cdd50b1faf5dc757f68700ab67833e80a34a9ca43d41ee0b4f"
Nov 29 07:45:37 crc kubenswrapper[4731]: I1129 07:45:37.222334 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wxmwd"]
Nov 29 07:45:37 crc kubenswrapper[4731]: I1129 07:45:37.278857 4731 scope.go:117] "RemoveContainer" containerID="860e9c1000fb2a4ed4c67f55157b5a5521f88058f7f1e4108e62b2ca9bee7d9e"
Nov 29 07:45:37 crc kubenswrapper[4731]: E1129 07:45:37.279470 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"860e9c1000fb2a4ed4c67f55157b5a5521f88058f7f1e4108e62b2ca9bee7d9e\": container with ID starting with 860e9c1000fb2a4ed4c67f55157b5a5521f88058f7f1e4108e62b2ca9bee7d9e not found: ID does not exist" containerID="860e9c1000fb2a4ed4c67f55157b5a5521f88058f7f1e4108e62b2ca9bee7d9e"
Nov 29 07:45:37 crc kubenswrapper[4731]: I1129 07:45:37.279519 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"860e9c1000fb2a4ed4c67f55157b5a5521f88058f7f1e4108e62b2ca9bee7d9e"} err="failed to get container status \"860e9c1000fb2a4ed4c67f55157b5a5521f88058f7f1e4108e62b2ca9bee7d9e\": rpc error: code = NotFound desc = could not find container \"860e9c1000fb2a4ed4c67f55157b5a5521f88058f7f1e4108e62b2ca9bee7d9e\": container with ID starting with 860e9c1000fb2a4ed4c67f55157b5a5521f88058f7f1e4108e62b2ca9bee7d9e not found: ID does not exist"
Nov 29 07:45:37 crc kubenswrapper[4731]: I1129 07:45:37.279549 4731 scope.go:117] "RemoveContainer" containerID="9590d864636c61f07d45f0868813ae82304d987e3a58df29533031cb371490d7"
Nov 29 07:45:37 crc kubenswrapper[4731]: E1129 07:45:37.280051 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9590d864636c61f07d45f0868813ae82304d987e3a58df29533031cb371490d7\": container with ID starting with 9590d864636c61f07d45f0868813ae82304d987e3a58df29533031cb371490d7 not found: ID does not exist" containerID="9590d864636c61f07d45f0868813ae82304d987e3a58df29533031cb371490d7"
Nov 29 07:45:37 crc kubenswrapper[4731]: I1129 07:45:37.280113 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9590d864636c61f07d45f0868813ae82304d987e3a58df29533031cb371490d7"} err="failed to get container status \"9590d864636c61f07d45f0868813ae82304d987e3a58df29533031cb371490d7\": rpc error: code = NotFound desc = could not find container \"9590d864636c61f07d45f0868813ae82304d987e3a58df29533031cb371490d7\": container with ID starting with 9590d864636c61f07d45f0868813ae82304d987e3a58df29533031cb371490d7 not found: ID does not exist"
Nov 29 07:45:37 crc kubenswrapper[4731]: I1129 07:45:37.280158 4731 scope.go:117] "RemoveContainer" containerID="e38e8a5918fc86cdd50b1faf5dc757f68700ab67833e80a34a9ca43d41ee0b4f"
Nov 29 07:45:37 crc kubenswrapper[4731]: E1129 07:45:37.280556 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e38e8a5918fc86cdd50b1faf5dc757f68700ab67833e80a34a9ca43d41ee0b4f\": container with ID starting with e38e8a5918fc86cdd50b1faf5dc757f68700ab67833e80a34a9ca43d41ee0b4f not found: ID does not exist" containerID="e38e8a5918fc86cdd50b1faf5dc757f68700ab67833e80a34a9ca43d41ee0b4f"
Nov 29 07:45:37 crc kubenswrapper[4731]: I1129 07:45:37.280611 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e38e8a5918fc86cdd50b1faf5dc757f68700ab67833e80a34a9ca43d41ee0b4f"} err="failed to get container status \"e38e8a5918fc86cdd50b1faf5dc757f68700ab67833e80a34a9ca43d41ee0b4f\": rpc error: code = NotFound desc = could not find container \"e38e8a5918fc86cdd50b1faf5dc757f68700ab67833e80a34a9ca43d41ee0b4f\": container with ID starting with e38e8a5918fc86cdd50b1faf5dc757f68700ab67833e80a34a9ca43d41ee0b4f not found: ID does not exist"
Nov 29 07:45:37 crc kubenswrapper[4731]: I1129 07:45:37.817470 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="866d1cf2-4beb-4856-9c64-4f90bd7e0b74" path="/var/lib/kubelet/pods/866d1cf2-4beb-4856-9c64-4f90bd7e0b74/volumes"
Nov 29 07:45:38 crc kubenswrapper[4731]: I1129 07:45:38.168865 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-48rq6" event={"ID":"a0bcdf77-2d22-473c-823e-e16329f3f326","Type":"ContainerStarted","Data":"5e7d4c3e6a6d07bb837309b9136669ed37ea51ac6619d762c9a0fe9fae8dcc39"}
Nov 29 07:45:38 crc kubenswrapper[4731]: I1129 07:45:38.212510 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-48rq6" podStartSLOduration=2.751353926 podStartE2EDuration="5.212474402s" podCreationTimestamp="2025-11-29 07:45:33 +0000 UTC" firstStartedPulling="2025-11-29 07:45:35.118626005 +0000 UTC m=+2374.008987108" lastFinishedPulling="2025-11-29 07:45:37.579746481 +0000 UTC m=+2376.470107584" observedRunningTime="2025-11-29 07:45:38.189956858 +0000 UTC m=+2377.080317961" watchObservedRunningTime="2025-11-29 07:45:38.212474402 +0000 UTC m=+2377.102835515"
Nov 29 07:45:44 crc kubenswrapper[4731]: I1129 07:45:44.242325 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-48rq6"
Nov 29 07:45:44 crc kubenswrapper[4731]: I1129 07:45:44.242905 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-48rq6"
Nov 29 07:45:44 crc kubenswrapper[4731]: I1129 07:45:44.293513 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-48rq6"
Nov 29 07:45:45 crc kubenswrapper[4731]: I1129 07:45:45.284164 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-48rq6"
Nov 29 07:45:45 crc kubenswrapper[4731]: I1129 07:45:45.344353 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-48rq6"]
Nov 29 07:45:46 crc kubenswrapper[4731]: I1129 07:45:46.807187 4731 scope.go:117] "RemoveContainer" containerID="c07780461e13a10787c87bd71e6695815f5734928eb2b4fd862416cf85f74e8b"
Nov 29 07:45:46 crc kubenswrapper[4731]: E1129 07:45:46.807771 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3"
Nov 29 07:45:47 crc kubenswrapper[4731]: I1129 07:45:47.254307 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-48rq6" podUID="a0bcdf77-2d22-473c-823e-e16329f3f326" containerName="registry-server" containerID="cri-o://5e7d4c3e6a6d07bb837309b9136669ed37ea51ac6619d762c9a0fe9fae8dcc39" gracePeriod=2
Nov 29 07:45:48 crc kubenswrapper[4731]: I1129 07:45:48.270770 4731 generic.go:334] "Generic (PLEG): container finished" podID="a0bcdf77-2d22-473c-823e-e16329f3f326" containerID="5e7d4c3e6a6d07bb837309b9136669ed37ea51ac6619d762c9a0fe9fae8dcc39" exitCode=0
Nov 29 07:45:48 crc kubenswrapper[4731]: I1129 07:45:48.270855 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-48rq6" event={"ID":"a0bcdf77-2d22-473c-823e-e16329f3f326","Type":"ContainerDied","Data":"5e7d4c3e6a6d07bb837309b9136669ed37ea51ac6619d762c9a0fe9fae8dcc39"}
Nov 29 07:45:48 crc kubenswrapper[4731]: I1129 07:45:48.271227 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-48rq6" event={"ID":"a0bcdf77-2d22-473c-823e-e16329f3f326","Type":"ContainerDied","Data":"1b6db822a5996fbf08cfa1e295718ad97a874759e6e21d4076f360b93c7a8fd9"}
Nov 29 07:45:48 crc kubenswrapper[4731]: I1129 07:45:48.271248 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b6db822a5996fbf08cfa1e295718ad97a874759e6e21d4076f360b93c7a8fd9"
Nov 29 07:45:48 crc kubenswrapper[4731]: I1129 07:45:48.286916 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-48rq6"
Nov 29 07:45:48 crc kubenswrapper[4731]: I1129 07:45:48.392561 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hmgqk\" (UniqueName: \"kubernetes.io/projected/a0bcdf77-2d22-473c-823e-e16329f3f326-kube-api-access-hmgqk\") pod \"a0bcdf77-2d22-473c-823e-e16329f3f326\" (UID: \"a0bcdf77-2d22-473c-823e-e16329f3f326\") "
Nov 29 07:45:48 crc kubenswrapper[4731]: I1129 07:45:48.392857 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0bcdf77-2d22-473c-823e-e16329f3f326-catalog-content\") pod \"a0bcdf77-2d22-473c-823e-e16329f3f326\" (UID: \"a0bcdf77-2d22-473c-823e-e16329f3f326\") "
Nov 29 07:45:48 crc kubenswrapper[4731]: I1129 07:45:48.393006 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0bcdf77-2d22-473c-823e-e16329f3f326-utilities\") pod \"a0bcdf77-2d22-473c-823e-e16329f3f326\" (UID: \"a0bcdf77-2d22-473c-823e-e16329f3f326\") "
Nov 29 07:45:48 crc kubenswrapper[4731]: I1129 07:45:48.394881 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0bcdf77-2d22-473c-823e-e16329f3f326-utilities" (OuterVolumeSpecName: "utilities") pod "a0bcdf77-2d22-473c-823e-e16329f3f326" (UID: "a0bcdf77-2d22-473c-823e-e16329f3f326"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 29 07:45:48 crc kubenswrapper[4731]: I1129 07:45:48.405114 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0bcdf77-2d22-473c-823e-e16329f3f326-kube-api-access-hmgqk" (OuterVolumeSpecName: "kube-api-access-hmgqk") pod "a0bcdf77-2d22-473c-823e-e16329f3f326" (UID: "a0bcdf77-2d22-473c-823e-e16329f3f326"). InnerVolumeSpecName "kube-api-access-hmgqk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 29 07:45:48 crc kubenswrapper[4731]: I1129 07:45:48.418959 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0bcdf77-2d22-473c-823e-e16329f3f326-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a0bcdf77-2d22-473c-823e-e16329f3f326" (UID: "a0bcdf77-2d22-473c-823e-e16329f3f326"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 29 07:45:48 crc kubenswrapper[4731]: I1129 07:45:48.496321 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hmgqk\" (UniqueName: \"kubernetes.io/projected/a0bcdf77-2d22-473c-823e-e16329f3f326-kube-api-access-hmgqk\") on node \"crc\" DevicePath \"\""
Nov 29 07:45:48 crc kubenswrapper[4731]: I1129 07:45:48.496362 4731 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0bcdf77-2d22-473c-823e-e16329f3f326-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 29 07:45:48 crc kubenswrapper[4731]: I1129 07:45:48.496373 4731 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0bcdf77-2d22-473c-823e-e16329f3f326-utilities\") on node \"crc\" DevicePath \"\""
Nov 29 07:45:49 crc kubenswrapper[4731]: I1129 07:45:49.294067 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-48rq6"
Nov 29 07:45:49 crc kubenswrapper[4731]: I1129 07:45:49.337184 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-48rq6"]
Nov 29 07:45:49 crc kubenswrapper[4731]: I1129 07:45:49.345399 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-48rq6"]
Nov 29 07:45:49 crc kubenswrapper[4731]: I1129 07:45:49.830868 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0bcdf77-2d22-473c-823e-e16329f3f326" path="/var/lib/kubelet/pods/a0bcdf77-2d22-473c-823e-e16329f3f326/volumes"
Nov 29 07:45:58 crc kubenswrapper[4731]: I1129 07:45:58.808231 4731 scope.go:117] "RemoveContainer" containerID="c07780461e13a10787c87bd71e6695815f5734928eb2b4fd862416cf85f74e8b"
Nov 29 07:45:58 crc kubenswrapper[4731]: E1129 07:45:58.808874 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3"
Nov 29 07:46:13 crc kubenswrapper[4731]: I1129 07:46:13.808077 4731 scope.go:117] "RemoveContainer" containerID="c07780461e13a10787c87bd71e6695815f5734928eb2b4fd862416cf85f74e8b"
Nov 29 07:46:13 crc kubenswrapper[4731]: E1129 07:46:13.809056 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3"
Nov 29 07:46:28 crc kubenswrapper[4731]: I1129 07:46:28.807513 4731 scope.go:117] "RemoveContainer" containerID="c07780461e13a10787c87bd71e6695815f5734928eb2b4fd862416cf85f74e8b"
Nov 29 07:46:28 crc kubenswrapper[4731]: E1129 07:46:28.809467 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3"
Nov 29 07:46:40 crc kubenswrapper[4731]: I1129 07:46:40.806625 4731 scope.go:117] "RemoveContainer" containerID="c07780461e13a10787c87bd71e6695815f5734928eb2b4fd862416cf85f74e8b"
Nov 29 07:46:40 crc kubenswrapper[4731]: E1129 07:46:40.807311 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3"
Nov 29 07:46:52 crc kubenswrapper[4731]: I1129 07:46:52.807372 4731 scope.go:117] "RemoveContainer" containerID="c07780461e13a10787c87bd71e6695815f5734928eb2b4fd862416cf85f74e8b"
Nov 29 07:46:52 crc kubenswrapper[4731]: E1129 07:46:52.808586 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3"
Nov 29 07:47:04 crc kubenswrapper[4731]: I1129 07:47:04.807969 4731 scope.go:117] "RemoveContainer" containerID="c07780461e13a10787c87bd71e6695815f5734928eb2b4fd862416cf85f74e8b"
Nov 29 07:47:04 crc kubenswrapper[4731]: E1129 07:47:04.808989 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3"
Nov 29 07:47:15 crc kubenswrapper[4731]: I1129 07:47:15.806836 4731 scope.go:117] "RemoveContainer" containerID="c07780461e13a10787c87bd71e6695815f5734928eb2b4fd862416cf85f74e8b"
Nov 29 07:47:15 crc kubenswrapper[4731]: E1129 07:47:15.807894 4731 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" Nov 29 07:47:30 crc kubenswrapper[4731]: I1129 07:47:30.807273 4731 scope.go:117] "RemoveContainer" containerID="c07780461e13a10787c87bd71e6695815f5734928eb2b4fd862416cf85f74e8b" Nov 29 07:47:30 crc kubenswrapper[4731]: E1129 07:47:30.808597 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" Nov 29 07:47:42 crc kubenswrapper[4731]: I1129 07:47:42.807628 4731 scope.go:117] "RemoveContainer" containerID="c07780461e13a10787c87bd71e6695815f5734928eb2b4fd862416cf85f74e8b" Nov 29 07:47:42 crc kubenswrapper[4731]: E1129 07:47:42.808377 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" Nov 29 07:47:57 crc kubenswrapper[4731]: I1129 07:47:57.808021 4731 scope.go:117] "RemoveContainer" containerID="c07780461e13a10787c87bd71e6695815f5734928eb2b4fd862416cf85f74e8b" Nov 29 07:47:57 crc kubenswrapper[4731]: E1129 07:47:57.808952 4731 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" Nov 29 07:48:11 crc kubenswrapper[4731]: I1129 07:48:11.807862 4731 scope.go:117] "RemoveContainer" containerID="c07780461e13a10787c87bd71e6695815f5734928eb2b4fd862416cf85f74e8b" Nov 29 07:48:12 crc kubenswrapper[4731]: I1129 07:48:12.749457 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" event={"ID":"2302dbb7-38db-4752-a5d0-2d055da3aec3","Type":"ContainerStarted","Data":"3c519b59b3163c99f3ed432a57f0193c05d933b6a8bb33a617a562a4fda90905"} Nov 29 07:48:53 crc kubenswrapper[4731]: I1129 07:48:53.203402 4731 generic.go:334] "Generic (PLEG): container finished" podID="d2581ba6-0d37-40f0-b458-e9e1d1071485" containerID="e2c809fc1f6567df45152f5e879c9024d3a163a1043e9830edbb6aa2f5b071d1" exitCode=0 Nov 29 07:48:53 crc kubenswrapper[4731]: I1129 07:48:53.203500 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-f2lhd" event={"ID":"d2581ba6-0d37-40f0-b458-e9e1d1071485","Type":"ContainerDied","Data":"e2c809fc1f6567df45152f5e879c9024d3a163a1043e9830edbb6aa2f5b071d1"} Nov 29 07:48:54 crc kubenswrapper[4731]: I1129 07:48:54.649491 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-f2lhd" Nov 29 07:48:54 crc kubenswrapper[4731]: I1129 07:48:54.695124 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2581ba6-0d37-40f0-b458-e9e1d1071485-libvirt-combined-ca-bundle\") pod \"d2581ba6-0d37-40f0-b458-e9e1d1071485\" (UID: \"d2581ba6-0d37-40f0-b458-e9e1d1071485\") " Nov 29 07:48:54 crc kubenswrapper[4731]: I1129 07:48:54.695187 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d2581ba6-0d37-40f0-b458-e9e1d1071485-inventory\") pod \"d2581ba6-0d37-40f0-b458-e9e1d1071485\" (UID: \"d2581ba6-0d37-40f0-b458-e9e1d1071485\") " Nov 29 07:48:54 crc kubenswrapper[4731]: I1129 07:48:54.695312 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d2581ba6-0d37-40f0-b458-e9e1d1071485-ssh-key\") pod \"d2581ba6-0d37-40f0-b458-e9e1d1071485\" (UID: \"d2581ba6-0d37-40f0-b458-e9e1d1071485\") " Nov 29 07:48:54 crc kubenswrapper[4731]: I1129 07:48:54.695415 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/d2581ba6-0d37-40f0-b458-e9e1d1071485-libvirt-secret-0\") pod \"d2581ba6-0d37-40f0-b458-e9e1d1071485\" (UID: \"d2581ba6-0d37-40f0-b458-e9e1d1071485\") " Nov 29 07:48:54 crc kubenswrapper[4731]: I1129 07:48:54.695469 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dn7vz\" (UniqueName: \"kubernetes.io/projected/d2581ba6-0d37-40f0-b458-e9e1d1071485-kube-api-access-dn7vz\") pod \"d2581ba6-0d37-40f0-b458-e9e1d1071485\" (UID: \"d2581ba6-0d37-40f0-b458-e9e1d1071485\") " Nov 29 07:48:54 crc kubenswrapper[4731]: I1129 07:48:54.702109 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/secret/d2581ba6-0d37-40f0-b458-e9e1d1071485-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "d2581ba6-0d37-40f0-b458-e9e1d1071485" (UID: "d2581ba6-0d37-40f0-b458-e9e1d1071485"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:48:54 crc kubenswrapper[4731]: I1129 07:48:54.707000 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2581ba6-0d37-40f0-b458-e9e1d1071485-kube-api-access-dn7vz" (OuterVolumeSpecName: "kube-api-access-dn7vz") pod "d2581ba6-0d37-40f0-b458-e9e1d1071485" (UID: "d2581ba6-0d37-40f0-b458-e9e1d1071485"). InnerVolumeSpecName "kube-api-access-dn7vz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:48:54 crc kubenswrapper[4731]: I1129 07:48:54.727774 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2581ba6-0d37-40f0-b458-e9e1d1071485-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "d2581ba6-0d37-40f0-b458-e9e1d1071485" (UID: "d2581ba6-0d37-40f0-b458-e9e1d1071485"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:48:54 crc kubenswrapper[4731]: I1129 07:48:54.730815 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2581ba6-0d37-40f0-b458-e9e1d1071485-inventory" (OuterVolumeSpecName: "inventory") pod "d2581ba6-0d37-40f0-b458-e9e1d1071485" (UID: "d2581ba6-0d37-40f0-b458-e9e1d1071485"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:48:54 crc kubenswrapper[4731]: I1129 07:48:54.732761 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2581ba6-0d37-40f0-b458-e9e1d1071485-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "d2581ba6-0d37-40f0-b458-e9e1d1071485" (UID: "d2581ba6-0d37-40f0-b458-e9e1d1071485"). 
InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:48:54 crc kubenswrapper[4731]: I1129 07:48:54.798450 4731 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d2581ba6-0d37-40f0-b458-e9e1d1071485-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 29 07:48:54 crc kubenswrapper[4731]: I1129 07:48:54.798491 4731 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/d2581ba6-0d37-40f0-b458-e9e1d1071485-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Nov 29 07:48:54 crc kubenswrapper[4731]: I1129 07:48:54.798551 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dn7vz\" (UniqueName: \"kubernetes.io/projected/d2581ba6-0d37-40f0-b458-e9e1d1071485-kube-api-access-dn7vz\") on node \"crc\" DevicePath \"\"" Nov 29 07:48:54 crc kubenswrapper[4731]: I1129 07:48:54.798578 4731 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2581ba6-0d37-40f0-b458-e9e1d1071485-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:48:54 crc kubenswrapper[4731]: I1129 07:48:54.798593 4731 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d2581ba6-0d37-40f0-b458-e9e1d1071485-inventory\") on node \"crc\" DevicePath \"\"" Nov 29 07:48:55 crc kubenswrapper[4731]: I1129 07:48:55.224050 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-f2lhd" event={"ID":"d2581ba6-0d37-40f0-b458-e9e1d1071485","Type":"ContainerDied","Data":"2d0c96d7bae98dba7910d9660851073d186ef021a6c33127650c7b19316a11eb"} Nov 29 07:48:55 crc kubenswrapper[4731]: I1129 07:48:55.224419 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2d0c96d7bae98dba7910d9660851073d186ef021a6c33127650c7b19316a11eb" Nov 29 
07:48:55 crc kubenswrapper[4731]: I1129 07:48:55.224132 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-f2lhd" Nov 29 07:48:55 crc kubenswrapper[4731]: I1129 07:48:55.336259 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-ttnn5"] Nov 29 07:48:55 crc kubenswrapper[4731]: E1129 07:48:55.337065 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0bcdf77-2d22-473c-823e-e16329f3f326" containerName="registry-server" Nov 29 07:48:55 crc kubenswrapper[4731]: I1129 07:48:55.337195 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0bcdf77-2d22-473c-823e-e16329f3f326" containerName="registry-server" Nov 29 07:48:55 crc kubenswrapper[4731]: E1129 07:48:55.337320 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0bcdf77-2d22-473c-823e-e16329f3f326" containerName="extract-content" Nov 29 07:48:55 crc kubenswrapper[4731]: I1129 07:48:55.337398 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0bcdf77-2d22-473c-823e-e16329f3f326" containerName="extract-content" Nov 29 07:48:55 crc kubenswrapper[4731]: E1129 07:48:55.337477 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2581ba6-0d37-40f0-b458-e9e1d1071485" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Nov 29 07:48:55 crc kubenswrapper[4731]: I1129 07:48:55.337593 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2581ba6-0d37-40f0-b458-e9e1d1071485" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Nov 29 07:48:55 crc kubenswrapper[4731]: E1129 07:48:55.337686 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="866d1cf2-4beb-4856-9c64-4f90bd7e0b74" containerName="extract-content" Nov 29 07:48:55 crc kubenswrapper[4731]: I1129 07:48:55.337750 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="866d1cf2-4beb-4856-9c64-4f90bd7e0b74" 
containerName="extract-content" Nov 29 07:48:55 crc kubenswrapper[4731]: E1129 07:48:55.337854 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="866d1cf2-4beb-4856-9c64-4f90bd7e0b74" containerName="registry-server" Nov 29 07:48:55 crc kubenswrapper[4731]: I1129 07:48:55.337939 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="866d1cf2-4beb-4856-9c64-4f90bd7e0b74" containerName="registry-server" Nov 29 07:48:55 crc kubenswrapper[4731]: E1129 07:48:55.338025 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0bcdf77-2d22-473c-823e-e16329f3f326" containerName="extract-utilities" Nov 29 07:48:55 crc kubenswrapper[4731]: I1129 07:48:55.338102 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0bcdf77-2d22-473c-823e-e16329f3f326" containerName="extract-utilities" Nov 29 07:48:55 crc kubenswrapper[4731]: E1129 07:48:55.338180 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="866d1cf2-4beb-4856-9c64-4f90bd7e0b74" containerName="extract-utilities" Nov 29 07:48:55 crc kubenswrapper[4731]: I1129 07:48:55.338260 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="866d1cf2-4beb-4856-9c64-4f90bd7e0b74" containerName="extract-utilities" Nov 29 07:48:55 crc kubenswrapper[4731]: I1129 07:48:55.338636 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0bcdf77-2d22-473c-823e-e16329f3f326" containerName="registry-server" Nov 29 07:48:55 crc kubenswrapper[4731]: I1129 07:48:55.338726 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="866d1cf2-4beb-4856-9c64-4f90bd7e0b74" containerName="registry-server" Nov 29 07:48:55 crc kubenswrapper[4731]: I1129 07:48:55.338803 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2581ba6-0d37-40f0-b458-e9e1d1071485" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Nov 29 07:48:55 crc kubenswrapper[4731]: I1129 07:48:55.339844 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ttnn5" Nov 29 07:48:55 crc kubenswrapper[4731]: I1129 07:48:55.344132 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 29 07:48:55 crc kubenswrapper[4731]: I1129 07:48:55.344425 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nvl6q" Nov 29 07:48:55 crc kubenswrapper[4731]: I1129 07:48:55.344430 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Nov 29 07:48:55 crc kubenswrapper[4731]: I1129 07:48:55.344796 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Nov 29 07:48:55 crc kubenswrapper[4731]: I1129 07:48:55.344797 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 29 07:48:55 crc kubenswrapper[4731]: I1129 07:48:55.344797 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Nov 29 07:48:55 crc kubenswrapper[4731]: I1129 07:48:55.345241 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 29 07:48:55 crc kubenswrapper[4731]: I1129 07:48:55.350026 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-ttnn5"] Nov 29 07:48:55 crc kubenswrapper[4731]: I1129 07:48:55.410206 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z876r\" (UniqueName: \"kubernetes.io/projected/6cd13760-b9b5-4fa6-ab05-773d91d97346-kube-api-access-z876r\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ttnn5\" (UID: \"6cd13760-b9b5-4fa6-ab05-773d91d97346\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ttnn5" Nov 29 07:48:55 crc kubenswrapper[4731]: I1129 
07:48:55.410269 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/6cd13760-b9b5-4fa6-ab05-773d91d97346-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ttnn5\" (UID: \"6cd13760-b9b5-4fa6-ab05-773d91d97346\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ttnn5" Nov 29 07:48:55 crc kubenswrapper[4731]: I1129 07:48:55.410297 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/6cd13760-b9b5-4fa6-ab05-773d91d97346-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ttnn5\" (UID: \"6cd13760-b9b5-4fa6-ab05-773d91d97346\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ttnn5" Nov 29 07:48:55 crc kubenswrapper[4731]: I1129 07:48:55.410520 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/6cd13760-b9b5-4fa6-ab05-773d91d97346-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ttnn5\" (UID: \"6cd13760-b9b5-4fa6-ab05-773d91d97346\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ttnn5" Nov 29 07:48:55 crc kubenswrapper[4731]: I1129 07:48:55.410725 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6cd13760-b9b5-4fa6-ab05-773d91d97346-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ttnn5\" (UID: \"6cd13760-b9b5-4fa6-ab05-773d91d97346\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ttnn5" Nov 29 07:48:55 crc kubenswrapper[4731]: I1129 07:48:55.410826 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: 
\"kubernetes.io/secret/6cd13760-b9b5-4fa6-ab05-773d91d97346-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ttnn5\" (UID: \"6cd13760-b9b5-4fa6-ab05-773d91d97346\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ttnn5" Nov 29 07:48:55 crc kubenswrapper[4731]: I1129 07:48:55.411054 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cd13760-b9b5-4fa6-ab05-773d91d97346-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ttnn5\" (UID: \"6cd13760-b9b5-4fa6-ab05-773d91d97346\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ttnn5" Nov 29 07:48:55 crc kubenswrapper[4731]: I1129 07:48:55.411203 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/6cd13760-b9b5-4fa6-ab05-773d91d97346-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ttnn5\" (UID: \"6cd13760-b9b5-4fa6-ab05-773d91d97346\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ttnn5" Nov 29 07:48:55 crc kubenswrapper[4731]: I1129 07:48:55.411330 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6cd13760-b9b5-4fa6-ab05-773d91d97346-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ttnn5\" (UID: \"6cd13760-b9b5-4fa6-ab05-773d91d97346\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ttnn5" Nov 29 07:48:55 crc kubenswrapper[4731]: I1129 07:48:55.513976 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z876r\" (UniqueName: \"kubernetes.io/projected/6cd13760-b9b5-4fa6-ab05-773d91d97346-kube-api-access-z876r\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ttnn5\" (UID: \"6cd13760-b9b5-4fa6-ab05-773d91d97346\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ttnn5" Nov 29 07:48:55 crc kubenswrapper[4731]: I1129 07:48:55.514426 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/6cd13760-b9b5-4fa6-ab05-773d91d97346-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ttnn5\" (UID: \"6cd13760-b9b5-4fa6-ab05-773d91d97346\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ttnn5" Nov 29 07:48:55 crc kubenswrapper[4731]: I1129 07:48:55.514557 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/6cd13760-b9b5-4fa6-ab05-773d91d97346-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ttnn5\" (UID: \"6cd13760-b9b5-4fa6-ab05-773d91d97346\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ttnn5" Nov 29 07:48:55 crc kubenswrapper[4731]: I1129 07:48:55.514753 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/6cd13760-b9b5-4fa6-ab05-773d91d97346-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ttnn5\" (UID: \"6cd13760-b9b5-4fa6-ab05-773d91d97346\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ttnn5" Nov 29 07:48:55 crc kubenswrapper[4731]: I1129 07:48:55.514883 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6cd13760-b9b5-4fa6-ab05-773d91d97346-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ttnn5\" (UID: \"6cd13760-b9b5-4fa6-ab05-773d91d97346\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ttnn5" Nov 29 07:48:55 crc kubenswrapper[4731]: I1129 07:48:55.515019 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: 
\"kubernetes.io/secret/6cd13760-b9b5-4fa6-ab05-773d91d97346-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ttnn5\" (UID: \"6cd13760-b9b5-4fa6-ab05-773d91d97346\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ttnn5" Nov 29 07:48:55 crc kubenswrapper[4731]: I1129 07:48:55.515169 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cd13760-b9b5-4fa6-ab05-773d91d97346-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ttnn5\" (UID: \"6cd13760-b9b5-4fa6-ab05-773d91d97346\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ttnn5" Nov 29 07:48:55 crc kubenswrapper[4731]: I1129 07:48:55.515413 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/6cd13760-b9b5-4fa6-ab05-773d91d97346-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ttnn5\" (UID: \"6cd13760-b9b5-4fa6-ab05-773d91d97346\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ttnn5" Nov 29 07:48:55 crc kubenswrapper[4731]: I1129 07:48:55.515597 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6cd13760-b9b5-4fa6-ab05-773d91d97346-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ttnn5\" (UID: \"6cd13760-b9b5-4fa6-ab05-773d91d97346\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ttnn5" Nov 29 07:48:55 crc kubenswrapper[4731]: I1129 07:48:55.516045 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/6cd13760-b9b5-4fa6-ab05-773d91d97346-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ttnn5\" (UID: \"6cd13760-b9b5-4fa6-ab05-773d91d97346\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ttnn5" Nov 29 07:48:55 crc 
kubenswrapper[4731]: I1129 07:48:55.520076 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/6cd13760-b9b5-4fa6-ab05-773d91d97346-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ttnn5\" (UID: \"6cd13760-b9b5-4fa6-ab05-773d91d97346\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ttnn5" Nov 29 07:48:55 crc kubenswrapper[4731]: I1129 07:48:55.520089 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/6cd13760-b9b5-4fa6-ab05-773d91d97346-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ttnn5\" (UID: \"6cd13760-b9b5-4fa6-ab05-773d91d97346\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ttnn5" Nov 29 07:48:55 crc kubenswrapper[4731]: I1129 07:48:55.521405 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cd13760-b9b5-4fa6-ab05-773d91d97346-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ttnn5\" (UID: \"6cd13760-b9b5-4fa6-ab05-773d91d97346\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ttnn5" Nov 29 07:48:55 crc kubenswrapper[4731]: I1129 07:48:55.521779 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6cd13760-b9b5-4fa6-ab05-773d91d97346-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ttnn5\" (UID: \"6cd13760-b9b5-4fa6-ab05-773d91d97346\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ttnn5" Nov 29 07:48:55 crc kubenswrapper[4731]: I1129 07:48:55.521829 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6cd13760-b9b5-4fa6-ab05-773d91d97346-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ttnn5\" (UID: 
\"6cd13760-b9b5-4fa6-ab05-773d91d97346\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ttnn5" Nov 29 07:48:55 crc kubenswrapper[4731]: I1129 07:48:55.522068 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/6cd13760-b9b5-4fa6-ab05-773d91d97346-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ttnn5\" (UID: \"6cd13760-b9b5-4fa6-ab05-773d91d97346\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ttnn5" Nov 29 07:48:55 crc kubenswrapper[4731]: I1129 07:48:55.523295 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/6cd13760-b9b5-4fa6-ab05-773d91d97346-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ttnn5\" (UID: \"6cd13760-b9b5-4fa6-ab05-773d91d97346\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ttnn5" Nov 29 07:48:55 crc kubenswrapper[4731]: I1129 07:48:55.534966 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z876r\" (UniqueName: \"kubernetes.io/projected/6cd13760-b9b5-4fa6-ab05-773d91d97346-kube-api-access-z876r\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ttnn5\" (UID: \"6cd13760-b9b5-4fa6-ab05-773d91d97346\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ttnn5" Nov 29 07:48:55 crc kubenswrapper[4731]: I1129 07:48:55.663858 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ttnn5" Nov 29 07:48:56 crc kubenswrapper[4731]: I1129 07:48:56.315754 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-ttnn5"] Nov 29 07:48:56 crc kubenswrapper[4731]: I1129 07:48:56.327769 4731 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 29 07:48:57 crc kubenswrapper[4731]: I1129 07:48:57.246261 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ttnn5" event={"ID":"6cd13760-b9b5-4fa6-ab05-773d91d97346","Type":"ContainerStarted","Data":"e44868023716da7bdc8878055227d95eda5a81c519daec92252a47ce923184ca"} Nov 29 07:48:57 crc kubenswrapper[4731]: I1129 07:48:57.246852 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ttnn5" event={"ID":"6cd13760-b9b5-4fa6-ab05-773d91d97346","Type":"ContainerStarted","Data":"a785ef91bf4639c2c9279caeda8db26136f9a7625f8c266c909ef47b21204a8c"} Nov 29 07:48:57 crc kubenswrapper[4731]: I1129 07:48:57.268352 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ttnn5" podStartSLOduration=1.832904411 podStartE2EDuration="2.268327876s" podCreationTimestamp="2025-11-29 07:48:55 +0000 UTC" firstStartedPulling="2025-11-29 07:48:56.32746386 +0000 UTC m=+2575.217824963" lastFinishedPulling="2025-11-29 07:48:56.762887325 +0000 UTC m=+2575.653248428" observedRunningTime="2025-11-29 07:48:57.267226425 +0000 UTC m=+2576.157587528" watchObservedRunningTime="2025-11-29 07:48:57.268327876 +0000 UTC m=+2576.158688979" Nov 29 07:49:18 crc kubenswrapper[4731]: I1129 07:49:18.517956 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jz2lw"] Nov 29 07:49:18 crc kubenswrapper[4731]: I1129 07:49:18.521928 4731 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jz2lw" Nov 29 07:49:18 crc kubenswrapper[4731]: I1129 07:49:18.535228 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jz2lw"] Nov 29 07:49:18 crc kubenswrapper[4731]: I1129 07:49:18.620749 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34fc62d8-12cf-4c44-8b28-3f2db089448d-catalog-content\") pod \"redhat-operators-jz2lw\" (UID: \"34fc62d8-12cf-4c44-8b28-3f2db089448d\") " pod="openshift-marketplace/redhat-operators-jz2lw" Nov 29 07:49:18 crc kubenswrapper[4731]: I1129 07:49:18.620809 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34fc62d8-12cf-4c44-8b28-3f2db089448d-utilities\") pod \"redhat-operators-jz2lw\" (UID: \"34fc62d8-12cf-4c44-8b28-3f2db089448d\") " pod="openshift-marketplace/redhat-operators-jz2lw" Nov 29 07:49:18 crc kubenswrapper[4731]: I1129 07:49:18.620874 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w52jp\" (UniqueName: \"kubernetes.io/projected/34fc62d8-12cf-4c44-8b28-3f2db089448d-kube-api-access-w52jp\") pod \"redhat-operators-jz2lw\" (UID: \"34fc62d8-12cf-4c44-8b28-3f2db089448d\") " pod="openshift-marketplace/redhat-operators-jz2lw" Nov 29 07:49:18 crc kubenswrapper[4731]: I1129 07:49:18.722896 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34fc62d8-12cf-4c44-8b28-3f2db089448d-catalog-content\") pod \"redhat-operators-jz2lw\" (UID: \"34fc62d8-12cf-4c44-8b28-3f2db089448d\") " pod="openshift-marketplace/redhat-operators-jz2lw" Nov 29 07:49:18 crc kubenswrapper[4731]: I1129 07:49:18.722971 4731 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34fc62d8-12cf-4c44-8b28-3f2db089448d-utilities\") pod \"redhat-operators-jz2lw\" (UID: \"34fc62d8-12cf-4c44-8b28-3f2db089448d\") " pod="openshift-marketplace/redhat-operators-jz2lw" Nov 29 07:49:18 crc kubenswrapper[4731]: I1129 07:49:18.723046 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w52jp\" (UniqueName: \"kubernetes.io/projected/34fc62d8-12cf-4c44-8b28-3f2db089448d-kube-api-access-w52jp\") pod \"redhat-operators-jz2lw\" (UID: \"34fc62d8-12cf-4c44-8b28-3f2db089448d\") " pod="openshift-marketplace/redhat-operators-jz2lw" Nov 29 07:49:18 crc kubenswrapper[4731]: I1129 07:49:18.723534 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34fc62d8-12cf-4c44-8b28-3f2db089448d-catalog-content\") pod \"redhat-operators-jz2lw\" (UID: \"34fc62d8-12cf-4c44-8b28-3f2db089448d\") " pod="openshift-marketplace/redhat-operators-jz2lw" Nov 29 07:49:18 crc kubenswrapper[4731]: I1129 07:49:18.723785 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34fc62d8-12cf-4c44-8b28-3f2db089448d-utilities\") pod \"redhat-operators-jz2lw\" (UID: \"34fc62d8-12cf-4c44-8b28-3f2db089448d\") " pod="openshift-marketplace/redhat-operators-jz2lw" Nov 29 07:49:18 crc kubenswrapper[4731]: I1129 07:49:18.763316 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w52jp\" (UniqueName: \"kubernetes.io/projected/34fc62d8-12cf-4c44-8b28-3f2db089448d-kube-api-access-w52jp\") pod \"redhat-operators-jz2lw\" (UID: \"34fc62d8-12cf-4c44-8b28-3f2db089448d\") " pod="openshift-marketplace/redhat-operators-jz2lw" Nov 29 07:49:18 crc kubenswrapper[4731]: I1129 07:49:18.855904 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jz2lw" Nov 29 07:49:19 crc kubenswrapper[4731]: I1129 07:49:19.389819 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jz2lw"] Nov 29 07:49:19 crc kubenswrapper[4731]: I1129 07:49:19.478776 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jz2lw" event={"ID":"34fc62d8-12cf-4c44-8b28-3f2db089448d","Type":"ContainerStarted","Data":"5a7a1634ff0892a8fc5991c2158ed15231c056b6f7a3dc0d4945b6765458e76d"} Nov 29 07:49:20 crc kubenswrapper[4731]: I1129 07:49:20.538707 4731 generic.go:334] "Generic (PLEG): container finished" podID="34fc62d8-12cf-4c44-8b28-3f2db089448d" containerID="e4b00a2a6e4a0d819df01497c14d840b126f5c354a61917ac63135fe9b32d563" exitCode=0 Nov 29 07:49:20 crc kubenswrapper[4731]: I1129 07:49:20.538820 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jz2lw" event={"ID":"34fc62d8-12cf-4c44-8b28-3f2db089448d","Type":"ContainerDied","Data":"e4b00a2a6e4a0d819df01497c14d840b126f5c354a61917ac63135fe9b32d563"} Nov 29 07:49:21 crc kubenswrapper[4731]: I1129 07:49:21.549947 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jz2lw" event={"ID":"34fc62d8-12cf-4c44-8b28-3f2db089448d","Type":"ContainerStarted","Data":"9706af5bc5a656ff50bdf9aecd2a6b0447f654e5d1eaf73931d2de12390f1a39"} Nov 29 07:49:24 crc kubenswrapper[4731]: I1129 07:49:24.583789 4731 generic.go:334] "Generic (PLEG): container finished" podID="34fc62d8-12cf-4c44-8b28-3f2db089448d" containerID="9706af5bc5a656ff50bdf9aecd2a6b0447f654e5d1eaf73931d2de12390f1a39" exitCode=0 Nov 29 07:49:24 crc kubenswrapper[4731]: I1129 07:49:24.583908 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jz2lw" 
event={"ID":"34fc62d8-12cf-4c44-8b28-3f2db089448d","Type":"ContainerDied","Data":"9706af5bc5a656ff50bdf9aecd2a6b0447f654e5d1eaf73931d2de12390f1a39"} Nov 29 07:49:26 crc kubenswrapper[4731]: I1129 07:49:26.609732 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jz2lw" event={"ID":"34fc62d8-12cf-4c44-8b28-3f2db089448d","Type":"ContainerStarted","Data":"28e8521cf06685a628702ebdefe7f3c88e4867148e85502713fecfe19c9f47f4"} Nov 29 07:49:26 crc kubenswrapper[4731]: I1129 07:49:26.633679 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jz2lw" podStartSLOduration=3.624358919 podStartE2EDuration="8.633659829s" podCreationTimestamp="2025-11-29 07:49:18 +0000 UTC" firstStartedPulling="2025-11-29 07:49:20.541779159 +0000 UTC m=+2599.432140262" lastFinishedPulling="2025-11-29 07:49:25.551080049 +0000 UTC m=+2604.441441172" observedRunningTime="2025-11-29 07:49:26.627811981 +0000 UTC m=+2605.518173084" watchObservedRunningTime="2025-11-29 07:49:26.633659829 +0000 UTC m=+2605.524020922" Nov 29 07:49:28 crc kubenswrapper[4731]: I1129 07:49:28.856014 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jz2lw" Nov 29 07:49:28 crc kubenswrapper[4731]: I1129 07:49:28.856357 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jz2lw" Nov 29 07:49:29 crc kubenswrapper[4731]: I1129 07:49:29.906070 4731 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jz2lw" podUID="34fc62d8-12cf-4c44-8b28-3f2db089448d" containerName="registry-server" probeResult="failure" output=< Nov 29 07:49:29 crc kubenswrapper[4731]: timeout: failed to connect service ":50051" within 1s Nov 29 07:49:29 crc kubenswrapper[4731]: > Nov 29 07:49:38 crc kubenswrapper[4731]: I1129 07:49:38.913801 4731 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jz2lw" Nov 29 07:49:38 crc kubenswrapper[4731]: I1129 07:49:38.985002 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jz2lw" Nov 29 07:49:39 crc kubenswrapper[4731]: I1129 07:49:39.163317 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jz2lw"] Nov 29 07:49:40 crc kubenswrapper[4731]: I1129 07:49:40.737300 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jz2lw" podUID="34fc62d8-12cf-4c44-8b28-3f2db089448d" containerName="registry-server" containerID="cri-o://28e8521cf06685a628702ebdefe7f3c88e4867148e85502713fecfe19c9f47f4" gracePeriod=2 Nov 29 07:49:41 crc kubenswrapper[4731]: I1129 07:49:41.213855 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jz2lw" Nov 29 07:49:41 crc kubenswrapper[4731]: I1129 07:49:41.648580 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w52jp\" (UniqueName: \"kubernetes.io/projected/34fc62d8-12cf-4c44-8b28-3f2db089448d-kube-api-access-w52jp\") pod \"34fc62d8-12cf-4c44-8b28-3f2db089448d\" (UID: \"34fc62d8-12cf-4c44-8b28-3f2db089448d\") " Nov 29 07:49:41 crc kubenswrapper[4731]: I1129 07:49:41.648894 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34fc62d8-12cf-4c44-8b28-3f2db089448d-utilities\") pod \"34fc62d8-12cf-4c44-8b28-3f2db089448d\" (UID: \"34fc62d8-12cf-4c44-8b28-3f2db089448d\") " Nov 29 07:49:41 crc kubenswrapper[4731]: I1129 07:49:41.649029 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34fc62d8-12cf-4c44-8b28-3f2db089448d-catalog-content\") pod 
\"34fc62d8-12cf-4c44-8b28-3f2db089448d\" (UID: \"34fc62d8-12cf-4c44-8b28-3f2db089448d\") " Nov 29 07:49:41 crc kubenswrapper[4731]: I1129 07:49:41.650386 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34fc62d8-12cf-4c44-8b28-3f2db089448d-utilities" (OuterVolumeSpecName: "utilities") pod "34fc62d8-12cf-4c44-8b28-3f2db089448d" (UID: "34fc62d8-12cf-4c44-8b28-3f2db089448d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:49:41 crc kubenswrapper[4731]: I1129 07:49:41.745148 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34fc62d8-12cf-4c44-8b28-3f2db089448d-kube-api-access-w52jp" (OuterVolumeSpecName: "kube-api-access-w52jp") pod "34fc62d8-12cf-4c44-8b28-3f2db089448d" (UID: "34fc62d8-12cf-4c44-8b28-3f2db089448d"). InnerVolumeSpecName "kube-api-access-w52jp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:49:41 crc kubenswrapper[4731]: I1129 07:49:41.752409 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w52jp\" (UniqueName: \"kubernetes.io/projected/34fc62d8-12cf-4c44-8b28-3f2db089448d-kube-api-access-w52jp\") on node \"crc\" DevicePath \"\"" Nov 29 07:49:41 crc kubenswrapper[4731]: I1129 07:49:41.752480 4731 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34fc62d8-12cf-4c44-8b28-3f2db089448d-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 07:49:41 crc kubenswrapper[4731]: I1129 07:49:41.783875 4731 generic.go:334] "Generic (PLEG): container finished" podID="34fc62d8-12cf-4c44-8b28-3f2db089448d" containerID="28e8521cf06685a628702ebdefe7f3c88e4867148e85502713fecfe19c9f47f4" exitCode=0 Nov 29 07:49:41 crc kubenswrapper[4731]: I1129 07:49:41.783979 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jz2lw" 
event={"ID":"34fc62d8-12cf-4c44-8b28-3f2db089448d","Type":"ContainerDied","Data":"28e8521cf06685a628702ebdefe7f3c88e4867148e85502713fecfe19c9f47f4"} Nov 29 07:49:41 crc kubenswrapper[4731]: I1129 07:49:41.784054 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jz2lw" event={"ID":"34fc62d8-12cf-4c44-8b28-3f2db089448d","Type":"ContainerDied","Data":"5a7a1634ff0892a8fc5991c2158ed15231c056b6f7a3dc0d4945b6765458e76d"} Nov 29 07:49:41 crc kubenswrapper[4731]: I1129 07:49:41.784107 4731 scope.go:117] "RemoveContainer" containerID="28e8521cf06685a628702ebdefe7f3c88e4867148e85502713fecfe19c9f47f4" Nov 29 07:49:41 crc kubenswrapper[4731]: I1129 07:49:41.784447 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jz2lw" Nov 29 07:49:41 crc kubenswrapper[4731]: I1129 07:49:41.814785 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34fc62d8-12cf-4c44-8b28-3f2db089448d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "34fc62d8-12cf-4c44-8b28-3f2db089448d" (UID: "34fc62d8-12cf-4c44-8b28-3f2db089448d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:49:41 crc kubenswrapper[4731]: I1129 07:49:41.838707 4731 scope.go:117] "RemoveContainer" containerID="9706af5bc5a656ff50bdf9aecd2a6b0447f654e5d1eaf73931d2de12390f1a39" Nov 29 07:49:41 crc kubenswrapper[4731]: I1129 07:49:41.855688 4731 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34fc62d8-12cf-4c44-8b28-3f2db089448d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 07:49:41 crc kubenswrapper[4731]: I1129 07:49:41.867496 4731 scope.go:117] "RemoveContainer" containerID="e4b00a2a6e4a0d819df01497c14d840b126f5c354a61917ac63135fe9b32d563" Nov 29 07:49:41 crc kubenswrapper[4731]: I1129 07:49:41.914507 4731 scope.go:117] "RemoveContainer" containerID="28e8521cf06685a628702ebdefe7f3c88e4867148e85502713fecfe19c9f47f4" Nov 29 07:49:41 crc kubenswrapper[4731]: E1129 07:49:41.915076 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28e8521cf06685a628702ebdefe7f3c88e4867148e85502713fecfe19c9f47f4\": container with ID starting with 28e8521cf06685a628702ebdefe7f3c88e4867148e85502713fecfe19c9f47f4 not found: ID does not exist" containerID="28e8521cf06685a628702ebdefe7f3c88e4867148e85502713fecfe19c9f47f4" Nov 29 07:49:41 crc kubenswrapper[4731]: I1129 07:49:41.915136 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28e8521cf06685a628702ebdefe7f3c88e4867148e85502713fecfe19c9f47f4"} err="failed to get container status \"28e8521cf06685a628702ebdefe7f3c88e4867148e85502713fecfe19c9f47f4\": rpc error: code = NotFound desc = could not find container \"28e8521cf06685a628702ebdefe7f3c88e4867148e85502713fecfe19c9f47f4\": container with ID starting with 28e8521cf06685a628702ebdefe7f3c88e4867148e85502713fecfe19c9f47f4 not found: ID does not exist" Nov 29 07:49:41 crc kubenswrapper[4731]: I1129 07:49:41.915174 4731 
scope.go:117] "RemoveContainer" containerID="9706af5bc5a656ff50bdf9aecd2a6b0447f654e5d1eaf73931d2de12390f1a39" Nov 29 07:49:41 crc kubenswrapper[4731]: E1129 07:49:41.915773 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9706af5bc5a656ff50bdf9aecd2a6b0447f654e5d1eaf73931d2de12390f1a39\": container with ID starting with 9706af5bc5a656ff50bdf9aecd2a6b0447f654e5d1eaf73931d2de12390f1a39 not found: ID does not exist" containerID="9706af5bc5a656ff50bdf9aecd2a6b0447f654e5d1eaf73931d2de12390f1a39" Nov 29 07:49:41 crc kubenswrapper[4731]: I1129 07:49:41.915814 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9706af5bc5a656ff50bdf9aecd2a6b0447f654e5d1eaf73931d2de12390f1a39"} err="failed to get container status \"9706af5bc5a656ff50bdf9aecd2a6b0447f654e5d1eaf73931d2de12390f1a39\": rpc error: code = NotFound desc = could not find container \"9706af5bc5a656ff50bdf9aecd2a6b0447f654e5d1eaf73931d2de12390f1a39\": container with ID starting with 9706af5bc5a656ff50bdf9aecd2a6b0447f654e5d1eaf73931d2de12390f1a39 not found: ID does not exist" Nov 29 07:49:41 crc kubenswrapper[4731]: I1129 07:49:41.915847 4731 scope.go:117] "RemoveContainer" containerID="e4b00a2a6e4a0d819df01497c14d840b126f5c354a61917ac63135fe9b32d563" Nov 29 07:49:41 crc kubenswrapper[4731]: E1129 07:49:41.916302 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4b00a2a6e4a0d819df01497c14d840b126f5c354a61917ac63135fe9b32d563\": container with ID starting with e4b00a2a6e4a0d819df01497c14d840b126f5c354a61917ac63135fe9b32d563 not found: ID does not exist" containerID="e4b00a2a6e4a0d819df01497c14d840b126f5c354a61917ac63135fe9b32d563" Nov 29 07:49:41 crc kubenswrapper[4731]: I1129 07:49:41.916378 4731 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"e4b00a2a6e4a0d819df01497c14d840b126f5c354a61917ac63135fe9b32d563"} err="failed to get container status \"e4b00a2a6e4a0d819df01497c14d840b126f5c354a61917ac63135fe9b32d563\": rpc error: code = NotFound desc = could not find container \"e4b00a2a6e4a0d819df01497c14d840b126f5c354a61917ac63135fe9b32d563\": container with ID starting with e4b00a2a6e4a0d819df01497c14d840b126f5c354a61917ac63135fe9b32d563 not found: ID does not exist" Nov 29 07:49:42 crc kubenswrapper[4731]: I1129 07:49:42.115739 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jz2lw"] Nov 29 07:49:42 crc kubenswrapper[4731]: I1129 07:49:42.124445 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jz2lw"] Nov 29 07:49:43 crc kubenswrapper[4731]: I1129 07:49:43.822150 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34fc62d8-12cf-4c44-8b28-3f2db089448d" path="/var/lib/kubelet/pods/34fc62d8-12cf-4c44-8b28-3f2db089448d/volumes" Nov 29 07:50:33 crc kubenswrapper[4731]: I1129 07:50:33.003722 4731 patch_prober.go:28] interesting pod/machine-config-daemon-rscr8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:50:33 crc kubenswrapper[4731]: I1129 07:50:33.004344 4731 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:51:03 crc kubenswrapper[4731]: I1129 07:51:03.002858 4731 patch_prober.go:28] interesting pod/machine-config-daemon-rscr8 container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:51:03 crc kubenswrapper[4731]: I1129 07:51:03.003534 4731 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:51:33 crc kubenswrapper[4731]: I1129 07:51:33.002397 4731 patch_prober.go:28] interesting pod/machine-config-daemon-rscr8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:51:33 crc kubenswrapper[4731]: I1129 07:51:33.003210 4731 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:51:33 crc kubenswrapper[4731]: I1129 07:51:33.003276 4731 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" Nov 29 07:51:33 crc kubenswrapper[4731]: I1129 07:51:33.004464 4731 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3c519b59b3163c99f3ed432a57f0193c05d933b6a8bb33a617a562a4fda90905"} pod="openshift-machine-config-operator/machine-config-daemon-rscr8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 29 07:51:33 crc 
kubenswrapper[4731]: I1129 07:51:33.004531 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" containerName="machine-config-daemon" containerID="cri-o://3c519b59b3163c99f3ed432a57f0193c05d933b6a8bb33a617a562a4fda90905" gracePeriod=600 Nov 29 07:51:34 crc kubenswrapper[4731]: I1129 07:51:34.008984 4731 generic.go:334] "Generic (PLEG): container finished" podID="2302dbb7-38db-4752-a5d0-2d055da3aec3" containerID="3c519b59b3163c99f3ed432a57f0193c05d933b6a8bb33a617a562a4fda90905" exitCode=0 Nov 29 07:51:34 crc kubenswrapper[4731]: I1129 07:51:34.009052 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" event={"ID":"2302dbb7-38db-4752-a5d0-2d055da3aec3","Type":"ContainerDied","Data":"3c519b59b3163c99f3ed432a57f0193c05d933b6a8bb33a617a562a4fda90905"} Nov 29 07:51:34 crc kubenswrapper[4731]: I1129 07:51:34.011517 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" event={"ID":"2302dbb7-38db-4752-a5d0-2d055da3aec3","Type":"ContainerStarted","Data":"a820b45e9ece2d10a7875ab0f15887682131ca0d435fd77c25457bfb1abeed7f"} Nov 29 07:51:34 crc kubenswrapper[4731]: I1129 07:51:34.011648 4731 scope.go:117] "RemoveContainer" containerID="c07780461e13a10787c87bd71e6695815f5734928eb2b4fd862416cf85f74e8b" Nov 29 07:51:48 crc kubenswrapper[4731]: I1129 07:51:48.179879 4731 generic.go:334] "Generic (PLEG): container finished" podID="6cd13760-b9b5-4fa6-ab05-773d91d97346" containerID="e44868023716da7bdc8878055227d95eda5a81c519daec92252a47ce923184ca" exitCode=0 Nov 29 07:51:48 crc kubenswrapper[4731]: I1129 07:51:48.179992 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ttnn5" 
event={"ID":"6cd13760-b9b5-4fa6-ab05-773d91d97346","Type":"ContainerDied","Data":"e44868023716da7bdc8878055227d95eda5a81c519daec92252a47ce923184ca"} Nov 29 07:51:49 crc kubenswrapper[4731]: I1129 07:51:49.641246 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ttnn5" Nov 29 07:51:49 crc kubenswrapper[4731]: I1129 07:51:49.740804 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/6cd13760-b9b5-4fa6-ab05-773d91d97346-nova-extra-config-0\") pod \"6cd13760-b9b5-4fa6-ab05-773d91d97346\" (UID: \"6cd13760-b9b5-4fa6-ab05-773d91d97346\") " Nov 29 07:51:49 crc kubenswrapper[4731]: I1129 07:51:49.740984 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6cd13760-b9b5-4fa6-ab05-773d91d97346-inventory\") pod \"6cd13760-b9b5-4fa6-ab05-773d91d97346\" (UID: \"6cd13760-b9b5-4fa6-ab05-773d91d97346\") " Nov 29 07:51:49 crc kubenswrapper[4731]: I1129 07:51:49.741036 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/6cd13760-b9b5-4fa6-ab05-773d91d97346-nova-cell1-compute-config-0\") pod \"6cd13760-b9b5-4fa6-ab05-773d91d97346\" (UID: \"6cd13760-b9b5-4fa6-ab05-773d91d97346\") " Nov 29 07:51:49 crc kubenswrapper[4731]: I1129 07:51:49.741134 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/6cd13760-b9b5-4fa6-ab05-773d91d97346-nova-migration-ssh-key-0\") pod \"6cd13760-b9b5-4fa6-ab05-773d91d97346\" (UID: \"6cd13760-b9b5-4fa6-ab05-773d91d97346\") " Nov 29 07:51:49 crc kubenswrapper[4731]: I1129 07:51:49.741163 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" 
(UniqueName: \"kubernetes.io/secret/6cd13760-b9b5-4fa6-ab05-773d91d97346-nova-cell1-compute-config-1\") pod \"6cd13760-b9b5-4fa6-ab05-773d91d97346\" (UID: \"6cd13760-b9b5-4fa6-ab05-773d91d97346\") " Nov 29 07:51:49 crc kubenswrapper[4731]: I1129 07:51:49.741187 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cd13760-b9b5-4fa6-ab05-773d91d97346-nova-combined-ca-bundle\") pod \"6cd13760-b9b5-4fa6-ab05-773d91d97346\" (UID: \"6cd13760-b9b5-4fa6-ab05-773d91d97346\") " Nov 29 07:51:49 crc kubenswrapper[4731]: I1129 07:51:49.741253 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6cd13760-b9b5-4fa6-ab05-773d91d97346-ssh-key\") pod \"6cd13760-b9b5-4fa6-ab05-773d91d97346\" (UID: \"6cd13760-b9b5-4fa6-ab05-773d91d97346\") " Nov 29 07:51:49 crc kubenswrapper[4731]: I1129 07:51:49.741294 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z876r\" (UniqueName: \"kubernetes.io/projected/6cd13760-b9b5-4fa6-ab05-773d91d97346-kube-api-access-z876r\") pod \"6cd13760-b9b5-4fa6-ab05-773d91d97346\" (UID: \"6cd13760-b9b5-4fa6-ab05-773d91d97346\") " Nov 29 07:51:49 crc kubenswrapper[4731]: I1129 07:51:49.741346 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/6cd13760-b9b5-4fa6-ab05-773d91d97346-nova-migration-ssh-key-1\") pod \"6cd13760-b9b5-4fa6-ab05-773d91d97346\" (UID: \"6cd13760-b9b5-4fa6-ab05-773d91d97346\") " Nov 29 07:51:49 crc kubenswrapper[4731]: I1129 07:51:49.747516 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cd13760-b9b5-4fa6-ab05-773d91d97346-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "6cd13760-b9b5-4fa6-ab05-773d91d97346" (UID: 
"6cd13760-b9b5-4fa6-ab05-773d91d97346"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:51:49 crc kubenswrapper[4731]: I1129 07:51:49.748966 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cd13760-b9b5-4fa6-ab05-773d91d97346-kube-api-access-z876r" (OuterVolumeSpecName: "kube-api-access-z876r") pod "6cd13760-b9b5-4fa6-ab05-773d91d97346" (UID: "6cd13760-b9b5-4fa6-ab05-773d91d97346"). InnerVolumeSpecName "kube-api-access-z876r". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:51:49 crc kubenswrapper[4731]: I1129 07:51:49.772138 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6cd13760-b9b5-4fa6-ab05-773d91d97346-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "6cd13760-b9b5-4fa6-ab05-773d91d97346" (UID: "6cd13760-b9b5-4fa6-ab05-773d91d97346"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:51:49 crc kubenswrapper[4731]: I1129 07:51:49.773909 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cd13760-b9b5-4fa6-ab05-773d91d97346-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "6cd13760-b9b5-4fa6-ab05-773d91d97346" (UID: "6cd13760-b9b5-4fa6-ab05-773d91d97346"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:51:49 crc kubenswrapper[4731]: I1129 07:51:49.777835 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cd13760-b9b5-4fa6-ab05-773d91d97346-inventory" (OuterVolumeSpecName: "inventory") pod "6cd13760-b9b5-4fa6-ab05-773d91d97346" (UID: "6cd13760-b9b5-4fa6-ab05-773d91d97346"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:51:49 crc kubenswrapper[4731]: I1129 07:51:49.778477 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cd13760-b9b5-4fa6-ab05-773d91d97346-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "6cd13760-b9b5-4fa6-ab05-773d91d97346" (UID: "6cd13760-b9b5-4fa6-ab05-773d91d97346"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:51:49 crc kubenswrapper[4731]: I1129 07:51:49.781830 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cd13760-b9b5-4fa6-ab05-773d91d97346-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "6cd13760-b9b5-4fa6-ab05-773d91d97346" (UID: "6cd13760-b9b5-4fa6-ab05-773d91d97346"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:51:49 crc kubenswrapper[4731]: I1129 07:51:49.782493 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cd13760-b9b5-4fa6-ab05-773d91d97346-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "6cd13760-b9b5-4fa6-ab05-773d91d97346" (UID: "6cd13760-b9b5-4fa6-ab05-773d91d97346"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:51:49 crc kubenswrapper[4731]: I1129 07:51:49.787878 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cd13760-b9b5-4fa6-ab05-773d91d97346-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "6cd13760-b9b5-4fa6-ab05-773d91d97346" (UID: "6cd13760-b9b5-4fa6-ab05-773d91d97346"). InnerVolumeSpecName "nova-cell1-compute-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:51:49 crc kubenswrapper[4731]: I1129 07:51:49.843273 4731 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6cd13760-b9b5-4fa6-ab05-773d91d97346-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 29 07:51:49 crc kubenswrapper[4731]: I1129 07:51:49.843315 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z876r\" (UniqueName: \"kubernetes.io/projected/6cd13760-b9b5-4fa6-ab05-773d91d97346-kube-api-access-z876r\") on node \"crc\" DevicePath \"\"" Nov 29 07:51:49 crc kubenswrapper[4731]: I1129 07:51:49.843328 4731 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/6cd13760-b9b5-4fa6-ab05-773d91d97346-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Nov 29 07:51:49 crc kubenswrapper[4731]: I1129 07:51:49.843339 4731 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/6cd13760-b9b5-4fa6-ab05-773d91d97346-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Nov 29 07:51:49 crc kubenswrapper[4731]: I1129 07:51:49.843349 4731 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6cd13760-b9b5-4fa6-ab05-773d91d97346-inventory\") on node \"crc\" DevicePath \"\"" Nov 29 07:51:49 crc kubenswrapper[4731]: I1129 07:51:49.843360 4731 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/6cd13760-b9b5-4fa6-ab05-773d91d97346-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Nov 29 07:51:49 crc kubenswrapper[4731]: I1129 07:51:49.843369 4731 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/6cd13760-b9b5-4fa6-ab05-773d91d97346-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Nov 29 07:51:49 crc 
kubenswrapper[4731]: I1129 07:51:49.843377 4731 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/6cd13760-b9b5-4fa6-ab05-773d91d97346-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Nov 29 07:51:49 crc kubenswrapper[4731]: I1129 07:51:49.843386 4731 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cd13760-b9b5-4fa6-ab05-773d91d97346-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:51:50 crc kubenswrapper[4731]: I1129 07:51:50.200763 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ttnn5" event={"ID":"6cd13760-b9b5-4fa6-ab05-773d91d97346","Type":"ContainerDied","Data":"a785ef91bf4639c2c9279caeda8db26136f9a7625f8c266c909ef47b21204a8c"} Nov 29 07:51:50 crc kubenswrapper[4731]: I1129 07:51:50.200812 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a785ef91bf4639c2c9279caeda8db26136f9a7625f8c266c909ef47b21204a8c" Nov 29 07:51:50 crc kubenswrapper[4731]: I1129 07:51:50.200853 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ttnn5" Nov 29 07:51:50 crc kubenswrapper[4731]: I1129 07:51:50.314951 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6kgt2"] Nov 29 07:51:50 crc kubenswrapper[4731]: E1129 07:51:50.315396 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34fc62d8-12cf-4c44-8b28-3f2db089448d" containerName="extract-content" Nov 29 07:51:50 crc kubenswrapper[4731]: I1129 07:51:50.315415 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="34fc62d8-12cf-4c44-8b28-3f2db089448d" containerName="extract-content" Nov 29 07:51:50 crc kubenswrapper[4731]: E1129 07:51:50.315442 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34fc62d8-12cf-4c44-8b28-3f2db089448d" containerName="extract-utilities" Nov 29 07:51:50 crc kubenswrapper[4731]: I1129 07:51:50.315450 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="34fc62d8-12cf-4c44-8b28-3f2db089448d" containerName="extract-utilities" Nov 29 07:51:50 crc kubenswrapper[4731]: E1129 07:51:50.315465 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cd13760-b9b5-4fa6-ab05-773d91d97346" containerName="nova-edpm-deployment-openstack-edpm-ipam" Nov 29 07:51:50 crc kubenswrapper[4731]: I1129 07:51:50.315472 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cd13760-b9b5-4fa6-ab05-773d91d97346" containerName="nova-edpm-deployment-openstack-edpm-ipam" Nov 29 07:51:50 crc kubenswrapper[4731]: E1129 07:51:50.315488 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34fc62d8-12cf-4c44-8b28-3f2db089448d" containerName="registry-server" Nov 29 07:51:50 crc kubenswrapper[4731]: I1129 07:51:50.315493 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="34fc62d8-12cf-4c44-8b28-3f2db089448d" containerName="registry-server" Nov 29 07:51:50 crc kubenswrapper[4731]: I1129 07:51:50.315717 4731 
memory_manager.go:354] "RemoveStaleState removing state" podUID="6cd13760-b9b5-4fa6-ab05-773d91d97346" containerName="nova-edpm-deployment-openstack-edpm-ipam" Nov 29 07:51:50 crc kubenswrapper[4731]: I1129 07:51:50.315738 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="34fc62d8-12cf-4c44-8b28-3f2db089448d" containerName="registry-server" Nov 29 07:51:50 crc kubenswrapper[4731]: I1129 07:51:50.316620 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6kgt2" Nov 29 07:51:50 crc kubenswrapper[4731]: I1129 07:51:50.319522 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nvl6q" Nov 29 07:51:50 crc kubenswrapper[4731]: I1129 07:51:50.319616 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Nov 29 07:51:50 crc kubenswrapper[4731]: I1129 07:51:50.321019 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 29 07:51:50 crc kubenswrapper[4731]: I1129 07:51:50.321144 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 29 07:51:50 crc kubenswrapper[4731]: I1129 07:51:50.321854 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 29 07:51:50 crc kubenswrapper[4731]: I1129 07:51:50.326510 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6kgt2"] Nov 29 07:51:50 crc kubenswrapper[4731]: I1129 07:51:50.456995 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/7e587ad4-40e6-4719-a23b-ff5035f40152-ceilometer-compute-config-data-1\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-6kgt2\" (UID: \"7e587ad4-40e6-4719-a23b-ff5035f40152\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6kgt2" Nov 29 07:51:50 crc kubenswrapper[4731]: I1129 07:51:50.457103 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e587ad4-40e6-4719-a23b-ff5035f40152-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-6kgt2\" (UID: \"7e587ad4-40e6-4719-a23b-ff5035f40152\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6kgt2" Nov 29 07:51:50 crc kubenswrapper[4731]: I1129 07:51:50.457232 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7bp6\" (UniqueName: \"kubernetes.io/projected/7e587ad4-40e6-4719-a23b-ff5035f40152-kube-api-access-f7bp6\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-6kgt2\" (UID: \"7e587ad4-40e6-4719-a23b-ff5035f40152\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6kgt2" Nov 29 07:51:50 crc kubenswrapper[4731]: I1129 07:51:50.457338 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7e587ad4-40e6-4719-a23b-ff5035f40152-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-6kgt2\" (UID: \"7e587ad4-40e6-4719-a23b-ff5035f40152\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6kgt2" Nov 29 07:51:50 crc kubenswrapper[4731]: I1129 07:51:50.457422 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7e587ad4-40e6-4719-a23b-ff5035f40152-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-6kgt2\" (UID: \"7e587ad4-40e6-4719-a23b-ff5035f40152\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6kgt2" 
Nov 29 07:51:50 crc kubenswrapper[4731]: I1129 07:51:50.457459 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/7e587ad4-40e6-4719-a23b-ff5035f40152-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-6kgt2\" (UID: \"7e587ad4-40e6-4719-a23b-ff5035f40152\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6kgt2" Nov 29 07:51:50 crc kubenswrapper[4731]: I1129 07:51:50.457494 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/7e587ad4-40e6-4719-a23b-ff5035f40152-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-6kgt2\" (UID: \"7e587ad4-40e6-4719-a23b-ff5035f40152\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6kgt2" Nov 29 07:51:50 crc kubenswrapper[4731]: I1129 07:51:50.558921 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7e587ad4-40e6-4719-a23b-ff5035f40152-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-6kgt2\" (UID: \"7e587ad4-40e6-4719-a23b-ff5035f40152\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6kgt2" Nov 29 07:51:50 crc kubenswrapper[4731]: I1129 07:51:50.558970 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/7e587ad4-40e6-4719-a23b-ff5035f40152-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-6kgt2\" (UID: \"7e587ad4-40e6-4719-a23b-ff5035f40152\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6kgt2" Nov 29 07:51:50 crc kubenswrapper[4731]: I1129 07:51:50.559002 4731 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/7e587ad4-40e6-4719-a23b-ff5035f40152-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-6kgt2\" (UID: \"7e587ad4-40e6-4719-a23b-ff5035f40152\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6kgt2" Nov 29 07:51:50 crc kubenswrapper[4731]: I1129 07:51:50.559034 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/7e587ad4-40e6-4719-a23b-ff5035f40152-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-6kgt2\" (UID: \"7e587ad4-40e6-4719-a23b-ff5035f40152\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6kgt2" Nov 29 07:51:50 crc kubenswrapper[4731]: I1129 07:51:50.559108 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e587ad4-40e6-4719-a23b-ff5035f40152-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-6kgt2\" (UID: \"7e587ad4-40e6-4719-a23b-ff5035f40152\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6kgt2" Nov 29 07:51:50 crc kubenswrapper[4731]: I1129 07:51:50.559167 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7bp6\" (UniqueName: \"kubernetes.io/projected/7e587ad4-40e6-4719-a23b-ff5035f40152-kube-api-access-f7bp6\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-6kgt2\" (UID: \"7e587ad4-40e6-4719-a23b-ff5035f40152\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6kgt2" Nov 29 07:51:50 crc kubenswrapper[4731]: I1129 07:51:50.559249 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7e587ad4-40e6-4719-a23b-ff5035f40152-ssh-key\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-6kgt2\" (UID: \"7e587ad4-40e6-4719-a23b-ff5035f40152\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6kgt2" Nov 29 07:51:50 crc kubenswrapper[4731]: I1129 07:51:50.575022 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7e587ad4-40e6-4719-a23b-ff5035f40152-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-6kgt2\" (UID: \"7e587ad4-40e6-4719-a23b-ff5035f40152\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6kgt2" Nov 29 07:51:50 crc kubenswrapper[4731]: I1129 07:51:50.575039 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/7e587ad4-40e6-4719-a23b-ff5035f40152-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-6kgt2\" (UID: \"7e587ad4-40e6-4719-a23b-ff5035f40152\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6kgt2" Nov 29 07:51:50 crc kubenswrapper[4731]: I1129 07:51:50.575484 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/7e587ad4-40e6-4719-a23b-ff5035f40152-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-6kgt2\" (UID: \"7e587ad4-40e6-4719-a23b-ff5035f40152\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6kgt2" Nov 29 07:51:50 crc kubenswrapper[4731]: I1129 07:51:50.575818 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e587ad4-40e6-4719-a23b-ff5035f40152-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-6kgt2\" (UID: \"7e587ad4-40e6-4719-a23b-ff5035f40152\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6kgt2" Nov 29 07:51:50 crc kubenswrapper[4731]: 
I1129 07:51:50.576879 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/7e587ad4-40e6-4719-a23b-ff5035f40152-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-6kgt2\" (UID: \"7e587ad4-40e6-4719-a23b-ff5035f40152\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6kgt2" Nov 29 07:51:50 crc kubenswrapper[4731]: I1129 07:51:50.586985 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7e587ad4-40e6-4719-a23b-ff5035f40152-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-6kgt2\" (UID: \"7e587ad4-40e6-4719-a23b-ff5035f40152\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6kgt2" Nov 29 07:51:50 crc kubenswrapper[4731]: I1129 07:51:50.588208 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7bp6\" (UniqueName: \"kubernetes.io/projected/7e587ad4-40e6-4719-a23b-ff5035f40152-kube-api-access-f7bp6\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-6kgt2\" (UID: \"7e587ad4-40e6-4719-a23b-ff5035f40152\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6kgt2" Nov 29 07:51:50 crc kubenswrapper[4731]: I1129 07:51:50.680620 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6kgt2" Nov 29 07:51:51 crc kubenswrapper[4731]: I1129 07:51:51.233020 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6kgt2"] Nov 29 07:51:51 crc kubenswrapper[4731]: W1129 07:51:51.235730 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7e587ad4_40e6_4719_a23b_ff5035f40152.slice/crio-64b33458b048e4bc2d3f67bce0fd61e8dcd46daa8f1700959f47a349bc7da4cb WatchSource:0}: Error finding container 64b33458b048e4bc2d3f67bce0fd61e8dcd46daa8f1700959f47a349bc7da4cb: Status 404 returned error can't find the container with id 64b33458b048e4bc2d3f67bce0fd61e8dcd46daa8f1700959f47a349bc7da4cb Nov 29 07:51:52 crc kubenswrapper[4731]: I1129 07:51:52.222676 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6kgt2" event={"ID":"7e587ad4-40e6-4719-a23b-ff5035f40152","Type":"ContainerStarted","Data":"7967ef3c11ac89bd76140754b42c994b5b7ea3aa62c640cb19e6b1dba21fba02"} Nov 29 07:51:52 crc kubenswrapper[4731]: I1129 07:51:52.223001 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6kgt2" event={"ID":"7e587ad4-40e6-4719-a23b-ff5035f40152","Type":"ContainerStarted","Data":"64b33458b048e4bc2d3f67bce0fd61e8dcd46daa8f1700959f47a349bc7da4cb"} Nov 29 07:51:52 crc kubenswrapper[4731]: I1129 07:51:52.250944 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6kgt2" podStartSLOduration=1.830756638 podStartE2EDuration="2.250916523s" podCreationTimestamp="2025-11-29 07:51:50 +0000 UTC" firstStartedPulling="2025-11-29 07:51:51.238495515 +0000 UTC m=+2750.128856618" lastFinishedPulling="2025-11-29 07:51:51.6586554 +0000 UTC m=+2750.549016503" 
observedRunningTime="2025-11-29 07:51:52.246506456 +0000 UTC m=+2751.136867569" watchObservedRunningTime="2025-11-29 07:51:52.250916523 +0000 UTC m=+2751.141277626" Nov 29 07:52:21 crc kubenswrapper[4731]: I1129 07:52:21.619866 4731 scope.go:117] "RemoveContainer" containerID="c5760a52b09e44fed689da7ca2b38f587d72264e051037efe4193cf89b03d47b" Nov 29 07:52:21 crc kubenswrapper[4731]: I1129 07:52:21.669856 4731 scope.go:117] "RemoveContainer" containerID="b37bb08a1b5e55f098aea20fc3e2617f2937a02c92f167480e162d95f6ed60fc" Nov 29 07:52:21 crc kubenswrapper[4731]: I1129 07:52:21.700090 4731 scope.go:117] "RemoveContainer" containerID="5e7d4c3e6a6d07bb837309b9136669ed37ea51ac6619d762c9a0fe9fae8dcc39" Nov 29 07:53:33 crc kubenswrapper[4731]: I1129 07:53:33.002873 4731 patch_prober.go:28] interesting pod/machine-config-daemon-rscr8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:53:33 crc kubenswrapper[4731]: I1129 07:53:33.003689 4731 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:54:03 crc kubenswrapper[4731]: I1129 07:54:03.003188 4731 patch_prober.go:28] interesting pod/machine-config-daemon-rscr8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:54:03 crc kubenswrapper[4731]: I1129 07:54:03.003975 4731 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" 
podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:54:10 crc kubenswrapper[4731]: I1129 07:54:10.503618 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-s4tb2"] Nov 29 07:54:10 crc kubenswrapper[4731]: I1129 07:54:10.507036 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-s4tb2" Nov 29 07:54:10 crc kubenswrapper[4731]: I1129 07:54:10.525159 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-s4tb2"] Nov 29 07:54:10 crc kubenswrapper[4731]: I1129 07:54:10.657262 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd7f6d9e-c491-4dca-a54a-01155559175f-utilities\") pod \"certified-operators-s4tb2\" (UID: \"dd7f6d9e-c491-4dca-a54a-01155559175f\") " pod="openshift-marketplace/certified-operators-s4tb2" Nov 29 07:54:10 crc kubenswrapper[4731]: I1129 07:54:10.657439 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd7f6d9e-c491-4dca-a54a-01155559175f-catalog-content\") pod \"certified-operators-s4tb2\" (UID: \"dd7f6d9e-c491-4dca-a54a-01155559175f\") " pod="openshift-marketplace/certified-operators-s4tb2" Nov 29 07:54:10 crc kubenswrapper[4731]: I1129 07:54:10.657483 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvbw8\" (UniqueName: \"kubernetes.io/projected/dd7f6d9e-c491-4dca-a54a-01155559175f-kube-api-access-gvbw8\") pod \"certified-operators-s4tb2\" (UID: \"dd7f6d9e-c491-4dca-a54a-01155559175f\") " pod="openshift-marketplace/certified-operators-s4tb2" Nov 29 
07:54:10 crc kubenswrapper[4731]: I1129 07:54:10.761081 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd7f6d9e-c491-4dca-a54a-01155559175f-catalog-content\") pod \"certified-operators-s4tb2\" (UID: \"dd7f6d9e-c491-4dca-a54a-01155559175f\") " pod="openshift-marketplace/certified-operators-s4tb2" Nov 29 07:54:10 crc kubenswrapper[4731]: I1129 07:54:10.761169 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvbw8\" (UniqueName: \"kubernetes.io/projected/dd7f6d9e-c491-4dca-a54a-01155559175f-kube-api-access-gvbw8\") pod \"certified-operators-s4tb2\" (UID: \"dd7f6d9e-c491-4dca-a54a-01155559175f\") " pod="openshift-marketplace/certified-operators-s4tb2" Nov 29 07:54:10 crc kubenswrapper[4731]: I1129 07:54:10.761270 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd7f6d9e-c491-4dca-a54a-01155559175f-utilities\") pod \"certified-operators-s4tb2\" (UID: \"dd7f6d9e-c491-4dca-a54a-01155559175f\") " pod="openshift-marketplace/certified-operators-s4tb2" Nov 29 07:54:10 crc kubenswrapper[4731]: I1129 07:54:10.761751 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd7f6d9e-c491-4dca-a54a-01155559175f-catalog-content\") pod \"certified-operators-s4tb2\" (UID: \"dd7f6d9e-c491-4dca-a54a-01155559175f\") " pod="openshift-marketplace/certified-operators-s4tb2" Nov 29 07:54:10 crc kubenswrapper[4731]: I1129 07:54:10.762743 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd7f6d9e-c491-4dca-a54a-01155559175f-utilities\") pod \"certified-operators-s4tb2\" (UID: \"dd7f6d9e-c491-4dca-a54a-01155559175f\") " pod="openshift-marketplace/certified-operators-s4tb2" Nov 29 07:54:10 crc kubenswrapper[4731]: I1129 
07:54:10.784163 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvbw8\" (UniqueName: \"kubernetes.io/projected/dd7f6d9e-c491-4dca-a54a-01155559175f-kube-api-access-gvbw8\") pod \"certified-operators-s4tb2\" (UID: \"dd7f6d9e-c491-4dca-a54a-01155559175f\") " pod="openshift-marketplace/certified-operators-s4tb2" Nov 29 07:54:10 crc kubenswrapper[4731]: I1129 07:54:10.835706 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-s4tb2" Nov 29 07:54:11 crc kubenswrapper[4731]: I1129 07:54:11.362418 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-s4tb2"] Nov 29 07:54:11 crc kubenswrapper[4731]: W1129 07:54:11.382052 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddd7f6d9e_c491_4dca_a54a_01155559175f.slice/crio-9220f2ce2fdb16682209e99294b173b7a93aa8ccade13df4bbf5eff68c989386 WatchSource:0}: Error finding container 9220f2ce2fdb16682209e99294b173b7a93aa8ccade13df4bbf5eff68c989386: Status 404 returned error can't find the container with id 9220f2ce2fdb16682209e99294b173b7a93aa8ccade13df4bbf5eff68c989386 Nov 29 07:54:11 crc kubenswrapper[4731]: I1129 07:54:11.677976 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s4tb2" event={"ID":"dd7f6d9e-c491-4dca-a54a-01155559175f","Type":"ContainerStarted","Data":"eaaad141da0ed697bff4c9190a9a278dd866e295f3d41b976be99de1f44d4a15"} Nov 29 07:54:11 crc kubenswrapper[4731]: I1129 07:54:11.678041 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s4tb2" event={"ID":"dd7f6d9e-c491-4dca-a54a-01155559175f","Type":"ContainerStarted","Data":"9220f2ce2fdb16682209e99294b173b7a93aa8ccade13df4bbf5eff68c989386"} Nov 29 07:54:12 crc kubenswrapper[4731]: I1129 07:54:12.691918 4731 generic.go:334] "Generic 
(PLEG): container finished" podID="dd7f6d9e-c491-4dca-a54a-01155559175f" containerID="eaaad141da0ed697bff4c9190a9a278dd866e295f3d41b976be99de1f44d4a15" exitCode=0 Nov 29 07:54:12 crc kubenswrapper[4731]: I1129 07:54:12.691988 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s4tb2" event={"ID":"dd7f6d9e-c491-4dca-a54a-01155559175f","Type":"ContainerDied","Data":"eaaad141da0ed697bff4c9190a9a278dd866e295f3d41b976be99de1f44d4a15"} Nov 29 07:54:12 crc kubenswrapper[4731]: I1129 07:54:12.696655 4731 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 29 07:54:15 crc kubenswrapper[4731]: I1129 07:54:15.723332 4731 generic.go:334] "Generic (PLEG): container finished" podID="dd7f6d9e-c491-4dca-a54a-01155559175f" containerID="7218437475d1162db073e5a87aa3b1754dbadb9aa7c10322e83ecd16c23cd0d1" exitCode=0 Nov 29 07:54:15 crc kubenswrapper[4731]: I1129 07:54:15.723472 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s4tb2" event={"ID":"dd7f6d9e-c491-4dca-a54a-01155559175f","Type":"ContainerDied","Data":"7218437475d1162db073e5a87aa3b1754dbadb9aa7c10322e83ecd16c23cd0d1"} Nov 29 07:54:17 crc kubenswrapper[4731]: I1129 07:54:17.746808 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s4tb2" event={"ID":"dd7f6d9e-c491-4dca-a54a-01155559175f","Type":"ContainerStarted","Data":"b97130757fe13166fa212b84d4654b88a938add8c0ed8c4c1adc0fb2afad0804"} Nov 29 07:54:17 crc kubenswrapper[4731]: I1129 07:54:17.770138 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-s4tb2" podStartSLOduration=3.926818423 podStartE2EDuration="7.770115569s" podCreationTimestamp="2025-11-29 07:54:10 +0000 UTC" firstStartedPulling="2025-11-29 07:54:12.696308232 +0000 UTC m=+2891.586669335" lastFinishedPulling="2025-11-29 07:54:16.539605378 +0000 UTC 
m=+2895.429966481" observedRunningTime="2025-11-29 07:54:17.767835403 +0000 UTC m=+2896.658196506" watchObservedRunningTime="2025-11-29 07:54:17.770115569 +0000 UTC m=+2896.660476672" Nov 29 07:54:20 crc kubenswrapper[4731]: I1129 07:54:20.774780 4731 generic.go:334] "Generic (PLEG): container finished" podID="7e587ad4-40e6-4719-a23b-ff5035f40152" containerID="7967ef3c11ac89bd76140754b42c994b5b7ea3aa62c640cb19e6b1dba21fba02" exitCode=0 Nov 29 07:54:20 crc kubenswrapper[4731]: I1129 07:54:20.774885 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6kgt2" event={"ID":"7e587ad4-40e6-4719-a23b-ff5035f40152","Type":"ContainerDied","Data":"7967ef3c11ac89bd76140754b42c994b5b7ea3aa62c640cb19e6b1dba21fba02"} Nov 29 07:54:20 crc kubenswrapper[4731]: I1129 07:54:20.836592 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-s4tb2" Nov 29 07:54:20 crc kubenswrapper[4731]: I1129 07:54:20.836643 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-s4tb2" Nov 29 07:54:20 crc kubenswrapper[4731]: I1129 07:54:20.889226 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-s4tb2" Nov 29 07:54:21 crc kubenswrapper[4731]: I1129 07:54:21.858748 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-s4tb2" Nov 29 07:54:21 crc kubenswrapper[4731]: I1129 07:54:21.916590 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-s4tb2"] Nov 29 07:54:22 crc kubenswrapper[4731]: I1129 07:54:22.222458 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6kgt2" Nov 29 07:54:22 crc kubenswrapper[4731]: I1129 07:54:22.304187 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7e587ad4-40e6-4719-a23b-ff5035f40152-ssh-key\") pod \"7e587ad4-40e6-4719-a23b-ff5035f40152\" (UID: \"7e587ad4-40e6-4719-a23b-ff5035f40152\") " Nov 29 07:54:22 crc kubenswrapper[4731]: I1129 07:54:22.304273 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7e587ad4-40e6-4719-a23b-ff5035f40152-inventory\") pod \"7e587ad4-40e6-4719-a23b-ff5035f40152\" (UID: \"7e587ad4-40e6-4719-a23b-ff5035f40152\") " Nov 29 07:54:22 crc kubenswrapper[4731]: I1129 07:54:22.304340 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/7e587ad4-40e6-4719-a23b-ff5035f40152-ceilometer-compute-config-data-2\") pod \"7e587ad4-40e6-4719-a23b-ff5035f40152\" (UID: \"7e587ad4-40e6-4719-a23b-ff5035f40152\") " Nov 29 07:54:22 crc kubenswrapper[4731]: I1129 07:54:22.304369 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/7e587ad4-40e6-4719-a23b-ff5035f40152-ceilometer-compute-config-data-0\") pod \"7e587ad4-40e6-4719-a23b-ff5035f40152\" (UID: \"7e587ad4-40e6-4719-a23b-ff5035f40152\") " Nov 29 07:54:22 crc kubenswrapper[4731]: I1129 07:54:22.304426 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e587ad4-40e6-4719-a23b-ff5035f40152-telemetry-combined-ca-bundle\") pod \"7e587ad4-40e6-4719-a23b-ff5035f40152\" (UID: \"7e587ad4-40e6-4719-a23b-ff5035f40152\") " Nov 29 07:54:22 crc kubenswrapper[4731]: I1129 07:54:22.304447 4731 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f7bp6\" (UniqueName: \"kubernetes.io/projected/7e587ad4-40e6-4719-a23b-ff5035f40152-kube-api-access-f7bp6\") pod \"7e587ad4-40e6-4719-a23b-ff5035f40152\" (UID: \"7e587ad4-40e6-4719-a23b-ff5035f40152\") " Nov 29 07:54:22 crc kubenswrapper[4731]: I1129 07:54:22.304496 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/7e587ad4-40e6-4719-a23b-ff5035f40152-ceilometer-compute-config-data-1\") pod \"7e587ad4-40e6-4719-a23b-ff5035f40152\" (UID: \"7e587ad4-40e6-4719-a23b-ff5035f40152\") " Nov 29 07:54:22 crc kubenswrapper[4731]: I1129 07:54:22.311741 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e587ad4-40e6-4719-a23b-ff5035f40152-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "7e587ad4-40e6-4719-a23b-ff5035f40152" (UID: "7e587ad4-40e6-4719-a23b-ff5035f40152"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:54:22 crc kubenswrapper[4731]: I1129 07:54:22.311796 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e587ad4-40e6-4719-a23b-ff5035f40152-kube-api-access-f7bp6" (OuterVolumeSpecName: "kube-api-access-f7bp6") pod "7e587ad4-40e6-4719-a23b-ff5035f40152" (UID: "7e587ad4-40e6-4719-a23b-ff5035f40152"). InnerVolumeSpecName "kube-api-access-f7bp6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:54:22 crc kubenswrapper[4731]: I1129 07:54:22.341165 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e587ad4-40e6-4719-a23b-ff5035f40152-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "7e587ad4-40e6-4719-a23b-ff5035f40152" (UID: "7e587ad4-40e6-4719-a23b-ff5035f40152"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:54:22 crc kubenswrapper[4731]: I1129 07:54:22.341202 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e587ad4-40e6-4719-a23b-ff5035f40152-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "7e587ad4-40e6-4719-a23b-ff5035f40152" (UID: "7e587ad4-40e6-4719-a23b-ff5035f40152"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:54:22 crc kubenswrapper[4731]: I1129 07:54:22.341258 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e587ad4-40e6-4719-a23b-ff5035f40152-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "7e587ad4-40e6-4719-a23b-ff5035f40152" (UID: "7e587ad4-40e6-4719-a23b-ff5035f40152"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:54:22 crc kubenswrapper[4731]: I1129 07:54:22.344482 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e587ad4-40e6-4719-a23b-ff5035f40152-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "7e587ad4-40e6-4719-a23b-ff5035f40152" (UID: "7e587ad4-40e6-4719-a23b-ff5035f40152"). InnerVolumeSpecName "ceilometer-compute-config-data-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:54:22 crc kubenswrapper[4731]: I1129 07:54:22.352591 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e587ad4-40e6-4719-a23b-ff5035f40152-inventory" (OuterVolumeSpecName: "inventory") pod "7e587ad4-40e6-4719-a23b-ff5035f40152" (UID: "7e587ad4-40e6-4719-a23b-ff5035f40152"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:54:22 crc kubenswrapper[4731]: I1129 07:54:22.406509 4731 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7e587ad4-40e6-4719-a23b-ff5035f40152-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 29 07:54:22 crc kubenswrapper[4731]: I1129 07:54:22.406549 4731 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7e587ad4-40e6-4719-a23b-ff5035f40152-inventory\") on node \"crc\" DevicePath \"\"" Nov 29 07:54:22 crc kubenswrapper[4731]: I1129 07:54:22.406560 4731 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/7e587ad4-40e6-4719-a23b-ff5035f40152-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Nov 29 07:54:22 crc kubenswrapper[4731]: I1129 07:54:22.406587 4731 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/7e587ad4-40e6-4719-a23b-ff5035f40152-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Nov 29 07:54:22 crc kubenswrapper[4731]: I1129 07:54:22.406599 4731 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e587ad4-40e6-4719-a23b-ff5035f40152-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:54:22 crc kubenswrapper[4731]: I1129 07:54:22.406611 4731 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-f7bp6\" (UniqueName: \"kubernetes.io/projected/7e587ad4-40e6-4719-a23b-ff5035f40152-kube-api-access-f7bp6\") on node \"crc\" DevicePath \"\"" Nov 29 07:54:22 crc kubenswrapper[4731]: I1129 07:54:22.406625 4731 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/7e587ad4-40e6-4719-a23b-ff5035f40152-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Nov 29 07:54:22 crc kubenswrapper[4731]: I1129 07:54:22.803494 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6kgt2" event={"ID":"7e587ad4-40e6-4719-a23b-ff5035f40152","Type":"ContainerDied","Data":"64b33458b048e4bc2d3f67bce0fd61e8dcd46daa8f1700959f47a349bc7da4cb"} Nov 29 07:54:22 crc kubenswrapper[4731]: I1129 07:54:22.803614 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64b33458b048e4bc2d3f67bce0fd61e8dcd46daa8f1700959f47a349bc7da4cb" Nov 29 07:54:22 crc kubenswrapper[4731]: I1129 07:54:22.803626 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6kgt2" Nov 29 07:54:23 crc kubenswrapper[4731]: I1129 07:54:23.824498 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-s4tb2" podUID="dd7f6d9e-c491-4dca-a54a-01155559175f" containerName="registry-server" containerID="cri-o://b97130757fe13166fa212b84d4654b88a938add8c0ed8c4c1adc0fb2afad0804" gracePeriod=2 Nov 29 07:54:24 crc kubenswrapper[4731]: I1129 07:54:24.275189 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-s4tb2" Nov 29 07:54:24 crc kubenswrapper[4731]: I1129 07:54:24.346365 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gvbw8\" (UniqueName: \"kubernetes.io/projected/dd7f6d9e-c491-4dca-a54a-01155559175f-kube-api-access-gvbw8\") pod \"dd7f6d9e-c491-4dca-a54a-01155559175f\" (UID: \"dd7f6d9e-c491-4dca-a54a-01155559175f\") " Nov 29 07:54:24 crc kubenswrapper[4731]: I1129 07:54:24.346720 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd7f6d9e-c491-4dca-a54a-01155559175f-utilities\") pod \"dd7f6d9e-c491-4dca-a54a-01155559175f\" (UID: \"dd7f6d9e-c491-4dca-a54a-01155559175f\") " Nov 29 07:54:24 crc kubenswrapper[4731]: I1129 07:54:24.346848 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd7f6d9e-c491-4dca-a54a-01155559175f-catalog-content\") pod \"dd7f6d9e-c491-4dca-a54a-01155559175f\" (UID: \"dd7f6d9e-c491-4dca-a54a-01155559175f\") " Nov 29 07:54:24 crc kubenswrapper[4731]: I1129 07:54:24.347703 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd7f6d9e-c491-4dca-a54a-01155559175f-utilities" (OuterVolumeSpecName: "utilities") pod "dd7f6d9e-c491-4dca-a54a-01155559175f" (UID: "dd7f6d9e-c491-4dca-a54a-01155559175f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:54:24 crc kubenswrapper[4731]: I1129 07:54:24.367277 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd7f6d9e-c491-4dca-a54a-01155559175f-kube-api-access-gvbw8" (OuterVolumeSpecName: "kube-api-access-gvbw8") pod "dd7f6d9e-c491-4dca-a54a-01155559175f" (UID: "dd7f6d9e-c491-4dca-a54a-01155559175f"). InnerVolumeSpecName "kube-api-access-gvbw8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:54:24 crc kubenswrapper[4731]: I1129 07:54:24.412503 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd7f6d9e-c491-4dca-a54a-01155559175f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dd7f6d9e-c491-4dca-a54a-01155559175f" (UID: "dd7f6d9e-c491-4dca-a54a-01155559175f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:54:24 crc kubenswrapper[4731]: I1129 07:54:24.449482 4731 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd7f6d9e-c491-4dca-a54a-01155559175f-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 07:54:24 crc kubenswrapper[4731]: I1129 07:54:24.449515 4731 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd7f6d9e-c491-4dca-a54a-01155559175f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 07:54:24 crc kubenswrapper[4731]: I1129 07:54:24.449527 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gvbw8\" (UniqueName: \"kubernetes.io/projected/dd7f6d9e-c491-4dca-a54a-01155559175f-kube-api-access-gvbw8\") on node \"crc\" DevicePath \"\"" Nov 29 07:54:24 crc kubenswrapper[4731]: I1129 07:54:24.840172 4731 generic.go:334] "Generic (PLEG): container finished" podID="dd7f6d9e-c491-4dca-a54a-01155559175f" containerID="b97130757fe13166fa212b84d4654b88a938add8c0ed8c4c1adc0fb2afad0804" exitCode=0 Nov 29 07:54:24 crc kubenswrapper[4731]: I1129 07:54:24.840233 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s4tb2" event={"ID":"dd7f6d9e-c491-4dca-a54a-01155559175f","Type":"ContainerDied","Data":"b97130757fe13166fa212b84d4654b88a938add8c0ed8c4c1adc0fb2afad0804"} Nov 29 07:54:24 crc kubenswrapper[4731]: I1129 07:54:24.840257 4731 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/certified-operators-s4tb2" Nov 29 07:54:24 crc kubenswrapper[4731]: I1129 07:54:24.840279 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s4tb2" event={"ID":"dd7f6d9e-c491-4dca-a54a-01155559175f","Type":"ContainerDied","Data":"9220f2ce2fdb16682209e99294b173b7a93aa8ccade13df4bbf5eff68c989386"} Nov 29 07:54:24 crc kubenswrapper[4731]: I1129 07:54:24.840314 4731 scope.go:117] "RemoveContainer" containerID="b97130757fe13166fa212b84d4654b88a938add8c0ed8c4c1adc0fb2afad0804" Nov 29 07:54:24 crc kubenswrapper[4731]: I1129 07:54:24.870342 4731 scope.go:117] "RemoveContainer" containerID="7218437475d1162db073e5a87aa3b1754dbadb9aa7c10322e83ecd16c23cd0d1" Nov 29 07:54:24 crc kubenswrapper[4731]: I1129 07:54:24.890101 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-s4tb2"] Nov 29 07:54:24 crc kubenswrapper[4731]: I1129 07:54:24.900586 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-s4tb2"] Nov 29 07:54:24 crc kubenswrapper[4731]: I1129 07:54:24.917710 4731 scope.go:117] "RemoveContainer" containerID="eaaad141da0ed697bff4c9190a9a278dd866e295f3d41b976be99de1f44d4a15" Nov 29 07:54:24 crc kubenswrapper[4731]: I1129 07:54:24.974868 4731 scope.go:117] "RemoveContainer" containerID="b97130757fe13166fa212b84d4654b88a938add8c0ed8c4c1adc0fb2afad0804" Nov 29 07:54:24 crc kubenswrapper[4731]: E1129 07:54:24.975720 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b97130757fe13166fa212b84d4654b88a938add8c0ed8c4c1adc0fb2afad0804\": container with ID starting with b97130757fe13166fa212b84d4654b88a938add8c0ed8c4c1adc0fb2afad0804 not found: ID does not exist" containerID="b97130757fe13166fa212b84d4654b88a938add8c0ed8c4c1adc0fb2afad0804" Nov 29 07:54:24 crc kubenswrapper[4731]: I1129 07:54:24.975779 
4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b97130757fe13166fa212b84d4654b88a938add8c0ed8c4c1adc0fb2afad0804"} err="failed to get container status \"b97130757fe13166fa212b84d4654b88a938add8c0ed8c4c1adc0fb2afad0804\": rpc error: code = NotFound desc = could not find container \"b97130757fe13166fa212b84d4654b88a938add8c0ed8c4c1adc0fb2afad0804\": container with ID starting with b97130757fe13166fa212b84d4654b88a938add8c0ed8c4c1adc0fb2afad0804 not found: ID does not exist" Nov 29 07:54:24 crc kubenswrapper[4731]: I1129 07:54:24.975816 4731 scope.go:117] "RemoveContainer" containerID="7218437475d1162db073e5a87aa3b1754dbadb9aa7c10322e83ecd16c23cd0d1" Nov 29 07:54:24 crc kubenswrapper[4731]: E1129 07:54:24.976713 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7218437475d1162db073e5a87aa3b1754dbadb9aa7c10322e83ecd16c23cd0d1\": container with ID starting with 7218437475d1162db073e5a87aa3b1754dbadb9aa7c10322e83ecd16c23cd0d1 not found: ID does not exist" containerID="7218437475d1162db073e5a87aa3b1754dbadb9aa7c10322e83ecd16c23cd0d1" Nov 29 07:54:24 crc kubenswrapper[4731]: I1129 07:54:24.976780 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7218437475d1162db073e5a87aa3b1754dbadb9aa7c10322e83ecd16c23cd0d1"} err="failed to get container status \"7218437475d1162db073e5a87aa3b1754dbadb9aa7c10322e83ecd16c23cd0d1\": rpc error: code = NotFound desc = could not find container \"7218437475d1162db073e5a87aa3b1754dbadb9aa7c10322e83ecd16c23cd0d1\": container with ID starting with 7218437475d1162db073e5a87aa3b1754dbadb9aa7c10322e83ecd16c23cd0d1 not found: ID does not exist" Nov 29 07:54:24 crc kubenswrapper[4731]: I1129 07:54:24.976822 4731 scope.go:117] "RemoveContainer" containerID="eaaad141da0ed697bff4c9190a9a278dd866e295f3d41b976be99de1f44d4a15" Nov 29 07:54:24 crc kubenswrapper[4731]: E1129 
07:54:24.979231 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eaaad141da0ed697bff4c9190a9a278dd866e295f3d41b976be99de1f44d4a15\": container with ID starting with eaaad141da0ed697bff4c9190a9a278dd866e295f3d41b976be99de1f44d4a15 not found: ID does not exist" containerID="eaaad141da0ed697bff4c9190a9a278dd866e295f3d41b976be99de1f44d4a15" Nov 29 07:54:24 crc kubenswrapper[4731]: I1129 07:54:24.979270 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eaaad141da0ed697bff4c9190a9a278dd866e295f3d41b976be99de1f44d4a15"} err="failed to get container status \"eaaad141da0ed697bff4c9190a9a278dd866e295f3d41b976be99de1f44d4a15\": rpc error: code = NotFound desc = could not find container \"eaaad141da0ed697bff4c9190a9a278dd866e295f3d41b976be99de1f44d4a15\": container with ID starting with eaaad141da0ed697bff4c9190a9a278dd866e295f3d41b976be99de1f44d4a15 not found: ID does not exist" Nov 29 07:54:25 crc kubenswrapper[4731]: I1129 07:54:25.824417 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd7f6d9e-c491-4dca-a54a-01155559175f" path="/var/lib/kubelet/pods/dd7f6d9e-c491-4dca-a54a-01155559175f/volumes" Nov 29 07:54:33 crc kubenswrapper[4731]: I1129 07:54:33.003073 4731 patch_prober.go:28] interesting pod/machine-config-daemon-rscr8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:54:33 crc kubenswrapper[4731]: I1129 07:54:33.003556 4731 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Nov 29 07:54:33 crc kubenswrapper[4731]: I1129 07:54:33.003653 4731 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" Nov 29 07:54:33 crc kubenswrapper[4731]: I1129 07:54:33.004487 4731 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a820b45e9ece2d10a7875ab0f15887682131ca0d435fd77c25457bfb1abeed7f"} pod="openshift-machine-config-operator/machine-config-daemon-rscr8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 29 07:54:33 crc kubenswrapper[4731]: I1129 07:54:33.004546 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" containerName="machine-config-daemon" containerID="cri-o://a820b45e9ece2d10a7875ab0f15887682131ca0d435fd77c25457bfb1abeed7f" gracePeriod=600 Nov 29 07:54:33 crc kubenswrapper[4731]: E1129 07:54:33.127849 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" Nov 29 07:54:33 crc kubenswrapper[4731]: I1129 07:54:33.942730 4731 generic.go:334] "Generic (PLEG): container finished" podID="2302dbb7-38db-4752-a5d0-2d055da3aec3" containerID="a820b45e9ece2d10a7875ab0f15887682131ca0d435fd77c25457bfb1abeed7f" exitCode=0 Nov 29 07:54:33 crc kubenswrapper[4731]: I1129 07:54:33.942847 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" 
event={"ID":"2302dbb7-38db-4752-a5d0-2d055da3aec3","Type":"ContainerDied","Data":"a820b45e9ece2d10a7875ab0f15887682131ca0d435fd77c25457bfb1abeed7f"} Nov 29 07:54:33 crc kubenswrapper[4731]: I1129 07:54:33.943077 4731 scope.go:117] "RemoveContainer" containerID="3c519b59b3163c99f3ed432a57f0193c05d933b6a8bb33a617a562a4fda90905" Nov 29 07:54:33 crc kubenswrapper[4731]: I1129 07:54:33.943871 4731 scope.go:117] "RemoveContainer" containerID="a820b45e9ece2d10a7875ab0f15887682131ca0d435fd77c25457bfb1abeed7f" Nov 29 07:54:33 crc kubenswrapper[4731]: E1129 07:54:33.944213 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" Nov 29 07:54:46 crc kubenswrapper[4731]: I1129 07:54:46.806861 4731 scope.go:117] "RemoveContainer" containerID="a820b45e9ece2d10a7875ab0f15887682131ca0d435fd77c25457bfb1abeed7f" Nov 29 07:54:46 crc kubenswrapper[4731]: E1129 07:54:46.807847 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" Nov 29 07:55:01 crc kubenswrapper[4731]: I1129 07:55:01.815766 4731 scope.go:117] "RemoveContainer" containerID="a820b45e9ece2d10a7875ab0f15887682131ca0d435fd77c25457bfb1abeed7f" Nov 29 07:55:01 crc kubenswrapper[4731]: E1129 07:55:01.816525 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" Nov 29 07:55:15 crc kubenswrapper[4731]: I1129 07:55:15.517505 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Nov 29 07:55:15 crc kubenswrapper[4731]: E1129 07:55:15.518438 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd7f6d9e-c491-4dca-a54a-01155559175f" containerName="registry-server" Nov 29 07:55:15 crc kubenswrapper[4731]: I1129 07:55:15.518453 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd7f6d9e-c491-4dca-a54a-01155559175f" containerName="registry-server" Nov 29 07:55:15 crc kubenswrapper[4731]: E1129 07:55:15.518472 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd7f6d9e-c491-4dca-a54a-01155559175f" containerName="extract-utilities" Nov 29 07:55:15 crc kubenswrapper[4731]: I1129 07:55:15.518479 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd7f6d9e-c491-4dca-a54a-01155559175f" containerName="extract-utilities" Nov 29 07:55:15 crc kubenswrapper[4731]: E1129 07:55:15.518494 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e587ad4-40e6-4719-a23b-ff5035f40152" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Nov 29 07:55:15 crc kubenswrapper[4731]: I1129 07:55:15.518504 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e587ad4-40e6-4719-a23b-ff5035f40152" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Nov 29 07:55:15 crc kubenswrapper[4731]: E1129 07:55:15.518530 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd7f6d9e-c491-4dca-a54a-01155559175f" containerName="extract-content" Nov 29 07:55:15 crc 
kubenswrapper[4731]: I1129 07:55:15.518537 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd7f6d9e-c491-4dca-a54a-01155559175f" containerName="extract-content" Nov 29 07:55:15 crc kubenswrapper[4731]: I1129 07:55:15.518743 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd7f6d9e-c491-4dca-a54a-01155559175f" containerName="registry-server" Nov 29 07:55:15 crc kubenswrapper[4731]: I1129 07:55:15.518770 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e587ad4-40e6-4719-a23b-ff5035f40152" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Nov 29 07:55:15 crc kubenswrapper[4731]: I1129 07:55:15.519458 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Nov 29 07:55:15 crc kubenswrapper[4731]: I1129 07:55:15.519546 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 29 07:55:15 crc kubenswrapper[4731]: I1129 07:55:15.547670 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Nov 29 07:55:15 crc kubenswrapper[4731]: I1129 07:55:15.548421 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Nov 29 07:55:15 crc kubenswrapper[4731]: I1129 07:55:15.548945 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Nov 29 07:55:15 crc kubenswrapper[4731]: I1129 07:55:15.548945 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-x22gz" Nov 29 07:55:15 crc kubenswrapper[4731]: I1129 07:55:15.554255 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a75de2e0-7593-49ac-bcf7-41705892c633-openstack-config\") pod \"tempest-tests-tempest\" (UID: 
\"a75de2e0-7593-49ac-bcf7-41705892c633\") " pod="openstack/tempest-tests-tempest" Nov 29 07:55:15 crc kubenswrapper[4731]: I1129 07:55:15.554336 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a75de2e0-7593-49ac-bcf7-41705892c633-config-data\") pod \"tempest-tests-tempest\" (UID: \"a75de2e0-7593-49ac-bcf7-41705892c633\") " pod="openstack/tempest-tests-tempest" Nov 29 07:55:15 crc kubenswrapper[4731]: I1129 07:55:15.554461 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a75de2e0-7593-49ac-bcf7-41705892c633-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"a75de2e0-7593-49ac-bcf7-41705892c633\") " pod="openstack/tempest-tests-tempest" Nov 29 07:55:15 crc kubenswrapper[4731]: I1129 07:55:15.658448 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a75de2e0-7593-49ac-bcf7-41705892c633-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"a75de2e0-7593-49ac-bcf7-41705892c633\") " pod="openstack/tempest-tests-tempest" Nov 29 07:55:15 crc kubenswrapper[4731]: I1129 07:55:15.658550 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/a75de2e0-7593-49ac-bcf7-41705892c633-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"a75de2e0-7593-49ac-bcf7-41705892c633\") " pod="openstack/tempest-tests-tempest" Nov 29 07:55:15 crc kubenswrapper[4731]: I1129 07:55:15.658652 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a75de2e0-7593-49ac-bcf7-41705892c633-config-data\") pod \"tempest-tests-tempest\" (UID: \"a75de2e0-7593-49ac-bcf7-41705892c633\") " 
pod="openstack/tempest-tests-tempest" Nov 29 07:55:15 crc kubenswrapper[4731]: I1129 07:55:15.658696 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"tempest-tests-tempest\" (UID: \"a75de2e0-7593-49ac-bcf7-41705892c633\") " pod="openstack/tempest-tests-tempest" Nov 29 07:55:15 crc kubenswrapper[4731]: I1129 07:55:15.658776 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a75de2e0-7593-49ac-bcf7-41705892c633-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"a75de2e0-7593-49ac-bcf7-41705892c633\") " pod="openstack/tempest-tests-tempest" Nov 29 07:55:15 crc kubenswrapper[4731]: I1129 07:55:15.658826 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/a75de2e0-7593-49ac-bcf7-41705892c633-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"a75de2e0-7593-49ac-bcf7-41705892c633\") " pod="openstack/tempest-tests-tempest" Nov 29 07:55:15 crc kubenswrapper[4731]: I1129 07:55:15.658879 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/a75de2e0-7593-49ac-bcf7-41705892c633-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"a75de2e0-7593-49ac-bcf7-41705892c633\") " pod="openstack/tempest-tests-tempest" Nov 29 07:55:15 crc kubenswrapper[4731]: I1129 07:55:15.658950 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kkhh\" (UniqueName: \"kubernetes.io/projected/a75de2e0-7593-49ac-bcf7-41705892c633-kube-api-access-4kkhh\") pod \"tempest-tests-tempest\" (UID: 
\"a75de2e0-7593-49ac-bcf7-41705892c633\") " pod="openstack/tempest-tests-tempest" Nov 29 07:55:15 crc kubenswrapper[4731]: I1129 07:55:15.658979 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a75de2e0-7593-49ac-bcf7-41705892c633-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"a75de2e0-7593-49ac-bcf7-41705892c633\") " pod="openstack/tempest-tests-tempest" Nov 29 07:55:15 crc kubenswrapper[4731]: I1129 07:55:15.660325 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a75de2e0-7593-49ac-bcf7-41705892c633-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"a75de2e0-7593-49ac-bcf7-41705892c633\") " pod="openstack/tempest-tests-tempest" Nov 29 07:55:15 crc kubenswrapper[4731]: I1129 07:55:15.662205 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a75de2e0-7593-49ac-bcf7-41705892c633-config-data\") pod \"tempest-tests-tempest\" (UID: \"a75de2e0-7593-49ac-bcf7-41705892c633\") " pod="openstack/tempest-tests-tempest" Nov 29 07:55:15 crc kubenswrapper[4731]: I1129 07:55:15.669180 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a75de2e0-7593-49ac-bcf7-41705892c633-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"a75de2e0-7593-49ac-bcf7-41705892c633\") " pod="openstack/tempest-tests-tempest" Nov 29 07:55:15 crc kubenswrapper[4731]: I1129 07:55:15.760384 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/a75de2e0-7593-49ac-bcf7-41705892c633-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"a75de2e0-7593-49ac-bcf7-41705892c633\") " pod="openstack/tempest-tests-tempest" Nov 
29 07:55:15 crc kubenswrapper[4731]: I1129 07:55:15.760458 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4kkhh\" (UniqueName: \"kubernetes.io/projected/a75de2e0-7593-49ac-bcf7-41705892c633-kube-api-access-4kkhh\") pod \"tempest-tests-tempest\" (UID: \"a75de2e0-7593-49ac-bcf7-41705892c633\") " pod="openstack/tempest-tests-tempest" Nov 29 07:55:15 crc kubenswrapper[4731]: I1129 07:55:15.760582 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/a75de2e0-7593-49ac-bcf7-41705892c633-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"a75de2e0-7593-49ac-bcf7-41705892c633\") " pod="openstack/tempest-tests-tempest" Nov 29 07:55:15 crc kubenswrapper[4731]: I1129 07:55:15.760630 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"tempest-tests-tempest\" (UID: \"a75de2e0-7593-49ac-bcf7-41705892c633\") " pod="openstack/tempest-tests-tempest" Nov 29 07:55:15 crc kubenswrapper[4731]: I1129 07:55:15.760680 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a75de2e0-7593-49ac-bcf7-41705892c633-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"a75de2e0-7593-49ac-bcf7-41705892c633\") " pod="openstack/tempest-tests-tempest" Nov 29 07:55:15 crc kubenswrapper[4731]: I1129 07:55:15.760703 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/a75de2e0-7593-49ac-bcf7-41705892c633-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"a75de2e0-7593-49ac-bcf7-41705892c633\") " pod="openstack/tempest-tests-tempest" Nov 29 07:55:15 crc kubenswrapper[4731]: I1129 07:55:15.760985 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/a75de2e0-7593-49ac-bcf7-41705892c633-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"a75de2e0-7593-49ac-bcf7-41705892c633\") " pod="openstack/tempest-tests-tempest" Nov 29 07:55:15 crc kubenswrapper[4731]: I1129 07:55:15.761078 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/a75de2e0-7593-49ac-bcf7-41705892c633-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"a75de2e0-7593-49ac-bcf7-41705892c633\") " pod="openstack/tempest-tests-tempest" Nov 29 07:55:15 crc kubenswrapper[4731]: I1129 07:55:15.761447 4731 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"tempest-tests-tempest\" (UID: \"a75de2e0-7593-49ac-bcf7-41705892c633\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/tempest-tests-tempest" Nov 29 07:55:15 crc kubenswrapper[4731]: I1129 07:55:15.766314 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a75de2e0-7593-49ac-bcf7-41705892c633-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"a75de2e0-7593-49ac-bcf7-41705892c633\") " pod="openstack/tempest-tests-tempest" Nov 29 07:55:15 crc kubenswrapper[4731]: I1129 07:55:15.766429 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/a75de2e0-7593-49ac-bcf7-41705892c633-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"a75de2e0-7593-49ac-bcf7-41705892c633\") " pod="openstack/tempest-tests-tempest" Nov 29 07:55:15 crc kubenswrapper[4731]: I1129 07:55:15.781267 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kkhh\" (UniqueName: 
\"kubernetes.io/projected/a75de2e0-7593-49ac-bcf7-41705892c633-kube-api-access-4kkhh\") pod \"tempest-tests-tempest\" (UID: \"a75de2e0-7593-49ac-bcf7-41705892c633\") " pod="openstack/tempest-tests-tempest" Nov 29 07:55:15 crc kubenswrapper[4731]: I1129 07:55:15.789164 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"tempest-tests-tempest\" (UID: \"a75de2e0-7593-49ac-bcf7-41705892c633\") " pod="openstack/tempest-tests-tempest" Nov 29 07:55:15 crc kubenswrapper[4731]: I1129 07:55:15.870533 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 29 07:55:16 crc kubenswrapper[4731]: I1129 07:55:16.375197 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Nov 29 07:55:16 crc kubenswrapper[4731]: W1129 07:55:16.378494 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda75de2e0_7593_49ac_bcf7_41705892c633.slice/crio-908ec4c1364df4c05b0de62ccd2b762beb8942a34414a48b7655ed618667a7b1 WatchSource:0}: Error finding container 908ec4c1364df4c05b0de62ccd2b762beb8942a34414a48b7655ed618667a7b1: Status 404 returned error can't find the container with id 908ec4c1364df4c05b0de62ccd2b762beb8942a34414a48b7655ed618667a7b1 Nov 29 07:55:16 crc kubenswrapper[4731]: I1129 07:55:16.806941 4731 scope.go:117] "RemoveContainer" containerID="a820b45e9ece2d10a7875ab0f15887682131ca0d435fd77c25457bfb1abeed7f" Nov 29 07:55:16 crc kubenswrapper[4731]: E1129 07:55:16.807232 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" Nov 29 07:55:17 crc kubenswrapper[4731]: I1129 07:55:17.392501 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"a75de2e0-7593-49ac-bcf7-41705892c633","Type":"ContainerStarted","Data":"908ec4c1364df4c05b0de62ccd2b762beb8942a34414a48b7655ed618667a7b1"} Nov 29 07:55:27 crc kubenswrapper[4731]: I1129 07:55:27.807885 4731 scope.go:117] "RemoveContainer" containerID="a820b45e9ece2d10a7875ab0f15887682131ca0d435fd77c25457bfb1abeed7f" Nov 29 07:55:27 crc kubenswrapper[4731]: E1129 07:55:27.809083 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" Nov 29 07:55:40 crc kubenswrapper[4731]: I1129 07:55:40.336106 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rtb5d"] Nov 29 07:55:40 crc kubenswrapper[4731]: I1129 07:55:40.339856 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rtb5d" Nov 29 07:55:40 crc kubenswrapper[4731]: I1129 07:55:40.348891 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rtb5d"] Nov 29 07:55:40 crc kubenswrapper[4731]: I1129 07:55:40.389723 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/129a2179-4f8a-4b59-b7cb-a83e80d76c84-utilities\") pod \"community-operators-rtb5d\" (UID: \"129a2179-4f8a-4b59-b7cb-a83e80d76c84\") " pod="openshift-marketplace/community-operators-rtb5d" Nov 29 07:55:40 crc kubenswrapper[4731]: I1129 07:55:40.389788 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjqjr\" (UniqueName: \"kubernetes.io/projected/129a2179-4f8a-4b59-b7cb-a83e80d76c84-kube-api-access-hjqjr\") pod \"community-operators-rtb5d\" (UID: \"129a2179-4f8a-4b59-b7cb-a83e80d76c84\") " pod="openshift-marketplace/community-operators-rtb5d" Nov 29 07:55:40 crc kubenswrapper[4731]: I1129 07:55:40.389855 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/129a2179-4f8a-4b59-b7cb-a83e80d76c84-catalog-content\") pod \"community-operators-rtb5d\" (UID: \"129a2179-4f8a-4b59-b7cb-a83e80d76c84\") " pod="openshift-marketplace/community-operators-rtb5d" Nov 29 07:55:40 crc kubenswrapper[4731]: I1129 07:55:40.491642 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjqjr\" (UniqueName: \"kubernetes.io/projected/129a2179-4f8a-4b59-b7cb-a83e80d76c84-kube-api-access-hjqjr\") pod \"community-operators-rtb5d\" (UID: \"129a2179-4f8a-4b59-b7cb-a83e80d76c84\") " pod="openshift-marketplace/community-operators-rtb5d" Nov 29 07:55:40 crc kubenswrapper[4731]: I1129 07:55:40.491730 4731 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/129a2179-4f8a-4b59-b7cb-a83e80d76c84-catalog-content\") pod \"community-operators-rtb5d\" (UID: \"129a2179-4f8a-4b59-b7cb-a83e80d76c84\") " pod="openshift-marketplace/community-operators-rtb5d" Nov 29 07:55:40 crc kubenswrapper[4731]: I1129 07:55:40.491907 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/129a2179-4f8a-4b59-b7cb-a83e80d76c84-utilities\") pod \"community-operators-rtb5d\" (UID: \"129a2179-4f8a-4b59-b7cb-a83e80d76c84\") " pod="openshift-marketplace/community-operators-rtb5d" Nov 29 07:55:40 crc kubenswrapper[4731]: I1129 07:55:40.492300 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/129a2179-4f8a-4b59-b7cb-a83e80d76c84-catalog-content\") pod \"community-operators-rtb5d\" (UID: \"129a2179-4f8a-4b59-b7cb-a83e80d76c84\") " pod="openshift-marketplace/community-operators-rtb5d" Nov 29 07:55:40 crc kubenswrapper[4731]: I1129 07:55:40.492314 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/129a2179-4f8a-4b59-b7cb-a83e80d76c84-utilities\") pod \"community-operators-rtb5d\" (UID: \"129a2179-4f8a-4b59-b7cb-a83e80d76c84\") " pod="openshift-marketplace/community-operators-rtb5d" Nov 29 07:55:40 crc kubenswrapper[4731]: I1129 07:55:40.515276 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjqjr\" (UniqueName: \"kubernetes.io/projected/129a2179-4f8a-4b59-b7cb-a83e80d76c84-kube-api-access-hjqjr\") pod \"community-operators-rtb5d\" (UID: \"129a2179-4f8a-4b59-b7cb-a83e80d76c84\") " pod="openshift-marketplace/community-operators-rtb5d" Nov 29 07:55:40 crc kubenswrapper[4731]: I1129 07:55:40.672968 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rtb5d" Nov 29 07:55:41 crc kubenswrapper[4731]: I1129 07:55:41.814843 4731 scope.go:117] "RemoveContainer" containerID="a820b45e9ece2d10a7875ab0f15887682131ca0d435fd77c25457bfb1abeed7f" Nov 29 07:55:41 crc kubenswrapper[4731]: E1129 07:55:41.815131 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" Nov 29 07:55:49 crc kubenswrapper[4731]: E1129 07:55:49.625674 4731 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Nov 29 07:55:49 crc kubenswrapper[4731]: E1129 07:55:49.632656 4731 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4kkhh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:n
il,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(a75de2e0-7593-49ac-bcf7-41705892c633): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 07:55:49 crc kubenswrapper[4731]: E1129 07:55:49.634251 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="a75de2e0-7593-49ac-bcf7-41705892c633" Nov 29 07:55:49 crc kubenswrapper[4731]: E1129 07:55:49.836937 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="a75de2e0-7593-49ac-bcf7-41705892c633" Nov 29 07:55:50 crc 
kubenswrapper[4731]: I1129 07:55:50.023837 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rtb5d"] Nov 29 07:55:50 crc kubenswrapper[4731]: I1129 07:55:50.847197 4731 generic.go:334] "Generic (PLEG): container finished" podID="129a2179-4f8a-4b59-b7cb-a83e80d76c84" containerID="ab7b6c1b8953371e02abb805cc4adce789cf27d80db4c230ba4b9cd7619a4937" exitCode=0 Nov 29 07:55:50 crc kubenswrapper[4731]: I1129 07:55:50.847292 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rtb5d" event={"ID":"129a2179-4f8a-4b59-b7cb-a83e80d76c84","Type":"ContainerDied","Data":"ab7b6c1b8953371e02abb805cc4adce789cf27d80db4c230ba4b9cd7619a4937"} Nov 29 07:55:50 crc kubenswrapper[4731]: I1129 07:55:50.847686 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rtb5d" event={"ID":"129a2179-4f8a-4b59-b7cb-a83e80d76c84","Type":"ContainerStarted","Data":"59d466686d90a5e7ac2df159d8fb74264428a264d13428bac9b2436cb61644bf"} Nov 29 07:55:51 crc kubenswrapper[4731]: I1129 07:55:51.860769 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rtb5d" event={"ID":"129a2179-4f8a-4b59-b7cb-a83e80d76c84","Type":"ContainerStarted","Data":"7ed3c599dcc61b6a0797da57e1f1b83c5e69b3d82c74f161836103454ecc24ed"} Nov 29 07:55:52 crc kubenswrapper[4731]: I1129 07:55:52.872682 4731 generic.go:334] "Generic (PLEG): container finished" podID="129a2179-4f8a-4b59-b7cb-a83e80d76c84" containerID="7ed3c599dcc61b6a0797da57e1f1b83c5e69b3d82c74f161836103454ecc24ed" exitCode=0 Nov 29 07:55:52 crc kubenswrapper[4731]: I1129 07:55:52.873706 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rtb5d" event={"ID":"129a2179-4f8a-4b59-b7cb-a83e80d76c84","Type":"ContainerDied","Data":"7ed3c599dcc61b6a0797da57e1f1b83c5e69b3d82c74f161836103454ecc24ed"} Nov 29 07:55:53 crc kubenswrapper[4731]: 
I1129 07:55:53.884492 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rtb5d" event={"ID":"129a2179-4f8a-4b59-b7cb-a83e80d76c84","Type":"ContainerStarted","Data":"12cc63d4fbdee8613abdac6be8d9eba76a1cb5c67ee50177b9217e3d3d7345a5"} Nov 29 07:55:53 crc kubenswrapper[4731]: I1129 07:55:53.911516 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rtb5d" podStartSLOduration=11.393706427 podStartE2EDuration="13.911493293s" podCreationTimestamp="2025-11-29 07:55:40 +0000 UTC" firstStartedPulling="2025-11-29 07:55:50.851629363 +0000 UTC m=+2989.741990506" lastFinishedPulling="2025-11-29 07:55:53.369416279 +0000 UTC m=+2992.259777372" observedRunningTime="2025-11-29 07:55:53.903258827 +0000 UTC m=+2992.793619940" watchObservedRunningTime="2025-11-29 07:55:53.911493293 +0000 UTC m=+2992.801854386" Nov 29 07:55:55 crc kubenswrapper[4731]: I1129 07:55:55.808237 4731 scope.go:117] "RemoveContainer" containerID="a820b45e9ece2d10a7875ab0f15887682131ca0d435fd77c25457bfb1abeed7f" Nov 29 07:55:55 crc kubenswrapper[4731]: E1129 07:55:55.809538 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" Nov 29 07:56:00 crc kubenswrapper[4731]: I1129 07:56:00.673459 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rtb5d" Nov 29 07:56:00 crc kubenswrapper[4731]: I1129 07:56:00.674206 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rtb5d" Nov 29 07:56:00 crc 
kubenswrapper[4731]: I1129 07:56:00.731540 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rtb5d" Nov 29 07:56:01 crc kubenswrapper[4731]: I1129 07:56:01.021809 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rtb5d" Nov 29 07:56:01 crc kubenswrapper[4731]: I1129 07:56:01.081273 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rtb5d"] Nov 29 07:56:02 crc kubenswrapper[4731]: I1129 07:56:02.973684 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rtb5d" podUID="129a2179-4f8a-4b59-b7cb-a83e80d76c84" containerName="registry-server" containerID="cri-o://12cc63d4fbdee8613abdac6be8d9eba76a1cb5c67ee50177b9217e3d3d7345a5" gracePeriod=2 Nov 29 07:56:03 crc kubenswrapper[4731]: I1129 07:56:03.987993 4731 generic.go:334] "Generic (PLEG): container finished" podID="129a2179-4f8a-4b59-b7cb-a83e80d76c84" containerID="12cc63d4fbdee8613abdac6be8d9eba76a1cb5c67ee50177b9217e3d3d7345a5" exitCode=0 Nov 29 07:56:03 crc kubenswrapper[4731]: I1129 07:56:03.988108 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rtb5d" event={"ID":"129a2179-4f8a-4b59-b7cb-a83e80d76c84","Type":"ContainerDied","Data":"12cc63d4fbdee8613abdac6be8d9eba76a1cb5c67ee50177b9217e3d3d7345a5"} Nov 29 07:56:05 crc kubenswrapper[4731]: I1129 07:56:05.614819 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Nov 29 07:56:05 crc kubenswrapper[4731]: I1129 07:56:05.846815 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rtb5d" Nov 29 07:56:05 crc kubenswrapper[4731]: I1129 07:56:05.946462 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/129a2179-4f8a-4b59-b7cb-a83e80d76c84-utilities\") pod \"129a2179-4f8a-4b59-b7cb-a83e80d76c84\" (UID: \"129a2179-4f8a-4b59-b7cb-a83e80d76c84\") " Nov 29 07:56:05 crc kubenswrapper[4731]: I1129 07:56:05.946554 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/129a2179-4f8a-4b59-b7cb-a83e80d76c84-catalog-content\") pod \"129a2179-4f8a-4b59-b7cb-a83e80d76c84\" (UID: \"129a2179-4f8a-4b59-b7cb-a83e80d76c84\") " Nov 29 07:56:05 crc kubenswrapper[4731]: I1129 07:56:05.946804 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hjqjr\" (UniqueName: \"kubernetes.io/projected/129a2179-4f8a-4b59-b7cb-a83e80d76c84-kube-api-access-hjqjr\") pod \"129a2179-4f8a-4b59-b7cb-a83e80d76c84\" (UID: \"129a2179-4f8a-4b59-b7cb-a83e80d76c84\") " Nov 29 07:56:05 crc kubenswrapper[4731]: I1129 07:56:05.948105 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/129a2179-4f8a-4b59-b7cb-a83e80d76c84-utilities" (OuterVolumeSpecName: "utilities") pod "129a2179-4f8a-4b59-b7cb-a83e80d76c84" (UID: "129a2179-4f8a-4b59-b7cb-a83e80d76c84"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:56:05 crc kubenswrapper[4731]: I1129 07:56:05.957842 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/129a2179-4f8a-4b59-b7cb-a83e80d76c84-kube-api-access-hjqjr" (OuterVolumeSpecName: "kube-api-access-hjqjr") pod "129a2179-4f8a-4b59-b7cb-a83e80d76c84" (UID: "129a2179-4f8a-4b59-b7cb-a83e80d76c84"). InnerVolumeSpecName "kube-api-access-hjqjr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:56:06 crc kubenswrapper[4731]: I1129 07:56:06.009691 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/129a2179-4f8a-4b59-b7cb-a83e80d76c84-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "129a2179-4f8a-4b59-b7cb-a83e80d76c84" (UID: "129a2179-4f8a-4b59-b7cb-a83e80d76c84"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:56:06 crc kubenswrapper[4731]: I1129 07:56:06.016845 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rtb5d" event={"ID":"129a2179-4f8a-4b59-b7cb-a83e80d76c84","Type":"ContainerDied","Data":"59d466686d90a5e7ac2df159d8fb74264428a264d13428bac9b2436cb61644bf"} Nov 29 07:56:06 crc kubenswrapper[4731]: I1129 07:56:06.017048 4731 scope.go:117] "RemoveContainer" containerID="12cc63d4fbdee8613abdac6be8d9eba76a1cb5c67ee50177b9217e3d3d7345a5" Nov 29 07:56:06 crc kubenswrapper[4731]: I1129 07:56:06.016913 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rtb5d" Nov 29 07:56:06 crc kubenswrapper[4731]: I1129 07:56:06.040301 4731 scope.go:117] "RemoveContainer" containerID="7ed3c599dcc61b6a0797da57e1f1b83c5e69b3d82c74f161836103454ecc24ed" Nov 29 07:56:06 crc kubenswrapper[4731]: I1129 07:56:06.053026 4731 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/129a2179-4f8a-4b59-b7cb-a83e80d76c84-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 07:56:06 crc kubenswrapper[4731]: I1129 07:56:06.053247 4731 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/129a2179-4f8a-4b59-b7cb-a83e80d76c84-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 07:56:06 crc kubenswrapper[4731]: I1129 07:56:06.053354 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hjqjr\" (UniqueName: \"kubernetes.io/projected/129a2179-4f8a-4b59-b7cb-a83e80d76c84-kube-api-access-hjqjr\") on node \"crc\" DevicePath \"\"" Nov 29 07:56:06 crc kubenswrapper[4731]: I1129 07:56:06.064531 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rtb5d"] Nov 29 07:56:06 crc kubenswrapper[4731]: I1129 07:56:06.073431 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rtb5d"] Nov 29 07:56:06 crc kubenswrapper[4731]: I1129 07:56:06.091542 4731 scope.go:117] "RemoveContainer" containerID="ab7b6c1b8953371e02abb805cc4adce789cf27d80db4c230ba4b9cd7619a4937" Nov 29 07:56:07 crc kubenswrapper[4731]: I1129 07:56:07.035517 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"a75de2e0-7593-49ac-bcf7-41705892c633","Type":"ContainerStarted","Data":"d40bb357639cec64226a0e845522fae24e142f391b1fc3539e63cd70f594f11f"} Nov 29 07:56:07 crc kubenswrapper[4731]: I1129 07:56:07.073836 4731 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=3.844167563 podStartE2EDuration="53.073802675s" podCreationTimestamp="2025-11-29 07:55:14 +0000 UTC" firstStartedPulling="2025-11-29 07:55:16.382388836 +0000 UTC m=+2955.272749949" lastFinishedPulling="2025-11-29 07:56:05.612023958 +0000 UTC m=+3004.502385061" observedRunningTime="2025-11-29 07:56:07.0548386 +0000 UTC m=+3005.945199693" watchObservedRunningTime="2025-11-29 07:56:07.073802675 +0000 UTC m=+3005.964163788" Nov 29 07:56:07 crc kubenswrapper[4731]: I1129 07:56:07.807089 4731 scope.go:117] "RemoveContainer" containerID="a820b45e9ece2d10a7875ab0f15887682131ca0d435fd77c25457bfb1abeed7f" Nov 29 07:56:07 crc kubenswrapper[4731]: E1129 07:56:07.807415 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" Nov 29 07:56:07 crc kubenswrapper[4731]: I1129 07:56:07.818225 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="129a2179-4f8a-4b59-b7cb-a83e80d76c84" path="/var/lib/kubelet/pods/129a2179-4f8a-4b59-b7cb-a83e80d76c84/volumes" Nov 29 07:56:21 crc kubenswrapper[4731]: I1129 07:56:21.818725 4731 scope.go:117] "RemoveContainer" containerID="a820b45e9ece2d10a7875ab0f15887682131ca0d435fd77c25457bfb1abeed7f" Nov 29 07:56:21 crc kubenswrapper[4731]: E1129 07:56:21.821602 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" Nov 29 07:56:36 crc kubenswrapper[4731]: I1129 07:56:36.858483 4731 scope.go:117] "RemoveContainer" containerID="a820b45e9ece2d10a7875ab0f15887682131ca0d435fd77c25457bfb1abeed7f" Nov 29 07:56:36 crc kubenswrapper[4731]: E1129 07:56:36.859226 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" Nov 29 07:56:47 crc kubenswrapper[4731]: I1129 07:56:47.807597 4731 scope.go:117] "RemoveContainer" containerID="a820b45e9ece2d10a7875ab0f15887682131ca0d435fd77c25457bfb1abeed7f" Nov 29 07:56:47 crc kubenswrapper[4731]: E1129 07:56:47.810493 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" Nov 29 07:57:00 crc kubenswrapper[4731]: I1129 07:57:00.806789 4731 scope.go:117] "RemoveContainer" containerID="a820b45e9ece2d10a7875ab0f15887682131ca0d435fd77c25457bfb1abeed7f" Nov 29 07:57:00 crc kubenswrapper[4731]: E1129 07:57:00.807648 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" Nov 29 07:57:12 crc kubenswrapper[4731]: I1129 07:57:12.807218 4731 scope.go:117] "RemoveContainer" containerID="a820b45e9ece2d10a7875ab0f15887682131ca0d435fd77c25457bfb1abeed7f" Nov 29 07:57:12 crc kubenswrapper[4731]: E1129 07:57:12.808231 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" Nov 29 07:57:26 crc kubenswrapper[4731]: I1129 07:57:26.806916 4731 scope.go:117] "RemoveContainer" containerID="a820b45e9ece2d10a7875ab0f15887682131ca0d435fd77c25457bfb1abeed7f" Nov 29 07:57:26 crc kubenswrapper[4731]: E1129 07:57:26.807648 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" Nov 29 07:57:37 crc kubenswrapper[4731]: I1129 07:57:37.807497 4731 scope.go:117] "RemoveContainer" containerID="a820b45e9ece2d10a7875ab0f15887682131ca0d435fd77c25457bfb1abeed7f" Nov 29 07:57:37 crc kubenswrapper[4731]: E1129 07:57:37.808382 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" Nov 29 07:57:49 crc kubenswrapper[4731]: I1129 07:57:49.807315 4731 scope.go:117] "RemoveContainer" containerID="a820b45e9ece2d10a7875ab0f15887682131ca0d435fd77c25457bfb1abeed7f" Nov 29 07:57:49 crc kubenswrapper[4731]: E1129 07:57:49.808203 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" Nov 29 07:58:01 crc kubenswrapper[4731]: I1129 07:58:01.826101 4731 scope.go:117] "RemoveContainer" containerID="a820b45e9ece2d10a7875ab0f15887682131ca0d435fd77c25457bfb1abeed7f" Nov 29 07:58:01 crc kubenswrapper[4731]: E1129 07:58:01.827478 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" Nov 29 07:58:13 crc kubenswrapper[4731]: I1129 07:58:13.807184 4731 scope.go:117] "RemoveContainer" containerID="a820b45e9ece2d10a7875ab0f15887682131ca0d435fd77c25457bfb1abeed7f" Nov 29 07:58:13 crc kubenswrapper[4731]: E1129 07:58:13.808074 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3"
Nov 29 07:58:28 crc kubenswrapper[4731]: I1129 07:58:28.807288 4731 scope.go:117] "RemoveContainer" containerID="a820b45e9ece2d10a7875ab0f15887682131ca0d435fd77c25457bfb1abeed7f"
Nov 29 07:58:28 crc kubenswrapper[4731]: E1129 07:58:28.808145 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3"
Nov 29 07:58:41 crc kubenswrapper[4731]: I1129 07:58:41.813783 4731 scope.go:117] "RemoveContainer" containerID="a820b45e9ece2d10a7875ab0f15887682131ca0d435fd77c25457bfb1abeed7f"
Nov 29 07:58:41 crc kubenswrapper[4731]: E1129 07:58:41.814528 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3"
Nov 29 07:58:54 crc kubenswrapper[4731]: I1129 07:58:54.807025 4731 scope.go:117] "RemoveContainer" containerID="a820b45e9ece2d10a7875ab0f15887682131ca0d435fd77c25457bfb1abeed7f"
Nov 29 07:58:54 crc kubenswrapper[4731]: E1129 07:58:54.807878 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3"
Nov 29 07:59:05 crc kubenswrapper[4731]: I1129 07:59:05.807811 4731 scope.go:117] "RemoveContainer" containerID="a820b45e9ece2d10a7875ab0f15887682131ca0d435fd77c25457bfb1abeed7f"
Nov 29 07:59:05 crc kubenswrapper[4731]: E1129 07:59:05.808613 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3"
Nov 29 07:59:20 crc kubenswrapper[4731]: I1129 07:59:20.806905 4731 scope.go:117] "RemoveContainer" containerID="a820b45e9ece2d10a7875ab0f15887682131ca0d435fd77c25457bfb1abeed7f"
Nov 29 07:59:20 crc kubenswrapper[4731]: E1129 07:59:20.807913 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3"
Nov 29 07:59:31 crc kubenswrapper[4731]: I1129 07:59:31.813710 4731 scope.go:117] "RemoveContainer" containerID="a820b45e9ece2d10a7875ab0f15887682131ca0d435fd77c25457bfb1abeed7f"
Nov 29 07:59:31 crc kubenswrapper[4731]: E1129 07:59:31.814537 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3"
Nov 29 07:59:44 crc kubenswrapper[4731]: I1129 07:59:44.807071 4731 scope.go:117] "RemoveContainer" containerID="a820b45e9ece2d10a7875ab0f15887682131ca0d435fd77c25457bfb1abeed7f"
Nov 29 07:59:45 crc kubenswrapper[4731]: I1129 07:59:45.343298 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" event={"ID":"2302dbb7-38db-4752-a5d0-2d055da3aec3","Type":"ContainerStarted","Data":"c7a11895e241cfc66bf29bd6921c4df58deeaf89420e29adc1811493afd2519c"}
Nov 29 08:00:00 crc kubenswrapper[4731]: I1129 08:00:00.155967 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406720-mcf6b"]
Nov 29 08:00:00 crc kubenswrapper[4731]: E1129 08:00:00.156978 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="129a2179-4f8a-4b59-b7cb-a83e80d76c84" containerName="extract-utilities"
Nov 29 08:00:00 crc kubenswrapper[4731]: I1129 08:00:00.156998 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="129a2179-4f8a-4b59-b7cb-a83e80d76c84" containerName="extract-utilities"
Nov 29 08:00:00 crc kubenswrapper[4731]: E1129 08:00:00.157053 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="129a2179-4f8a-4b59-b7cb-a83e80d76c84" containerName="extract-content"
Nov 29 08:00:00 crc kubenswrapper[4731]: I1129 08:00:00.157061 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="129a2179-4f8a-4b59-b7cb-a83e80d76c84" containerName="extract-content"
Nov 29 08:00:00 crc kubenswrapper[4731]: E1129 08:00:00.157079 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="129a2179-4f8a-4b59-b7cb-a83e80d76c84" containerName="registry-server"
Nov 29 08:00:00 crc kubenswrapper[4731]: I1129 08:00:00.157088 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="129a2179-4f8a-4b59-b7cb-a83e80d76c84" containerName="registry-server"
Nov 29 08:00:00 crc kubenswrapper[4731]: I1129 08:00:00.157302 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="129a2179-4f8a-4b59-b7cb-a83e80d76c84" containerName="registry-server"
Nov 29 08:00:00 crc kubenswrapper[4731]: I1129 08:00:00.158137 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406720-mcf6b"
Nov 29 08:00:00 crc kubenswrapper[4731]: I1129 08:00:00.160379 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Nov 29 08:00:00 crc kubenswrapper[4731]: I1129 08:00:00.160700 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Nov 29 08:00:00 crc kubenswrapper[4731]: I1129 08:00:00.168311 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406720-mcf6b"]
Nov 29 08:00:00 crc kubenswrapper[4731]: I1129 08:00:00.304172 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/da39b5c6-1c27-4018-b4be-3dc9ee9939a6-config-volume\") pod \"collect-profiles-29406720-mcf6b\" (UID: \"da39b5c6-1c27-4018-b4be-3dc9ee9939a6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406720-mcf6b"
Nov 29 08:00:00 crc kubenswrapper[4731]: I1129 08:00:00.304247 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwxsc\" (UniqueName: \"kubernetes.io/projected/da39b5c6-1c27-4018-b4be-3dc9ee9939a6-kube-api-access-gwxsc\") pod \"collect-profiles-29406720-mcf6b\" (UID: \"da39b5c6-1c27-4018-b4be-3dc9ee9939a6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406720-mcf6b"
Nov 29 08:00:00 crc kubenswrapper[4731]: I1129 08:00:00.304644 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/da39b5c6-1c27-4018-b4be-3dc9ee9939a6-secret-volume\") pod \"collect-profiles-29406720-mcf6b\" (UID: \"da39b5c6-1c27-4018-b4be-3dc9ee9939a6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406720-mcf6b"
Nov 29 08:00:00 crc kubenswrapper[4731]: I1129 08:00:00.406447 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/da39b5c6-1c27-4018-b4be-3dc9ee9939a6-secret-volume\") pod \"collect-profiles-29406720-mcf6b\" (UID: \"da39b5c6-1c27-4018-b4be-3dc9ee9939a6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406720-mcf6b"
Nov 29 08:00:00 crc kubenswrapper[4731]: I1129 08:00:00.406639 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/da39b5c6-1c27-4018-b4be-3dc9ee9939a6-config-volume\") pod \"collect-profiles-29406720-mcf6b\" (UID: \"da39b5c6-1c27-4018-b4be-3dc9ee9939a6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406720-mcf6b"
Nov 29 08:00:00 crc kubenswrapper[4731]: I1129 08:00:00.406699 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwxsc\" (UniqueName: \"kubernetes.io/projected/da39b5c6-1c27-4018-b4be-3dc9ee9939a6-kube-api-access-gwxsc\") pod \"collect-profiles-29406720-mcf6b\" (UID: \"da39b5c6-1c27-4018-b4be-3dc9ee9939a6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406720-mcf6b"
Nov 29 08:00:00 crc kubenswrapper[4731]: I1129 08:00:00.407738 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/da39b5c6-1c27-4018-b4be-3dc9ee9939a6-config-volume\") pod \"collect-profiles-29406720-mcf6b\" (UID: \"da39b5c6-1c27-4018-b4be-3dc9ee9939a6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406720-mcf6b"
Nov 29 08:00:00 crc kubenswrapper[4731]: I1129 08:00:00.417821 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/da39b5c6-1c27-4018-b4be-3dc9ee9939a6-secret-volume\") pod \"collect-profiles-29406720-mcf6b\" (UID: \"da39b5c6-1c27-4018-b4be-3dc9ee9939a6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406720-mcf6b"
Nov 29 08:00:00 crc kubenswrapper[4731]: I1129 08:00:00.434371 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwxsc\" (UniqueName: \"kubernetes.io/projected/da39b5c6-1c27-4018-b4be-3dc9ee9939a6-kube-api-access-gwxsc\") pod \"collect-profiles-29406720-mcf6b\" (UID: \"da39b5c6-1c27-4018-b4be-3dc9ee9939a6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406720-mcf6b"
Nov 29 08:00:00 crc kubenswrapper[4731]: I1129 08:00:00.480943 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406720-mcf6b"
Nov 29 08:00:00 crc kubenswrapper[4731]: W1129 08:00:00.953549 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podda39b5c6_1c27_4018_b4be_3dc9ee9939a6.slice/crio-99d29474df7542dbd7165679e5348ba8be37196197a7f8f450613cb069692cbf WatchSource:0}: Error finding container 99d29474df7542dbd7165679e5348ba8be37196197a7f8f450613cb069692cbf: Status 404 returned error can't find the container with id 99d29474df7542dbd7165679e5348ba8be37196197a7f8f450613cb069692cbf
Nov 29 08:00:00 crc kubenswrapper[4731]: I1129 08:00:00.956839 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406720-mcf6b"]
Nov 29 08:00:01 crc kubenswrapper[4731]: I1129 08:00:01.490589 4731 generic.go:334] "Generic (PLEG): container finished" podID="da39b5c6-1c27-4018-b4be-3dc9ee9939a6" containerID="66ca6e205a8896da437474bea582f246c34c1ee3f0982527892fb935045cbd65" exitCode=0
Nov 29 08:00:01 crc kubenswrapper[4731]: I1129 08:00:01.490731 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406720-mcf6b" event={"ID":"da39b5c6-1c27-4018-b4be-3dc9ee9939a6","Type":"ContainerDied","Data":"66ca6e205a8896da437474bea582f246c34c1ee3f0982527892fb935045cbd65"}
Nov 29 08:00:01 crc kubenswrapper[4731]: I1129 08:00:01.490921 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406720-mcf6b" event={"ID":"da39b5c6-1c27-4018-b4be-3dc9ee9939a6","Type":"ContainerStarted","Data":"99d29474df7542dbd7165679e5348ba8be37196197a7f8f450613cb069692cbf"}
Nov 29 08:00:02 crc kubenswrapper[4731]: I1129 08:00:02.905938 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406720-mcf6b"
Nov 29 08:00:02 crc kubenswrapper[4731]: I1129 08:00:02.981217 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/da39b5c6-1c27-4018-b4be-3dc9ee9939a6-config-volume\") pod \"da39b5c6-1c27-4018-b4be-3dc9ee9939a6\" (UID: \"da39b5c6-1c27-4018-b4be-3dc9ee9939a6\") "
Nov 29 08:00:02 crc kubenswrapper[4731]: I1129 08:00:02.981282 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/da39b5c6-1c27-4018-b4be-3dc9ee9939a6-secret-volume\") pod \"da39b5c6-1c27-4018-b4be-3dc9ee9939a6\" (UID: \"da39b5c6-1c27-4018-b4be-3dc9ee9939a6\") "
Nov 29 08:00:02 crc kubenswrapper[4731]: I1129 08:00:02.981340 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gwxsc\" (UniqueName: \"kubernetes.io/projected/da39b5c6-1c27-4018-b4be-3dc9ee9939a6-kube-api-access-gwxsc\") pod \"da39b5c6-1c27-4018-b4be-3dc9ee9939a6\" (UID: \"da39b5c6-1c27-4018-b4be-3dc9ee9939a6\") "
Nov 29 08:00:02 crc kubenswrapper[4731]: I1129 08:00:02.982343 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da39b5c6-1c27-4018-b4be-3dc9ee9939a6-config-volume" (OuterVolumeSpecName: "config-volume") pod "da39b5c6-1c27-4018-b4be-3dc9ee9939a6" (UID: "da39b5c6-1c27-4018-b4be-3dc9ee9939a6"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 29 08:00:02 crc kubenswrapper[4731]: I1129 08:00:02.989865 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da39b5c6-1c27-4018-b4be-3dc9ee9939a6-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "da39b5c6-1c27-4018-b4be-3dc9ee9939a6" (UID: "da39b5c6-1c27-4018-b4be-3dc9ee9939a6"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 08:00:02 crc kubenswrapper[4731]: I1129 08:00:02.990009 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da39b5c6-1c27-4018-b4be-3dc9ee9939a6-kube-api-access-gwxsc" (OuterVolumeSpecName: "kube-api-access-gwxsc") pod "da39b5c6-1c27-4018-b4be-3dc9ee9939a6" (UID: "da39b5c6-1c27-4018-b4be-3dc9ee9939a6"). InnerVolumeSpecName "kube-api-access-gwxsc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 29 08:00:03 crc kubenswrapper[4731]: I1129 08:00:03.083348 4731 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/da39b5c6-1c27-4018-b4be-3dc9ee9939a6-config-volume\") on node \"crc\" DevicePath \"\""
Nov 29 08:00:03 crc kubenswrapper[4731]: I1129 08:00:03.083404 4731 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/da39b5c6-1c27-4018-b4be-3dc9ee9939a6-secret-volume\") on node \"crc\" DevicePath \"\""
Nov 29 08:00:03 crc kubenswrapper[4731]: I1129 08:00:03.083418 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gwxsc\" (UniqueName: \"kubernetes.io/projected/da39b5c6-1c27-4018-b4be-3dc9ee9939a6-kube-api-access-gwxsc\") on node \"crc\" DevicePath \"\""
Nov 29 08:00:03 crc kubenswrapper[4731]: I1129 08:00:03.512530 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406720-mcf6b" event={"ID":"da39b5c6-1c27-4018-b4be-3dc9ee9939a6","Type":"ContainerDied","Data":"99d29474df7542dbd7165679e5348ba8be37196197a7f8f450613cb069692cbf"}
Nov 29 08:00:03 crc kubenswrapper[4731]: I1129 08:00:03.512591 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99d29474df7542dbd7165679e5348ba8be37196197a7f8f450613cb069692cbf"
Nov 29 08:00:03 crc kubenswrapper[4731]: I1129 08:00:03.512612 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406720-mcf6b"
Nov 29 08:00:03 crc kubenswrapper[4731]: I1129 08:00:03.991416 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406675-lnkxq"]
Nov 29 08:00:04 crc kubenswrapper[4731]: I1129 08:00:04.002537 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406675-lnkxq"]
Nov 29 08:00:05 crc kubenswrapper[4731]: I1129 08:00:05.831156 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d3389ba-bb37-48d3-b029-f6e492b6152a" path="/var/lib/kubelet/pods/6d3389ba-bb37-48d3-b029-f6e492b6152a/volumes"
Nov 29 08:00:22 crc kubenswrapper[4731]: I1129 08:00:22.346097 4731 scope.go:117] "RemoveContainer" containerID="4f0acd0dd530dc72288b814e42cf3f3b431537d0f1e39a57371daf01b9dd95c8"
Nov 29 08:00:44 crc kubenswrapper[4731]: I1129 08:00:44.354499 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8rh9d"]
Nov 29 08:00:44 crc kubenswrapper[4731]: E1129 08:00:44.357324 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da39b5c6-1c27-4018-b4be-3dc9ee9939a6" containerName="collect-profiles"
Nov 29 08:00:44 crc kubenswrapper[4731]: I1129 08:00:44.357472 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="da39b5c6-1c27-4018-b4be-3dc9ee9939a6" containerName="collect-profiles"
Nov 29 08:00:44 crc kubenswrapper[4731]: I1129 08:00:44.357932 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="da39b5c6-1c27-4018-b4be-3dc9ee9939a6" containerName="collect-profiles"
Nov 29 08:00:44 crc kubenswrapper[4731]: I1129 08:00:44.360392 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8rh9d"
Nov 29 08:00:44 crc kubenswrapper[4731]: I1129 08:00:44.363385 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmqb2\" (UniqueName: \"kubernetes.io/projected/06b27e8f-1170-4a0a-bf45-028912f417aa-kube-api-access-rmqb2\") pod \"redhat-operators-8rh9d\" (UID: \"06b27e8f-1170-4a0a-bf45-028912f417aa\") " pod="openshift-marketplace/redhat-operators-8rh9d"
Nov 29 08:00:44 crc kubenswrapper[4731]: I1129 08:00:44.363658 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06b27e8f-1170-4a0a-bf45-028912f417aa-utilities\") pod \"redhat-operators-8rh9d\" (UID: \"06b27e8f-1170-4a0a-bf45-028912f417aa\") " pod="openshift-marketplace/redhat-operators-8rh9d"
Nov 29 08:00:44 crc kubenswrapper[4731]: I1129 08:00:44.363896 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06b27e8f-1170-4a0a-bf45-028912f417aa-catalog-content\") pod \"redhat-operators-8rh9d\" (UID: \"06b27e8f-1170-4a0a-bf45-028912f417aa\") " pod="openshift-marketplace/redhat-operators-8rh9d"
Nov 29 08:00:44 crc kubenswrapper[4731]: I1129 08:00:44.376035 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8rh9d"]
Nov 29 08:00:44 crc kubenswrapper[4731]: I1129 08:00:44.465884 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmqb2\" (UniqueName: \"kubernetes.io/projected/06b27e8f-1170-4a0a-bf45-028912f417aa-kube-api-access-rmqb2\") pod \"redhat-operators-8rh9d\" (UID: \"06b27e8f-1170-4a0a-bf45-028912f417aa\") " pod="openshift-marketplace/redhat-operators-8rh9d"
Nov 29 08:00:44 crc kubenswrapper[4731]: I1129 08:00:44.466239 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06b27e8f-1170-4a0a-bf45-028912f417aa-utilities\") pod \"redhat-operators-8rh9d\" (UID: \"06b27e8f-1170-4a0a-bf45-028912f417aa\") " pod="openshift-marketplace/redhat-operators-8rh9d"
Nov 29 08:00:44 crc kubenswrapper[4731]: I1129 08:00:44.466344 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06b27e8f-1170-4a0a-bf45-028912f417aa-catalog-content\") pod \"redhat-operators-8rh9d\" (UID: \"06b27e8f-1170-4a0a-bf45-028912f417aa\") " pod="openshift-marketplace/redhat-operators-8rh9d"
Nov 29 08:00:44 crc kubenswrapper[4731]: I1129 08:00:44.467098 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06b27e8f-1170-4a0a-bf45-028912f417aa-catalog-content\") pod \"redhat-operators-8rh9d\" (UID: \"06b27e8f-1170-4a0a-bf45-028912f417aa\") " pod="openshift-marketplace/redhat-operators-8rh9d"
Nov 29 08:00:44 crc kubenswrapper[4731]: I1129 08:00:44.467098 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06b27e8f-1170-4a0a-bf45-028912f417aa-utilities\") pod \"redhat-operators-8rh9d\" (UID: \"06b27e8f-1170-4a0a-bf45-028912f417aa\") " pod="openshift-marketplace/redhat-operators-8rh9d"
Nov 29 08:00:44 crc kubenswrapper[4731]: I1129 08:00:44.489041 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmqb2\" (UniqueName: \"kubernetes.io/projected/06b27e8f-1170-4a0a-bf45-028912f417aa-kube-api-access-rmqb2\") pod \"redhat-operators-8rh9d\" (UID: \"06b27e8f-1170-4a0a-bf45-028912f417aa\") " pod="openshift-marketplace/redhat-operators-8rh9d"
Nov 29 08:00:44 crc kubenswrapper[4731]: I1129 08:00:44.690858 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8rh9d"
Nov 29 08:00:45 crc kubenswrapper[4731]: I1129 08:00:45.235188 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8rh9d"]
Nov 29 08:00:45 crc kubenswrapper[4731]: I1129 08:00:45.966373 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8rh9d" event={"ID":"06b27e8f-1170-4a0a-bf45-028912f417aa","Type":"ContainerStarted","Data":"bc37b41ae534e08cfda5dab8d0f8d85ce2133619943998b7221bc1e85e19ce01"}
Nov 29 08:00:46 crc kubenswrapper[4731]: I1129 08:00:46.978471 4731 generic.go:334] "Generic (PLEG): container finished" podID="06b27e8f-1170-4a0a-bf45-028912f417aa" containerID="c188c8343d8745198a0587df4c566426adb2c750308d249753ce5a89a8852b88" exitCode=0
Nov 29 08:00:46 crc kubenswrapper[4731]: I1129 08:00:46.978756 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8rh9d" event={"ID":"06b27e8f-1170-4a0a-bf45-028912f417aa","Type":"ContainerDied","Data":"c188c8343d8745198a0587df4c566426adb2c750308d249753ce5a89a8852b88"}
Nov 29 08:00:46 crc kubenswrapper[4731]: I1129 08:00:46.981448 4731 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Nov 29 08:00:51 crc kubenswrapper[4731]: I1129 08:00:51.025016 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8rh9d" event={"ID":"06b27e8f-1170-4a0a-bf45-028912f417aa","Type":"ContainerStarted","Data":"29308c927e2353007abdc2eb8c16bcdf0f4a59c88d28241aa15f6a99532955ef"}
Nov 29 08:00:55 crc kubenswrapper[4731]: I1129 08:00:55.070344 4731 generic.go:334] "Generic (PLEG): container finished" podID="06b27e8f-1170-4a0a-bf45-028912f417aa" containerID="29308c927e2353007abdc2eb8c16bcdf0f4a59c88d28241aa15f6a99532955ef" exitCode=0
Nov 29 08:00:55 crc kubenswrapper[4731]: I1129 08:00:55.070454 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8rh9d" event={"ID":"06b27e8f-1170-4a0a-bf45-028912f417aa","Type":"ContainerDied","Data":"29308c927e2353007abdc2eb8c16bcdf0f4a59c88d28241aa15f6a99532955ef"}
Nov 29 08:00:57 crc kubenswrapper[4731]: I1129 08:00:57.092421 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8rh9d" event={"ID":"06b27e8f-1170-4a0a-bf45-028912f417aa","Type":"ContainerStarted","Data":"55ab3d2699e21e3b5436c3358b8bd8c996aba2961322c8105cfd019ca1d1d058"}
Nov 29 08:00:57 crc kubenswrapper[4731]: I1129 08:00:57.122106 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8rh9d" podStartSLOduration=3.742921698 podStartE2EDuration="13.12207882s" podCreationTimestamp="2025-11-29 08:00:44 +0000 UTC" firstStartedPulling="2025-11-29 08:00:46.980997618 +0000 UTC m=+3285.871358731" lastFinishedPulling="2025-11-29 08:00:56.36015476 +0000 UTC m=+3295.250515853" observedRunningTime="2025-11-29 08:00:57.117385676 +0000 UTC m=+3296.007746779" watchObservedRunningTime="2025-11-29 08:00:57.12207882 +0000 UTC m=+3296.012439923"
Nov 29 08:01:00 crc kubenswrapper[4731]: I1129 08:01:00.174987 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29406721-q45c2"]
Nov 29 08:01:00 crc kubenswrapper[4731]: I1129 08:01:00.177006 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29406721-q45c2"
Nov 29 08:01:00 crc kubenswrapper[4731]: I1129 08:01:00.189053 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29406721-q45c2"]
Nov 29 08:01:00 crc kubenswrapper[4731]: I1129 08:01:00.321778 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb66de5b-b040-4821-bf22-c234630fa81e-config-data\") pod \"keystone-cron-29406721-q45c2\" (UID: \"bb66de5b-b040-4821-bf22-c234630fa81e\") " pod="openstack/keystone-cron-29406721-q45c2"
Nov 29 08:01:00 crc kubenswrapper[4731]: I1129 08:01:00.321848 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb66de5b-b040-4821-bf22-c234630fa81e-combined-ca-bundle\") pod \"keystone-cron-29406721-q45c2\" (UID: \"bb66de5b-b040-4821-bf22-c234630fa81e\") " pod="openstack/keystone-cron-29406721-q45c2"
Nov 29 08:01:00 crc kubenswrapper[4731]: I1129 08:01:00.321990 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bb66de5b-b040-4821-bf22-c234630fa81e-fernet-keys\") pod \"keystone-cron-29406721-q45c2\" (UID: \"bb66de5b-b040-4821-bf22-c234630fa81e\") " pod="openstack/keystone-cron-29406721-q45c2"
Nov 29 08:01:00 crc kubenswrapper[4731]: I1129 08:01:00.322080 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jlk4\" (UniqueName: \"kubernetes.io/projected/bb66de5b-b040-4821-bf22-c234630fa81e-kube-api-access-2jlk4\") pod \"keystone-cron-29406721-q45c2\" (UID: \"bb66de5b-b040-4821-bf22-c234630fa81e\") " pod="openstack/keystone-cron-29406721-q45c2"
Nov 29 08:01:00 crc kubenswrapper[4731]: I1129 08:01:00.424158 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb66de5b-b040-4821-bf22-c234630fa81e-config-data\") pod \"keystone-cron-29406721-q45c2\" (UID: \"bb66de5b-b040-4821-bf22-c234630fa81e\") " pod="openstack/keystone-cron-29406721-q45c2"
Nov 29 08:01:00 crc kubenswrapper[4731]: I1129 08:01:00.424227 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb66de5b-b040-4821-bf22-c234630fa81e-combined-ca-bundle\") pod \"keystone-cron-29406721-q45c2\" (UID: \"bb66de5b-b040-4821-bf22-c234630fa81e\") " pod="openstack/keystone-cron-29406721-q45c2"
Nov 29 08:01:00 crc kubenswrapper[4731]: I1129 08:01:00.424277 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bb66de5b-b040-4821-bf22-c234630fa81e-fernet-keys\") pod \"keystone-cron-29406721-q45c2\" (UID: \"bb66de5b-b040-4821-bf22-c234630fa81e\") " pod="openstack/keystone-cron-29406721-q45c2"
Nov 29 08:01:00 crc kubenswrapper[4731]: I1129 08:01:00.424386 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jlk4\" (UniqueName: \"kubernetes.io/projected/bb66de5b-b040-4821-bf22-c234630fa81e-kube-api-access-2jlk4\") pod \"keystone-cron-29406721-q45c2\" (UID: \"bb66de5b-b040-4821-bf22-c234630fa81e\") " pod="openstack/keystone-cron-29406721-q45c2"
Nov 29 08:01:00 crc kubenswrapper[4731]: I1129 08:01:00.441287 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb66de5b-b040-4821-bf22-c234630fa81e-combined-ca-bundle\") pod \"keystone-cron-29406721-q45c2\" (UID: \"bb66de5b-b040-4821-bf22-c234630fa81e\") " pod="openstack/keystone-cron-29406721-q45c2"
Nov 29 08:01:00 crc kubenswrapper[4731]: I1129 08:01:00.441379 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bb66de5b-b040-4821-bf22-c234630fa81e-fernet-keys\") pod \"keystone-cron-29406721-q45c2\" (UID: \"bb66de5b-b040-4821-bf22-c234630fa81e\") " pod="openstack/keystone-cron-29406721-q45c2"
Nov 29 08:01:00 crc kubenswrapper[4731]: I1129 08:01:00.441629 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb66de5b-b040-4821-bf22-c234630fa81e-config-data\") pod \"keystone-cron-29406721-q45c2\" (UID: \"bb66de5b-b040-4821-bf22-c234630fa81e\") " pod="openstack/keystone-cron-29406721-q45c2"
Nov 29 08:01:00 crc kubenswrapper[4731]: I1129 08:01:00.446391 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jlk4\" (UniqueName: \"kubernetes.io/projected/bb66de5b-b040-4821-bf22-c234630fa81e-kube-api-access-2jlk4\") pod \"keystone-cron-29406721-q45c2\" (UID: \"bb66de5b-b040-4821-bf22-c234630fa81e\") " pod="openstack/keystone-cron-29406721-q45c2"
Nov 29 08:01:00 crc kubenswrapper[4731]: I1129 08:01:00.498953 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29406721-q45c2"
Nov 29 08:01:01 crc kubenswrapper[4731]: I1129 08:01:01.029305 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29406721-q45c2"]
Nov 29 08:01:01 crc kubenswrapper[4731]: I1129 08:01:01.132506 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29406721-q45c2" event={"ID":"bb66de5b-b040-4821-bf22-c234630fa81e","Type":"ContainerStarted","Data":"3c35734597776cf8e05e5a6b4a772772ccfd9ac0dce2034e1c51e5d012c5ac0d"}
Nov 29 08:01:02 crc kubenswrapper[4731]: I1129 08:01:02.144008 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29406721-q45c2" event={"ID":"bb66de5b-b040-4821-bf22-c234630fa81e","Type":"ContainerStarted","Data":"6ef54ef61ec72871ff4ef5d3231536ea4f1fdf85bb1a4a72387cc2923c92b8ec"}
Nov 29 08:01:02 crc kubenswrapper[4731]: I1129 08:01:02.169401 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29406721-q45c2" podStartSLOduration=2.169378658 podStartE2EDuration="2.169378658s" podCreationTimestamp="2025-11-29 08:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 08:01:02.162442509 +0000 UTC m=+3301.052803622" watchObservedRunningTime="2025-11-29 08:01:02.169378658 +0000 UTC m=+3301.059739761"
Nov 29 08:01:04 crc kubenswrapper[4731]: I1129 08:01:04.166793 4731 generic.go:334] "Generic (PLEG): container finished" podID="bb66de5b-b040-4821-bf22-c234630fa81e" containerID="6ef54ef61ec72871ff4ef5d3231536ea4f1fdf85bb1a4a72387cc2923c92b8ec" exitCode=0
Nov 29 08:01:04 crc kubenswrapper[4731]: I1129 08:01:04.166892 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29406721-q45c2" event={"ID":"bb66de5b-b040-4821-bf22-c234630fa81e","Type":"ContainerDied","Data":"6ef54ef61ec72871ff4ef5d3231536ea4f1fdf85bb1a4a72387cc2923c92b8ec"}
Nov 29 08:01:04 crc kubenswrapper[4731]: I1129 08:01:04.690922 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8rh9d"
Nov 29 08:01:04 crc kubenswrapper[4731]: I1129 08:01:04.691735 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8rh9d"
Nov 29 08:01:04 crc kubenswrapper[4731]: I1129 08:01:04.739637 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8rh9d"
Nov 29 08:01:05 crc kubenswrapper[4731]: I1129 08:01:05.227946 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8rh9d"
Nov 29 08:01:05 crc kubenswrapper[4731]: I1129 08:01:05.295962 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8rh9d"]
Nov 29 08:01:05 crc kubenswrapper[4731]: I1129 08:01:05.530830 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29406721-q45c2"
Nov 29 08:01:05 crc kubenswrapper[4731]: I1129 08:01:05.637996 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2jlk4\" (UniqueName: \"kubernetes.io/projected/bb66de5b-b040-4821-bf22-c234630fa81e-kube-api-access-2jlk4\") pod \"bb66de5b-b040-4821-bf22-c234630fa81e\" (UID: \"bb66de5b-b040-4821-bf22-c234630fa81e\") "
Nov 29 08:01:05 crc kubenswrapper[4731]: I1129 08:01:05.638241 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb66de5b-b040-4821-bf22-c234630fa81e-config-data\") pod \"bb66de5b-b040-4821-bf22-c234630fa81e\" (UID: \"bb66de5b-b040-4821-bf22-c234630fa81e\") "
Nov 29 08:01:05 crc kubenswrapper[4731]: I1129 08:01:05.638304 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bb66de5b-b040-4821-bf22-c234630fa81e-fernet-keys\") pod \"bb66de5b-b040-4821-bf22-c234630fa81e\" (UID: \"bb66de5b-b040-4821-bf22-c234630fa81e\") "
Nov 29 08:01:05 crc kubenswrapper[4731]: I1129 08:01:05.638544 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb66de5b-b040-4821-bf22-c234630fa81e-combined-ca-bundle\") pod \"bb66de5b-b040-4821-bf22-c234630fa81e\" (UID: \"bb66de5b-b040-4821-bf22-c234630fa81e\") "
Nov 29 08:01:05 crc kubenswrapper[4731]: I1129 08:01:05.645489 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb66de5b-b040-4821-bf22-c234630fa81e-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "bb66de5b-b040-4821-bf22-c234630fa81e" (UID: "bb66de5b-b040-4821-bf22-c234630fa81e"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 08:01:05 crc kubenswrapper[4731]: I1129 08:01:05.646504 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb66de5b-b040-4821-bf22-c234630fa81e-kube-api-access-2jlk4" (OuterVolumeSpecName: "kube-api-access-2jlk4") pod "bb66de5b-b040-4821-bf22-c234630fa81e" (UID: "bb66de5b-b040-4821-bf22-c234630fa81e"). InnerVolumeSpecName "kube-api-access-2jlk4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 29 08:01:05 crc kubenswrapper[4731]: I1129 08:01:05.674488 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb66de5b-b040-4821-bf22-c234630fa81e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bb66de5b-b040-4821-bf22-c234630fa81e" (UID: "bb66de5b-b040-4821-bf22-c234630fa81e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 08:01:05 crc kubenswrapper[4731]: I1129 08:01:05.692086 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb66de5b-b040-4821-bf22-c234630fa81e-config-data" (OuterVolumeSpecName: "config-data") pod "bb66de5b-b040-4821-bf22-c234630fa81e" (UID: "bb66de5b-b040-4821-bf22-c234630fa81e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 08:01:05 crc kubenswrapper[4731]: I1129 08:01:05.741686 4731 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb66de5b-b040-4821-bf22-c234630fa81e-config-data\") on node \"crc\" DevicePath \"\""
Nov 29 08:01:05 crc kubenswrapper[4731]: I1129 08:01:05.741731 4731 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bb66de5b-b040-4821-bf22-c234630fa81e-fernet-keys\") on node \"crc\" DevicePath \"\""
Nov 29 08:01:05 crc kubenswrapper[4731]: I1129 08:01:05.741742 4731 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb66de5b-b040-4821-bf22-c234630fa81e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 29 08:01:05 crc kubenswrapper[4731]: I1129 08:01:05.741755 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2jlk4\" (UniqueName: \"kubernetes.io/projected/bb66de5b-b040-4821-bf22-c234630fa81e-kube-api-access-2jlk4\") on node \"crc\" DevicePath \"\""
Nov 29 08:01:06 crc kubenswrapper[4731]: I1129 08:01:06.186774 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29406721-q45c2" event={"ID":"bb66de5b-b040-4821-bf22-c234630fa81e","Type":"ContainerDied","Data":"3c35734597776cf8e05e5a6b4a772772ccfd9ac0dce2034e1c51e5d012c5ac0d"}
Nov 29 08:01:06 crc kubenswrapper[4731]: I1129 08:01:06.187112 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c35734597776cf8e05e5a6b4a772772ccfd9ac0dce2034e1c51e5d012c5ac0d"
Nov 29 08:01:06 crc kubenswrapper[4731]: I1129 08:01:06.186826 4731 util.go:48] "No ready sandbox for pod can be
Need to start a new one" pod="openstack/keystone-cron-29406721-q45c2" Nov 29 08:01:07 crc kubenswrapper[4731]: I1129 08:01:07.196177 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8rh9d" podUID="06b27e8f-1170-4a0a-bf45-028912f417aa" containerName="registry-server" containerID="cri-o://55ab3d2699e21e3b5436c3358b8bd8c996aba2961322c8105cfd019ca1d1d058" gracePeriod=2 Nov 29 08:01:07 crc kubenswrapper[4731]: I1129 08:01:07.656286 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8rh9d" Nov 29 08:01:07 crc kubenswrapper[4731]: I1129 08:01:07.788736 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06b27e8f-1170-4a0a-bf45-028912f417aa-utilities\") pod \"06b27e8f-1170-4a0a-bf45-028912f417aa\" (UID: \"06b27e8f-1170-4a0a-bf45-028912f417aa\") " Nov 29 08:01:07 crc kubenswrapper[4731]: I1129 08:01:07.789028 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rmqb2\" (UniqueName: \"kubernetes.io/projected/06b27e8f-1170-4a0a-bf45-028912f417aa-kube-api-access-rmqb2\") pod \"06b27e8f-1170-4a0a-bf45-028912f417aa\" (UID: \"06b27e8f-1170-4a0a-bf45-028912f417aa\") " Nov 29 08:01:07 crc kubenswrapper[4731]: I1129 08:01:07.789587 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06b27e8f-1170-4a0a-bf45-028912f417aa-utilities" (OuterVolumeSpecName: "utilities") pod "06b27e8f-1170-4a0a-bf45-028912f417aa" (UID: "06b27e8f-1170-4a0a-bf45-028912f417aa"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:01:07 crc kubenswrapper[4731]: I1129 08:01:07.789983 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06b27e8f-1170-4a0a-bf45-028912f417aa-catalog-content\") pod \"06b27e8f-1170-4a0a-bf45-028912f417aa\" (UID: \"06b27e8f-1170-4a0a-bf45-028912f417aa\") " Nov 29 08:01:07 crc kubenswrapper[4731]: I1129 08:01:07.790612 4731 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06b27e8f-1170-4a0a-bf45-028912f417aa-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 08:01:07 crc kubenswrapper[4731]: I1129 08:01:07.794869 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06b27e8f-1170-4a0a-bf45-028912f417aa-kube-api-access-rmqb2" (OuterVolumeSpecName: "kube-api-access-rmqb2") pod "06b27e8f-1170-4a0a-bf45-028912f417aa" (UID: "06b27e8f-1170-4a0a-bf45-028912f417aa"). InnerVolumeSpecName "kube-api-access-rmqb2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:01:07 crc kubenswrapper[4731]: I1129 08:01:07.892636 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rmqb2\" (UniqueName: \"kubernetes.io/projected/06b27e8f-1170-4a0a-bf45-028912f417aa-kube-api-access-rmqb2\") on node \"crc\" DevicePath \"\"" Nov 29 08:01:07 crc kubenswrapper[4731]: I1129 08:01:07.906519 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06b27e8f-1170-4a0a-bf45-028912f417aa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "06b27e8f-1170-4a0a-bf45-028912f417aa" (UID: "06b27e8f-1170-4a0a-bf45-028912f417aa"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:01:07 crc kubenswrapper[4731]: I1129 08:01:07.994770 4731 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06b27e8f-1170-4a0a-bf45-028912f417aa-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 08:01:08 crc kubenswrapper[4731]: I1129 08:01:08.208673 4731 generic.go:334] "Generic (PLEG): container finished" podID="06b27e8f-1170-4a0a-bf45-028912f417aa" containerID="55ab3d2699e21e3b5436c3358b8bd8c996aba2961322c8105cfd019ca1d1d058" exitCode=0 Nov 29 08:01:08 crc kubenswrapper[4731]: I1129 08:01:08.208775 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8rh9d" event={"ID":"06b27e8f-1170-4a0a-bf45-028912f417aa","Type":"ContainerDied","Data":"55ab3d2699e21e3b5436c3358b8bd8c996aba2961322c8105cfd019ca1d1d058"} Nov 29 08:01:08 crc kubenswrapper[4731]: I1129 08:01:08.209037 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8rh9d" event={"ID":"06b27e8f-1170-4a0a-bf45-028912f417aa","Type":"ContainerDied","Data":"bc37b41ae534e08cfda5dab8d0f8d85ce2133619943998b7221bc1e85e19ce01"} Nov 29 08:01:08 crc kubenswrapper[4731]: I1129 08:01:08.208803 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8rh9d" Nov 29 08:01:08 crc kubenswrapper[4731]: I1129 08:01:08.209073 4731 scope.go:117] "RemoveContainer" containerID="55ab3d2699e21e3b5436c3358b8bd8c996aba2961322c8105cfd019ca1d1d058" Nov 29 08:01:08 crc kubenswrapper[4731]: I1129 08:01:08.247682 4731 scope.go:117] "RemoveContainer" containerID="29308c927e2353007abdc2eb8c16bcdf0f4a59c88d28241aa15f6a99532955ef" Nov 29 08:01:08 crc kubenswrapper[4731]: I1129 08:01:08.253621 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8rh9d"] Nov 29 08:01:08 crc kubenswrapper[4731]: I1129 08:01:08.264122 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8rh9d"] Nov 29 08:01:08 crc kubenswrapper[4731]: I1129 08:01:08.282534 4731 scope.go:117] "RemoveContainer" containerID="c188c8343d8745198a0587df4c566426adb2c750308d249753ce5a89a8852b88" Nov 29 08:01:08 crc kubenswrapper[4731]: I1129 08:01:08.342750 4731 scope.go:117] "RemoveContainer" containerID="55ab3d2699e21e3b5436c3358b8bd8c996aba2961322c8105cfd019ca1d1d058" Nov 29 08:01:08 crc kubenswrapper[4731]: E1129 08:01:08.343151 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55ab3d2699e21e3b5436c3358b8bd8c996aba2961322c8105cfd019ca1d1d058\": container with ID starting with 55ab3d2699e21e3b5436c3358b8bd8c996aba2961322c8105cfd019ca1d1d058 not found: ID does not exist" containerID="55ab3d2699e21e3b5436c3358b8bd8c996aba2961322c8105cfd019ca1d1d058" Nov 29 08:01:08 crc kubenswrapper[4731]: I1129 08:01:08.343182 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55ab3d2699e21e3b5436c3358b8bd8c996aba2961322c8105cfd019ca1d1d058"} err="failed to get container status \"55ab3d2699e21e3b5436c3358b8bd8c996aba2961322c8105cfd019ca1d1d058\": rpc error: code = NotFound desc = could not find container 
\"55ab3d2699e21e3b5436c3358b8bd8c996aba2961322c8105cfd019ca1d1d058\": container with ID starting with 55ab3d2699e21e3b5436c3358b8bd8c996aba2961322c8105cfd019ca1d1d058 not found: ID does not exist" Nov 29 08:01:08 crc kubenswrapper[4731]: I1129 08:01:08.343204 4731 scope.go:117] "RemoveContainer" containerID="29308c927e2353007abdc2eb8c16bcdf0f4a59c88d28241aa15f6a99532955ef" Nov 29 08:01:08 crc kubenswrapper[4731]: E1129 08:01:08.343716 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29308c927e2353007abdc2eb8c16bcdf0f4a59c88d28241aa15f6a99532955ef\": container with ID starting with 29308c927e2353007abdc2eb8c16bcdf0f4a59c88d28241aa15f6a99532955ef not found: ID does not exist" containerID="29308c927e2353007abdc2eb8c16bcdf0f4a59c88d28241aa15f6a99532955ef" Nov 29 08:01:08 crc kubenswrapper[4731]: I1129 08:01:08.343769 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29308c927e2353007abdc2eb8c16bcdf0f4a59c88d28241aa15f6a99532955ef"} err="failed to get container status \"29308c927e2353007abdc2eb8c16bcdf0f4a59c88d28241aa15f6a99532955ef\": rpc error: code = NotFound desc = could not find container \"29308c927e2353007abdc2eb8c16bcdf0f4a59c88d28241aa15f6a99532955ef\": container with ID starting with 29308c927e2353007abdc2eb8c16bcdf0f4a59c88d28241aa15f6a99532955ef not found: ID does not exist" Nov 29 08:01:08 crc kubenswrapper[4731]: I1129 08:01:08.343802 4731 scope.go:117] "RemoveContainer" containerID="c188c8343d8745198a0587df4c566426adb2c750308d249753ce5a89a8852b88" Nov 29 08:01:08 crc kubenswrapper[4731]: E1129 08:01:08.344207 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c188c8343d8745198a0587df4c566426adb2c750308d249753ce5a89a8852b88\": container with ID starting with c188c8343d8745198a0587df4c566426adb2c750308d249753ce5a89a8852b88 not found: ID does not exist" 
containerID="c188c8343d8745198a0587df4c566426adb2c750308d249753ce5a89a8852b88" Nov 29 08:01:08 crc kubenswrapper[4731]: I1129 08:01:08.344236 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c188c8343d8745198a0587df4c566426adb2c750308d249753ce5a89a8852b88"} err="failed to get container status \"c188c8343d8745198a0587df4c566426adb2c750308d249753ce5a89a8852b88\": rpc error: code = NotFound desc = could not find container \"c188c8343d8745198a0587df4c566426adb2c750308d249753ce5a89a8852b88\": container with ID starting with c188c8343d8745198a0587df4c566426adb2c750308d249753ce5a89a8852b88 not found: ID does not exist" Nov 29 08:01:09 crc kubenswrapper[4731]: I1129 08:01:09.819097 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06b27e8f-1170-4a0a-bf45-028912f417aa" path="/var/lib/kubelet/pods/06b27e8f-1170-4a0a-bf45-028912f417aa/volumes" Nov 29 08:02:03 crc kubenswrapper[4731]: I1129 08:02:03.002968 4731 patch_prober.go:28] interesting pod/machine-config-daemon-rscr8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 08:02:03 crc kubenswrapper[4731]: I1129 08:02:03.003810 4731 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 08:02:11 crc kubenswrapper[4731]: I1129 08:02:11.399777 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-kwxkx"] Nov 29 08:02:11 crc kubenswrapper[4731]: E1129 08:02:11.400888 4731 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="06b27e8f-1170-4a0a-bf45-028912f417aa" containerName="extract-content" Nov 29 08:02:11 crc kubenswrapper[4731]: I1129 08:02:11.400913 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="06b27e8f-1170-4a0a-bf45-028912f417aa" containerName="extract-content" Nov 29 08:02:11 crc kubenswrapper[4731]: E1129 08:02:11.400958 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06b27e8f-1170-4a0a-bf45-028912f417aa" containerName="registry-server" Nov 29 08:02:11 crc kubenswrapper[4731]: I1129 08:02:11.400973 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="06b27e8f-1170-4a0a-bf45-028912f417aa" containerName="registry-server" Nov 29 08:02:11 crc kubenswrapper[4731]: E1129 08:02:11.401004 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb66de5b-b040-4821-bf22-c234630fa81e" containerName="keystone-cron" Nov 29 08:02:11 crc kubenswrapper[4731]: I1129 08:02:11.401017 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb66de5b-b040-4821-bf22-c234630fa81e" containerName="keystone-cron" Nov 29 08:02:11 crc kubenswrapper[4731]: E1129 08:02:11.401061 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06b27e8f-1170-4a0a-bf45-028912f417aa" containerName="extract-utilities" Nov 29 08:02:11 crc kubenswrapper[4731]: I1129 08:02:11.401073 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="06b27e8f-1170-4a0a-bf45-028912f417aa" containerName="extract-utilities" Nov 29 08:02:11 crc kubenswrapper[4731]: I1129 08:02:11.401384 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb66de5b-b040-4821-bf22-c234630fa81e" containerName="keystone-cron" Nov 29 08:02:11 crc kubenswrapper[4731]: I1129 08:02:11.401414 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="06b27e8f-1170-4a0a-bf45-028912f417aa" containerName="registry-server" Nov 29 08:02:11 crc kubenswrapper[4731]: I1129 08:02:11.415499 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kwxkx" Nov 29 08:02:11 crc kubenswrapper[4731]: I1129 08:02:11.433459 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kwxkx"] Nov 29 08:02:11 crc kubenswrapper[4731]: I1129 08:02:11.515812 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d60a6c0f-e98d-476f-a6a0-14c4f446c0c2-catalog-content\") pod \"redhat-marketplace-kwxkx\" (UID: \"d60a6c0f-e98d-476f-a6a0-14c4f446c0c2\") " pod="openshift-marketplace/redhat-marketplace-kwxkx" Nov 29 08:02:11 crc kubenswrapper[4731]: I1129 08:02:11.515917 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d60a6c0f-e98d-476f-a6a0-14c4f446c0c2-utilities\") pod \"redhat-marketplace-kwxkx\" (UID: \"d60a6c0f-e98d-476f-a6a0-14c4f446c0c2\") " pod="openshift-marketplace/redhat-marketplace-kwxkx" Nov 29 08:02:11 crc kubenswrapper[4731]: I1129 08:02:11.515978 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85l4z\" (UniqueName: \"kubernetes.io/projected/d60a6c0f-e98d-476f-a6a0-14c4f446c0c2-kube-api-access-85l4z\") pod \"redhat-marketplace-kwxkx\" (UID: \"d60a6c0f-e98d-476f-a6a0-14c4f446c0c2\") " pod="openshift-marketplace/redhat-marketplace-kwxkx" Nov 29 08:02:11 crc kubenswrapper[4731]: I1129 08:02:11.617992 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d60a6c0f-e98d-476f-a6a0-14c4f446c0c2-catalog-content\") pod \"redhat-marketplace-kwxkx\" (UID: \"d60a6c0f-e98d-476f-a6a0-14c4f446c0c2\") " pod="openshift-marketplace/redhat-marketplace-kwxkx" Nov 29 08:02:11 crc kubenswrapper[4731]: I1129 08:02:11.618090 4731 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d60a6c0f-e98d-476f-a6a0-14c4f446c0c2-utilities\") pod \"redhat-marketplace-kwxkx\" (UID: \"d60a6c0f-e98d-476f-a6a0-14c4f446c0c2\") " pod="openshift-marketplace/redhat-marketplace-kwxkx" Nov 29 08:02:11 crc kubenswrapper[4731]: I1129 08:02:11.618154 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85l4z\" (UniqueName: \"kubernetes.io/projected/d60a6c0f-e98d-476f-a6a0-14c4f446c0c2-kube-api-access-85l4z\") pod \"redhat-marketplace-kwxkx\" (UID: \"d60a6c0f-e98d-476f-a6a0-14c4f446c0c2\") " pod="openshift-marketplace/redhat-marketplace-kwxkx" Nov 29 08:02:11 crc kubenswrapper[4731]: I1129 08:02:11.618468 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d60a6c0f-e98d-476f-a6a0-14c4f446c0c2-catalog-content\") pod \"redhat-marketplace-kwxkx\" (UID: \"d60a6c0f-e98d-476f-a6a0-14c4f446c0c2\") " pod="openshift-marketplace/redhat-marketplace-kwxkx" Nov 29 08:02:11 crc kubenswrapper[4731]: I1129 08:02:11.618804 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d60a6c0f-e98d-476f-a6a0-14c4f446c0c2-utilities\") pod \"redhat-marketplace-kwxkx\" (UID: \"d60a6c0f-e98d-476f-a6a0-14c4f446c0c2\") " pod="openshift-marketplace/redhat-marketplace-kwxkx" Nov 29 08:02:11 crc kubenswrapper[4731]: I1129 08:02:11.652975 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85l4z\" (UniqueName: \"kubernetes.io/projected/d60a6c0f-e98d-476f-a6a0-14c4f446c0c2-kube-api-access-85l4z\") pod \"redhat-marketplace-kwxkx\" (UID: \"d60a6c0f-e98d-476f-a6a0-14c4f446c0c2\") " pod="openshift-marketplace/redhat-marketplace-kwxkx" Nov 29 08:02:11 crc kubenswrapper[4731]: I1129 08:02:11.750394 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kwxkx" Nov 29 08:02:12 crc kubenswrapper[4731]: I1129 08:02:12.302714 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kwxkx"] Nov 29 08:02:12 crc kubenswrapper[4731]: I1129 08:02:12.867383 4731 generic.go:334] "Generic (PLEG): container finished" podID="d60a6c0f-e98d-476f-a6a0-14c4f446c0c2" containerID="f1eb2b33084f9dd2c35421be0529af46d0956d994ac84fdde304dbea2c3b31b2" exitCode=0 Nov 29 08:02:12 crc kubenswrapper[4731]: I1129 08:02:12.867452 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kwxkx" event={"ID":"d60a6c0f-e98d-476f-a6a0-14c4f446c0c2","Type":"ContainerDied","Data":"f1eb2b33084f9dd2c35421be0529af46d0956d994ac84fdde304dbea2c3b31b2"} Nov 29 08:02:12 crc kubenswrapper[4731]: I1129 08:02:12.867491 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kwxkx" event={"ID":"d60a6c0f-e98d-476f-a6a0-14c4f446c0c2","Type":"ContainerStarted","Data":"c16de22dc8ebf79ba16b5670fb106efa1347fc1eeda6ca740143b6fcffaf81af"} Nov 29 08:02:13 crc kubenswrapper[4731]: I1129 08:02:13.887387 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kwxkx" event={"ID":"d60a6c0f-e98d-476f-a6a0-14c4f446c0c2","Type":"ContainerStarted","Data":"6fd04a1cd0d19da2993bb25570d326fb8ad0f29ef897f97d70e4d7d631172ade"} Nov 29 08:02:14 crc kubenswrapper[4731]: I1129 08:02:14.902173 4731 generic.go:334] "Generic (PLEG): container finished" podID="d60a6c0f-e98d-476f-a6a0-14c4f446c0c2" containerID="6fd04a1cd0d19da2993bb25570d326fb8ad0f29ef897f97d70e4d7d631172ade" exitCode=0 Nov 29 08:02:14 crc kubenswrapper[4731]: I1129 08:02:14.902239 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kwxkx" 
event={"ID":"d60a6c0f-e98d-476f-a6a0-14c4f446c0c2","Type":"ContainerDied","Data":"6fd04a1cd0d19da2993bb25570d326fb8ad0f29ef897f97d70e4d7d631172ade"} Nov 29 08:02:15 crc kubenswrapper[4731]: I1129 08:02:15.916116 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kwxkx" event={"ID":"d60a6c0f-e98d-476f-a6a0-14c4f446c0c2","Type":"ContainerStarted","Data":"8d1a708c5083b6a3908050f8afdbefc2285337e109af1d74e6abda897c51134c"} Nov 29 08:02:15 crc kubenswrapper[4731]: I1129 08:02:15.951466 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-kwxkx" podStartSLOduration=2.456145829 podStartE2EDuration="4.951436899s" podCreationTimestamp="2025-11-29 08:02:11 +0000 UTC" firstStartedPulling="2025-11-29 08:02:12.870438852 +0000 UTC m=+3371.760799955" lastFinishedPulling="2025-11-29 08:02:15.365729932 +0000 UTC m=+3374.256091025" observedRunningTime="2025-11-29 08:02:15.948449973 +0000 UTC m=+3374.838811076" watchObservedRunningTime="2025-11-29 08:02:15.951436899 +0000 UTC m=+3374.841798002" Nov 29 08:02:21 crc kubenswrapper[4731]: I1129 08:02:21.750598 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-kwxkx" Nov 29 08:02:21 crc kubenswrapper[4731]: I1129 08:02:21.751369 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-kwxkx" Nov 29 08:02:21 crc kubenswrapper[4731]: I1129 08:02:21.823391 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-kwxkx" Nov 29 08:02:22 crc kubenswrapper[4731]: I1129 08:02:22.022450 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-kwxkx" Nov 29 08:02:22 crc kubenswrapper[4731]: I1129 08:02:22.089348 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-kwxkx"] Nov 29 08:02:23 crc kubenswrapper[4731]: I1129 08:02:23.987307 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-kwxkx" podUID="d60a6c0f-e98d-476f-a6a0-14c4f446c0c2" containerName="registry-server" containerID="cri-o://8d1a708c5083b6a3908050f8afdbefc2285337e109af1d74e6abda897c51134c" gracePeriod=2 Nov 29 08:02:24 crc kubenswrapper[4731]: I1129 08:02:24.528343 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kwxkx" Nov 29 08:02:24 crc kubenswrapper[4731]: I1129 08:02:24.619747 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-85l4z\" (UniqueName: \"kubernetes.io/projected/d60a6c0f-e98d-476f-a6a0-14c4f446c0c2-kube-api-access-85l4z\") pod \"d60a6c0f-e98d-476f-a6a0-14c4f446c0c2\" (UID: \"d60a6c0f-e98d-476f-a6a0-14c4f446c0c2\") " Nov 29 08:02:24 crc kubenswrapper[4731]: I1129 08:02:24.619864 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d60a6c0f-e98d-476f-a6a0-14c4f446c0c2-utilities\") pod \"d60a6c0f-e98d-476f-a6a0-14c4f446c0c2\" (UID: \"d60a6c0f-e98d-476f-a6a0-14c4f446c0c2\") " Nov 29 08:02:24 crc kubenswrapper[4731]: I1129 08:02:24.619919 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d60a6c0f-e98d-476f-a6a0-14c4f446c0c2-catalog-content\") pod \"d60a6c0f-e98d-476f-a6a0-14c4f446c0c2\" (UID: \"d60a6c0f-e98d-476f-a6a0-14c4f446c0c2\") " Nov 29 08:02:24 crc kubenswrapper[4731]: I1129 08:02:24.622066 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d60a6c0f-e98d-476f-a6a0-14c4f446c0c2-utilities" (OuterVolumeSpecName: "utilities") pod "d60a6c0f-e98d-476f-a6a0-14c4f446c0c2" (UID: 
"d60a6c0f-e98d-476f-a6a0-14c4f446c0c2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:02:24 crc kubenswrapper[4731]: I1129 08:02:24.627081 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d60a6c0f-e98d-476f-a6a0-14c4f446c0c2-kube-api-access-85l4z" (OuterVolumeSpecName: "kube-api-access-85l4z") pod "d60a6c0f-e98d-476f-a6a0-14c4f446c0c2" (UID: "d60a6c0f-e98d-476f-a6a0-14c4f446c0c2"). InnerVolumeSpecName "kube-api-access-85l4z". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:02:24 crc kubenswrapper[4731]: I1129 08:02:24.640033 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d60a6c0f-e98d-476f-a6a0-14c4f446c0c2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d60a6c0f-e98d-476f-a6a0-14c4f446c0c2" (UID: "d60a6c0f-e98d-476f-a6a0-14c4f446c0c2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:02:24 crc kubenswrapper[4731]: I1129 08:02:24.722258 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-85l4z\" (UniqueName: \"kubernetes.io/projected/d60a6c0f-e98d-476f-a6a0-14c4f446c0c2-kube-api-access-85l4z\") on node \"crc\" DevicePath \"\"" Nov 29 08:02:24 crc kubenswrapper[4731]: I1129 08:02:24.722294 4731 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d60a6c0f-e98d-476f-a6a0-14c4f446c0c2-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 08:02:24 crc kubenswrapper[4731]: I1129 08:02:24.722306 4731 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d60a6c0f-e98d-476f-a6a0-14c4f446c0c2-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 08:02:24 crc kubenswrapper[4731]: I1129 08:02:24.997427 4731 generic.go:334] "Generic (PLEG): container finished" 
podID="d60a6c0f-e98d-476f-a6a0-14c4f446c0c2" containerID="8d1a708c5083b6a3908050f8afdbefc2285337e109af1d74e6abda897c51134c" exitCode=0 Nov 29 08:02:24 crc kubenswrapper[4731]: I1129 08:02:24.997534 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kwxkx" Nov 29 08:02:24 crc kubenswrapper[4731]: I1129 08:02:24.997525 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kwxkx" event={"ID":"d60a6c0f-e98d-476f-a6a0-14c4f446c0c2","Type":"ContainerDied","Data":"8d1a708c5083b6a3908050f8afdbefc2285337e109af1d74e6abda897c51134c"} Nov 29 08:02:24 crc kubenswrapper[4731]: I1129 08:02:24.997889 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kwxkx" event={"ID":"d60a6c0f-e98d-476f-a6a0-14c4f446c0c2","Type":"ContainerDied","Data":"c16de22dc8ebf79ba16b5670fb106efa1347fc1eeda6ca740143b6fcffaf81af"} Nov 29 08:02:24 crc kubenswrapper[4731]: I1129 08:02:24.997917 4731 scope.go:117] "RemoveContainer" containerID="8d1a708c5083b6a3908050f8afdbefc2285337e109af1d74e6abda897c51134c" Nov 29 08:02:25 crc kubenswrapper[4731]: I1129 08:02:25.031857 4731 scope.go:117] "RemoveContainer" containerID="6fd04a1cd0d19da2993bb25570d326fb8ad0f29ef897f97d70e4d7d631172ade" Nov 29 08:02:25 crc kubenswrapper[4731]: I1129 08:02:25.036602 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kwxkx"] Nov 29 08:02:25 crc kubenswrapper[4731]: I1129 08:02:25.046236 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-kwxkx"] Nov 29 08:02:25 crc kubenswrapper[4731]: I1129 08:02:25.055847 4731 scope.go:117] "RemoveContainer" containerID="f1eb2b33084f9dd2c35421be0529af46d0956d994ac84fdde304dbea2c3b31b2" Nov 29 08:02:25 crc kubenswrapper[4731]: I1129 08:02:25.121207 4731 scope.go:117] "RemoveContainer" 
containerID="8d1a708c5083b6a3908050f8afdbefc2285337e109af1d74e6abda897c51134c" Nov 29 08:02:25 crc kubenswrapper[4731]: E1129 08:02:25.121715 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d1a708c5083b6a3908050f8afdbefc2285337e109af1d74e6abda897c51134c\": container with ID starting with 8d1a708c5083b6a3908050f8afdbefc2285337e109af1d74e6abda897c51134c not found: ID does not exist" containerID="8d1a708c5083b6a3908050f8afdbefc2285337e109af1d74e6abda897c51134c" Nov 29 08:02:25 crc kubenswrapper[4731]: I1129 08:02:25.121756 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d1a708c5083b6a3908050f8afdbefc2285337e109af1d74e6abda897c51134c"} err="failed to get container status \"8d1a708c5083b6a3908050f8afdbefc2285337e109af1d74e6abda897c51134c\": rpc error: code = NotFound desc = could not find container \"8d1a708c5083b6a3908050f8afdbefc2285337e109af1d74e6abda897c51134c\": container with ID starting with 8d1a708c5083b6a3908050f8afdbefc2285337e109af1d74e6abda897c51134c not found: ID does not exist" Nov 29 08:02:25 crc kubenswrapper[4731]: I1129 08:02:25.121788 4731 scope.go:117] "RemoveContainer" containerID="6fd04a1cd0d19da2993bb25570d326fb8ad0f29ef897f97d70e4d7d631172ade" Nov 29 08:02:25 crc kubenswrapper[4731]: E1129 08:02:25.122835 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6fd04a1cd0d19da2993bb25570d326fb8ad0f29ef897f97d70e4d7d631172ade\": container with ID starting with 6fd04a1cd0d19da2993bb25570d326fb8ad0f29ef897f97d70e4d7d631172ade not found: ID does not exist" containerID="6fd04a1cd0d19da2993bb25570d326fb8ad0f29ef897f97d70e4d7d631172ade" Nov 29 08:02:25 crc kubenswrapper[4731]: I1129 08:02:25.122894 4731 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"6fd04a1cd0d19da2993bb25570d326fb8ad0f29ef897f97d70e4d7d631172ade"} err="failed to get container status \"6fd04a1cd0d19da2993bb25570d326fb8ad0f29ef897f97d70e4d7d631172ade\": rpc error: code = NotFound desc = could not find container \"6fd04a1cd0d19da2993bb25570d326fb8ad0f29ef897f97d70e4d7d631172ade\": container with ID starting with 6fd04a1cd0d19da2993bb25570d326fb8ad0f29ef897f97d70e4d7d631172ade not found: ID does not exist" Nov 29 08:02:25 crc kubenswrapper[4731]: I1129 08:02:25.122938 4731 scope.go:117] "RemoveContainer" containerID="f1eb2b33084f9dd2c35421be0529af46d0956d994ac84fdde304dbea2c3b31b2" Nov 29 08:02:25 crc kubenswrapper[4731]: E1129 08:02:25.123252 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1eb2b33084f9dd2c35421be0529af46d0956d994ac84fdde304dbea2c3b31b2\": container with ID starting with f1eb2b33084f9dd2c35421be0529af46d0956d994ac84fdde304dbea2c3b31b2 not found: ID does not exist" containerID="f1eb2b33084f9dd2c35421be0529af46d0956d994ac84fdde304dbea2c3b31b2" Nov 29 08:02:25 crc kubenswrapper[4731]: I1129 08:02:25.123280 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1eb2b33084f9dd2c35421be0529af46d0956d994ac84fdde304dbea2c3b31b2"} err="failed to get container status \"f1eb2b33084f9dd2c35421be0529af46d0956d994ac84fdde304dbea2c3b31b2\": rpc error: code = NotFound desc = could not find container \"f1eb2b33084f9dd2c35421be0529af46d0956d994ac84fdde304dbea2c3b31b2\": container with ID starting with f1eb2b33084f9dd2c35421be0529af46d0956d994ac84fdde304dbea2c3b31b2 not found: ID does not exist" Nov 29 08:02:25 crc kubenswrapper[4731]: I1129 08:02:25.817989 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d60a6c0f-e98d-476f-a6a0-14c4f446c0c2" path="/var/lib/kubelet/pods/d60a6c0f-e98d-476f-a6a0-14c4f446c0c2/volumes" Nov 29 08:02:33 crc kubenswrapper[4731]: I1129 
08:02:33.002889 4731 patch_prober.go:28] interesting pod/machine-config-daemon-rscr8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 08:02:33 crc kubenswrapper[4731]: I1129 08:02:33.003379 4731 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 08:03:03 crc kubenswrapper[4731]: I1129 08:03:03.002889 4731 patch_prober.go:28] interesting pod/machine-config-daemon-rscr8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 08:03:03 crc kubenswrapper[4731]: I1129 08:03:03.003438 4731 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 08:03:03 crc kubenswrapper[4731]: I1129 08:03:03.003495 4731 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" Nov 29 08:03:03 crc kubenswrapper[4731]: I1129 08:03:03.004429 4731 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c7a11895e241cfc66bf29bd6921c4df58deeaf89420e29adc1811493afd2519c"} pod="openshift-machine-config-operator/machine-config-daemon-rscr8" 
containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 29 08:03:03 crc kubenswrapper[4731]: I1129 08:03:03.004481 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" containerName="machine-config-daemon" containerID="cri-o://c7a11895e241cfc66bf29bd6921c4df58deeaf89420e29adc1811493afd2519c" gracePeriod=600 Nov 29 08:03:03 crc kubenswrapper[4731]: I1129 08:03:03.382660 4731 generic.go:334] "Generic (PLEG): container finished" podID="2302dbb7-38db-4752-a5d0-2d055da3aec3" containerID="c7a11895e241cfc66bf29bd6921c4df58deeaf89420e29adc1811493afd2519c" exitCode=0 Nov 29 08:03:03 crc kubenswrapper[4731]: I1129 08:03:03.382748 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" event={"ID":"2302dbb7-38db-4752-a5d0-2d055da3aec3","Type":"ContainerDied","Data":"c7a11895e241cfc66bf29bd6921c4df58deeaf89420e29adc1811493afd2519c"} Nov 29 08:03:03 crc kubenswrapper[4731]: I1129 08:03:03.383050 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" event={"ID":"2302dbb7-38db-4752-a5d0-2d055da3aec3","Type":"ContainerStarted","Data":"f90654bd884e42905839b8345b957763d059421b4be9cf70d23e4e405dad6e51"} Nov 29 08:03:03 crc kubenswrapper[4731]: I1129 08:03:03.383082 4731 scope.go:117] "RemoveContainer" containerID="a820b45e9ece2d10a7875ab0f15887682131ca0d435fd77c25457bfb1abeed7f" Nov 29 08:05:03 crc kubenswrapper[4731]: I1129 08:05:03.002258 4731 patch_prober.go:28] interesting pod/machine-config-daemon-rscr8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 08:05:03 crc kubenswrapper[4731]: I1129 
08:05:03.002839 4731 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 08:05:33 crc kubenswrapper[4731]: I1129 08:05:33.003062 4731 patch_prober.go:28] interesting pod/machine-config-daemon-rscr8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 08:05:33 crc kubenswrapper[4731]: I1129 08:05:33.003674 4731 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 08:06:03 crc kubenswrapper[4731]: I1129 08:06:03.002964 4731 patch_prober.go:28] interesting pod/machine-config-daemon-rscr8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 08:06:03 crc kubenswrapper[4731]: I1129 08:06:03.003456 4731 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 08:06:03 crc kubenswrapper[4731]: I1129 08:06:03.003502 4731 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-rscr8" Nov 29 08:06:03 crc kubenswrapper[4731]: I1129 08:06:03.004418 4731 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f90654bd884e42905839b8345b957763d059421b4be9cf70d23e4e405dad6e51"} pod="openshift-machine-config-operator/machine-config-daemon-rscr8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 29 08:06:03 crc kubenswrapper[4731]: I1129 08:06:03.004476 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" containerName="machine-config-daemon" containerID="cri-o://f90654bd884e42905839b8345b957763d059421b4be9cf70d23e4e405dad6e51" gracePeriod=600 Nov 29 08:06:03 crc kubenswrapper[4731]: E1129 08:06:03.134610 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" Nov 29 08:06:03 crc kubenswrapper[4731]: I1129 08:06:03.207949 4731 generic.go:334] "Generic (PLEG): container finished" podID="2302dbb7-38db-4752-a5d0-2d055da3aec3" containerID="f90654bd884e42905839b8345b957763d059421b4be9cf70d23e4e405dad6e51" exitCode=0 Nov 29 08:06:03 crc kubenswrapper[4731]: I1129 08:06:03.207989 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" event={"ID":"2302dbb7-38db-4752-a5d0-2d055da3aec3","Type":"ContainerDied","Data":"f90654bd884e42905839b8345b957763d059421b4be9cf70d23e4e405dad6e51"} Nov 29 08:06:03 crc 
kubenswrapper[4731]: I1129 08:06:03.208368 4731 scope.go:117] "RemoveContainer" containerID="c7a11895e241cfc66bf29bd6921c4df58deeaf89420e29adc1811493afd2519c" Nov 29 08:06:03 crc kubenswrapper[4731]: I1129 08:06:03.209256 4731 scope.go:117] "RemoveContainer" containerID="f90654bd884e42905839b8345b957763d059421b4be9cf70d23e4e405dad6e51" Nov 29 08:06:03 crc kubenswrapper[4731]: E1129 08:06:03.209589 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" Nov 29 08:06:04 crc kubenswrapper[4731]: I1129 08:06:04.626209 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rlkzr"] Nov 29 08:06:04 crc kubenswrapper[4731]: E1129 08:06:04.626906 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d60a6c0f-e98d-476f-a6a0-14c4f446c0c2" containerName="registry-server" Nov 29 08:06:04 crc kubenswrapper[4731]: I1129 08:06:04.627918 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="d60a6c0f-e98d-476f-a6a0-14c4f446c0c2" containerName="registry-server" Nov 29 08:06:04 crc kubenswrapper[4731]: E1129 08:06:04.628138 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d60a6c0f-e98d-476f-a6a0-14c4f446c0c2" containerName="extract-content" Nov 29 08:06:04 crc kubenswrapper[4731]: I1129 08:06:04.628152 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="d60a6c0f-e98d-476f-a6a0-14c4f446c0c2" containerName="extract-content" Nov 29 08:06:04 crc kubenswrapper[4731]: E1129 08:06:04.628195 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d60a6c0f-e98d-476f-a6a0-14c4f446c0c2" 
containerName="extract-utilities" Nov 29 08:06:04 crc kubenswrapper[4731]: I1129 08:06:04.628203 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="d60a6c0f-e98d-476f-a6a0-14c4f446c0c2" containerName="extract-utilities" Nov 29 08:06:04 crc kubenswrapper[4731]: I1129 08:06:04.629701 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="d60a6c0f-e98d-476f-a6a0-14c4f446c0c2" containerName="registry-server" Nov 29 08:06:04 crc kubenswrapper[4731]: I1129 08:06:04.632642 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rlkzr" Nov 29 08:06:04 crc kubenswrapper[4731]: I1129 08:06:04.639102 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rlkzr"] Nov 29 08:06:04 crc kubenswrapper[4731]: I1129 08:06:04.756850 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66421d60-0239-47b1-9991-1c4bd06ff557-utilities\") pod \"certified-operators-rlkzr\" (UID: \"66421d60-0239-47b1-9991-1c4bd06ff557\") " pod="openshift-marketplace/certified-operators-rlkzr" Nov 29 08:06:04 crc kubenswrapper[4731]: I1129 08:06:04.757230 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpd8j\" (UniqueName: \"kubernetes.io/projected/66421d60-0239-47b1-9991-1c4bd06ff557-kube-api-access-kpd8j\") pod \"certified-operators-rlkzr\" (UID: \"66421d60-0239-47b1-9991-1c4bd06ff557\") " pod="openshift-marketplace/certified-operators-rlkzr" Nov 29 08:06:04 crc kubenswrapper[4731]: I1129 08:06:04.757276 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66421d60-0239-47b1-9991-1c4bd06ff557-catalog-content\") pod \"certified-operators-rlkzr\" (UID: \"66421d60-0239-47b1-9991-1c4bd06ff557\") " 
pod="openshift-marketplace/certified-operators-rlkzr" Nov 29 08:06:04 crc kubenswrapper[4731]: I1129 08:06:04.858755 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kpd8j\" (UniqueName: \"kubernetes.io/projected/66421d60-0239-47b1-9991-1c4bd06ff557-kube-api-access-kpd8j\") pod \"certified-operators-rlkzr\" (UID: \"66421d60-0239-47b1-9991-1c4bd06ff557\") " pod="openshift-marketplace/certified-operators-rlkzr" Nov 29 08:06:04 crc kubenswrapper[4731]: I1129 08:06:04.858833 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66421d60-0239-47b1-9991-1c4bd06ff557-catalog-content\") pod \"certified-operators-rlkzr\" (UID: \"66421d60-0239-47b1-9991-1c4bd06ff557\") " pod="openshift-marketplace/certified-operators-rlkzr" Nov 29 08:06:04 crc kubenswrapper[4731]: I1129 08:06:04.858977 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66421d60-0239-47b1-9991-1c4bd06ff557-utilities\") pod \"certified-operators-rlkzr\" (UID: \"66421d60-0239-47b1-9991-1c4bd06ff557\") " pod="openshift-marketplace/certified-operators-rlkzr" Nov 29 08:06:04 crc kubenswrapper[4731]: I1129 08:06:04.859696 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66421d60-0239-47b1-9991-1c4bd06ff557-utilities\") pod \"certified-operators-rlkzr\" (UID: \"66421d60-0239-47b1-9991-1c4bd06ff557\") " pod="openshift-marketplace/certified-operators-rlkzr" Nov 29 08:06:04 crc kubenswrapper[4731]: I1129 08:06:04.859706 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66421d60-0239-47b1-9991-1c4bd06ff557-catalog-content\") pod \"certified-operators-rlkzr\" (UID: \"66421d60-0239-47b1-9991-1c4bd06ff557\") " 
pod="openshift-marketplace/certified-operators-rlkzr" Nov 29 08:06:04 crc kubenswrapper[4731]: I1129 08:06:04.887240 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kpd8j\" (UniqueName: \"kubernetes.io/projected/66421d60-0239-47b1-9991-1c4bd06ff557-kube-api-access-kpd8j\") pod \"certified-operators-rlkzr\" (UID: \"66421d60-0239-47b1-9991-1c4bd06ff557\") " pod="openshift-marketplace/certified-operators-rlkzr" Nov 29 08:06:04 crc kubenswrapper[4731]: I1129 08:06:04.962662 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rlkzr" Nov 29 08:06:05 crc kubenswrapper[4731]: I1129 08:06:05.529996 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rlkzr"] Nov 29 08:06:06 crc kubenswrapper[4731]: I1129 08:06:06.244182 4731 generic.go:334] "Generic (PLEG): container finished" podID="66421d60-0239-47b1-9991-1c4bd06ff557" containerID="94920ac41a2321c98c51d1ad921266bdab670d051b37f73ae81585bd1f826cff" exitCode=0 Nov 29 08:06:06 crc kubenswrapper[4731]: I1129 08:06:06.244294 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rlkzr" event={"ID":"66421d60-0239-47b1-9991-1c4bd06ff557","Type":"ContainerDied","Data":"94920ac41a2321c98c51d1ad921266bdab670d051b37f73ae81585bd1f826cff"} Nov 29 08:06:06 crc kubenswrapper[4731]: I1129 08:06:06.244590 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rlkzr" event={"ID":"66421d60-0239-47b1-9991-1c4bd06ff557","Type":"ContainerStarted","Data":"c2f5c294af27550d68943a87960294987acb603340487990aee209ce7e24db1e"} Nov 29 08:06:06 crc kubenswrapper[4731]: I1129 08:06:06.246798 4731 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 29 08:06:06 crc kubenswrapper[4731]: I1129 08:06:06.810427 4731 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/community-operators-f2cjx"] Nov 29 08:06:06 crc kubenswrapper[4731]: I1129 08:06:06.813501 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-f2cjx" Nov 29 08:06:06 crc kubenswrapper[4731]: I1129 08:06:06.843445 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-f2cjx"] Nov 29 08:06:06 crc kubenswrapper[4731]: I1129 08:06:06.901807 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19a31044-c719-4f44-8b0b-9b5a680b695d-catalog-content\") pod \"community-operators-f2cjx\" (UID: \"19a31044-c719-4f44-8b0b-9b5a680b695d\") " pod="openshift-marketplace/community-operators-f2cjx" Nov 29 08:06:06 crc kubenswrapper[4731]: I1129 08:06:06.902280 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19a31044-c719-4f44-8b0b-9b5a680b695d-utilities\") pod \"community-operators-f2cjx\" (UID: \"19a31044-c719-4f44-8b0b-9b5a680b695d\") " pod="openshift-marketplace/community-operators-f2cjx" Nov 29 08:06:06 crc kubenswrapper[4731]: I1129 08:06:06.902742 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kg5cp\" (UniqueName: \"kubernetes.io/projected/19a31044-c719-4f44-8b0b-9b5a680b695d-kube-api-access-kg5cp\") pod \"community-operators-f2cjx\" (UID: \"19a31044-c719-4f44-8b0b-9b5a680b695d\") " pod="openshift-marketplace/community-operators-f2cjx" Nov 29 08:06:07 crc kubenswrapper[4731]: I1129 08:06:07.005136 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kg5cp\" (UniqueName: \"kubernetes.io/projected/19a31044-c719-4f44-8b0b-9b5a680b695d-kube-api-access-kg5cp\") pod \"community-operators-f2cjx\" (UID: 
\"19a31044-c719-4f44-8b0b-9b5a680b695d\") " pod="openshift-marketplace/community-operators-f2cjx" Nov 29 08:06:07 crc kubenswrapper[4731]: I1129 08:06:07.005488 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19a31044-c719-4f44-8b0b-9b5a680b695d-catalog-content\") pod \"community-operators-f2cjx\" (UID: \"19a31044-c719-4f44-8b0b-9b5a680b695d\") " pod="openshift-marketplace/community-operators-f2cjx" Nov 29 08:06:07 crc kubenswrapper[4731]: I1129 08:06:07.005648 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19a31044-c719-4f44-8b0b-9b5a680b695d-utilities\") pod \"community-operators-f2cjx\" (UID: \"19a31044-c719-4f44-8b0b-9b5a680b695d\") " pod="openshift-marketplace/community-operators-f2cjx" Nov 29 08:06:07 crc kubenswrapper[4731]: I1129 08:06:07.006086 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19a31044-c719-4f44-8b0b-9b5a680b695d-catalog-content\") pod \"community-operators-f2cjx\" (UID: \"19a31044-c719-4f44-8b0b-9b5a680b695d\") " pod="openshift-marketplace/community-operators-f2cjx" Nov 29 08:06:07 crc kubenswrapper[4731]: I1129 08:06:07.006184 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19a31044-c719-4f44-8b0b-9b5a680b695d-utilities\") pod \"community-operators-f2cjx\" (UID: \"19a31044-c719-4f44-8b0b-9b5a680b695d\") " pod="openshift-marketplace/community-operators-f2cjx" Nov 29 08:06:07 crc kubenswrapper[4731]: I1129 08:06:07.024103 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kg5cp\" (UniqueName: \"kubernetes.io/projected/19a31044-c719-4f44-8b0b-9b5a680b695d-kube-api-access-kg5cp\") pod \"community-operators-f2cjx\" (UID: \"19a31044-c719-4f44-8b0b-9b5a680b695d\") " 
pod="openshift-marketplace/community-operators-f2cjx" Nov 29 08:06:07 crc kubenswrapper[4731]: I1129 08:06:07.154862 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-f2cjx" Nov 29 08:06:07 crc kubenswrapper[4731]: I1129 08:06:07.785165 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-f2cjx"] Nov 29 08:06:08 crc kubenswrapper[4731]: I1129 08:06:08.268560 4731 generic.go:334] "Generic (PLEG): container finished" podID="66421d60-0239-47b1-9991-1c4bd06ff557" containerID="82b29aad9531205a1ec361a39a3fa957934f200f7a44e7a91e03fc96693b4385" exitCode=0 Nov 29 08:06:08 crc kubenswrapper[4731]: I1129 08:06:08.269162 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rlkzr" event={"ID":"66421d60-0239-47b1-9991-1c4bd06ff557","Type":"ContainerDied","Data":"82b29aad9531205a1ec361a39a3fa957934f200f7a44e7a91e03fc96693b4385"} Nov 29 08:06:08 crc kubenswrapper[4731]: I1129 08:06:08.271292 4731 generic.go:334] "Generic (PLEG): container finished" podID="19a31044-c719-4f44-8b0b-9b5a680b695d" containerID="fbd4da24f66535f483c2d11c4c3b3dbb883d4c045c8441fe609669dd2bea6961" exitCode=0 Nov 29 08:06:08 crc kubenswrapper[4731]: I1129 08:06:08.271371 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f2cjx" event={"ID":"19a31044-c719-4f44-8b0b-9b5a680b695d","Type":"ContainerDied","Data":"fbd4da24f66535f483c2d11c4c3b3dbb883d4c045c8441fe609669dd2bea6961"} Nov 29 08:06:08 crc kubenswrapper[4731]: I1129 08:06:08.271429 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f2cjx" event={"ID":"19a31044-c719-4f44-8b0b-9b5a680b695d","Type":"ContainerStarted","Data":"b37020a9b221b97c1bdd5e8bef73bbef2be54bd9d0906c8c668bb3971a481854"} Nov 29 08:06:09 crc kubenswrapper[4731]: I1129 08:06:09.288930 4731 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/certified-operators-rlkzr" event={"ID":"66421d60-0239-47b1-9991-1c4bd06ff557","Type":"ContainerStarted","Data":"2ff9e6b8ec17b51b8959bd289f1bd915adaee7e5ad9ed5142b70cfe018dd9119"} Nov 29 08:06:12 crc kubenswrapper[4731]: I1129 08:06:12.318977 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f2cjx" event={"ID":"19a31044-c719-4f44-8b0b-9b5a680b695d","Type":"ContainerStarted","Data":"16a94b32104f2db53956373f6b534f71f6ba4b03e91158e8748c979139f91e57"} Nov 29 08:06:12 crc kubenswrapper[4731]: I1129 08:06:12.343300 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rlkzr" podStartSLOduration=5.862645792 podStartE2EDuration="8.343277286s" podCreationTimestamp="2025-11-29 08:06:04 +0000 UTC" firstStartedPulling="2025-11-29 08:06:06.246472555 +0000 UTC m=+3605.136833648" lastFinishedPulling="2025-11-29 08:06:08.727104039 +0000 UTC m=+3607.617465142" observedRunningTime="2025-11-29 08:06:09.30788654 +0000 UTC m=+3608.198247643" watchObservedRunningTime="2025-11-29 08:06:12.343277286 +0000 UTC m=+3611.233638389" Nov 29 08:06:13 crc kubenswrapper[4731]: I1129 08:06:13.334076 4731 generic.go:334] "Generic (PLEG): container finished" podID="19a31044-c719-4f44-8b0b-9b5a680b695d" containerID="16a94b32104f2db53956373f6b534f71f6ba4b03e91158e8748c979139f91e57" exitCode=0 Nov 29 08:06:13 crc kubenswrapper[4731]: I1129 08:06:13.334187 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f2cjx" event={"ID":"19a31044-c719-4f44-8b0b-9b5a680b695d","Type":"ContainerDied","Data":"16a94b32104f2db53956373f6b534f71f6ba4b03e91158e8748c979139f91e57"} Nov 29 08:06:14 crc kubenswrapper[4731]: I1129 08:06:14.346282 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f2cjx" 
event={"ID":"19a31044-c719-4f44-8b0b-9b5a680b695d","Type":"ContainerStarted","Data":"b3b36de444243cc53ba120c6a0f9b226f0210396aca4e4256f30eaf012de6937"} Nov 29 08:06:14 crc kubenswrapper[4731]: I1129 08:06:14.377080 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-f2cjx" podStartSLOduration=2.625005234 podStartE2EDuration="8.377057466s" podCreationTimestamp="2025-11-29 08:06:06 +0000 UTC" firstStartedPulling="2025-11-29 08:06:08.272845733 +0000 UTC m=+3607.163206836" lastFinishedPulling="2025-11-29 08:06:14.024897965 +0000 UTC m=+3612.915259068" observedRunningTime="2025-11-29 08:06:14.364554167 +0000 UTC m=+3613.254915280" watchObservedRunningTime="2025-11-29 08:06:14.377057466 +0000 UTC m=+3613.267418569" Nov 29 08:06:14 crc kubenswrapper[4731]: I1129 08:06:14.963342 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rlkzr" Nov 29 08:06:14 crc kubenswrapper[4731]: I1129 08:06:14.963413 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rlkzr" Nov 29 08:06:15 crc kubenswrapper[4731]: I1129 08:06:15.016937 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rlkzr" Nov 29 08:06:15 crc kubenswrapper[4731]: I1129 08:06:15.406862 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rlkzr" Nov 29 08:06:17 crc kubenswrapper[4731]: I1129 08:06:17.156252 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-f2cjx" Nov 29 08:06:17 crc kubenswrapper[4731]: I1129 08:06:17.156307 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-f2cjx" Nov 29 08:06:17 crc kubenswrapper[4731]: I1129 08:06:17.204225 4731 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-f2cjx" Nov 29 08:06:17 crc kubenswrapper[4731]: I1129 08:06:17.399965 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rlkzr"] Nov 29 08:06:17 crc kubenswrapper[4731]: I1129 08:06:17.400636 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rlkzr" podUID="66421d60-0239-47b1-9991-1c4bd06ff557" containerName="registry-server" containerID="cri-o://2ff9e6b8ec17b51b8959bd289f1bd915adaee7e5ad9ed5142b70cfe018dd9119" gracePeriod=2 Nov 29 08:06:17 crc kubenswrapper[4731]: I1129 08:06:17.806552 4731 scope.go:117] "RemoveContainer" containerID="f90654bd884e42905839b8345b957763d059421b4be9cf70d23e4e405dad6e51" Nov 29 08:06:17 crc kubenswrapper[4731]: E1129 08:06:17.806966 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" Nov 29 08:06:17 crc kubenswrapper[4731]: I1129 08:06:17.891416 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rlkzr" Nov 29 08:06:18 crc kubenswrapper[4731]: I1129 08:06:18.074316 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66421d60-0239-47b1-9991-1c4bd06ff557-utilities\") pod \"66421d60-0239-47b1-9991-1c4bd06ff557\" (UID: \"66421d60-0239-47b1-9991-1c4bd06ff557\") " Nov 29 08:06:18 crc kubenswrapper[4731]: I1129 08:06:18.074617 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66421d60-0239-47b1-9991-1c4bd06ff557-catalog-content\") pod \"66421d60-0239-47b1-9991-1c4bd06ff557\" (UID: \"66421d60-0239-47b1-9991-1c4bd06ff557\") " Nov 29 08:06:18 crc kubenswrapper[4731]: I1129 08:06:18.074698 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kpd8j\" (UniqueName: \"kubernetes.io/projected/66421d60-0239-47b1-9991-1c4bd06ff557-kube-api-access-kpd8j\") pod \"66421d60-0239-47b1-9991-1c4bd06ff557\" (UID: \"66421d60-0239-47b1-9991-1c4bd06ff557\") " Nov 29 08:06:18 crc kubenswrapper[4731]: I1129 08:06:18.075043 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66421d60-0239-47b1-9991-1c4bd06ff557-utilities" (OuterVolumeSpecName: "utilities") pod "66421d60-0239-47b1-9991-1c4bd06ff557" (UID: "66421d60-0239-47b1-9991-1c4bd06ff557"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:06:18 crc kubenswrapper[4731]: I1129 08:06:18.075607 4731 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66421d60-0239-47b1-9991-1c4bd06ff557-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 08:06:18 crc kubenswrapper[4731]: I1129 08:06:18.081270 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66421d60-0239-47b1-9991-1c4bd06ff557-kube-api-access-kpd8j" (OuterVolumeSpecName: "kube-api-access-kpd8j") pod "66421d60-0239-47b1-9991-1c4bd06ff557" (UID: "66421d60-0239-47b1-9991-1c4bd06ff557"). InnerVolumeSpecName "kube-api-access-kpd8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:06:18 crc kubenswrapper[4731]: I1129 08:06:18.125860 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66421d60-0239-47b1-9991-1c4bd06ff557-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "66421d60-0239-47b1-9991-1c4bd06ff557" (UID: "66421d60-0239-47b1-9991-1c4bd06ff557"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:06:18 crc kubenswrapper[4731]: I1129 08:06:18.177772 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kpd8j\" (UniqueName: \"kubernetes.io/projected/66421d60-0239-47b1-9991-1c4bd06ff557-kube-api-access-kpd8j\") on node \"crc\" DevicePath \"\"" Nov 29 08:06:18 crc kubenswrapper[4731]: I1129 08:06:18.177813 4731 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66421d60-0239-47b1-9991-1c4bd06ff557-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 08:06:18 crc kubenswrapper[4731]: I1129 08:06:18.389654 4731 generic.go:334] "Generic (PLEG): container finished" podID="66421d60-0239-47b1-9991-1c4bd06ff557" containerID="2ff9e6b8ec17b51b8959bd289f1bd915adaee7e5ad9ed5142b70cfe018dd9119" exitCode=0 Nov 29 08:06:18 crc kubenswrapper[4731]: I1129 08:06:18.389773 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rlkzr" Nov 29 08:06:18 crc kubenswrapper[4731]: I1129 08:06:18.390082 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rlkzr" event={"ID":"66421d60-0239-47b1-9991-1c4bd06ff557","Type":"ContainerDied","Data":"2ff9e6b8ec17b51b8959bd289f1bd915adaee7e5ad9ed5142b70cfe018dd9119"} Nov 29 08:06:18 crc kubenswrapper[4731]: I1129 08:06:18.390191 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rlkzr" event={"ID":"66421d60-0239-47b1-9991-1c4bd06ff557","Type":"ContainerDied","Data":"c2f5c294af27550d68943a87960294987acb603340487990aee209ce7e24db1e"} Nov 29 08:06:18 crc kubenswrapper[4731]: I1129 08:06:18.390262 4731 scope.go:117] "RemoveContainer" containerID="2ff9e6b8ec17b51b8959bd289f1bd915adaee7e5ad9ed5142b70cfe018dd9119" Nov 29 08:06:18 crc kubenswrapper[4731]: I1129 08:06:18.415224 4731 scope.go:117] "RemoveContainer" 
containerID="82b29aad9531205a1ec361a39a3fa957934f200f7a44e7a91e03fc96693b4385" Nov 29 08:06:18 crc kubenswrapper[4731]: I1129 08:06:18.428913 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rlkzr"] Nov 29 08:06:18 crc kubenswrapper[4731]: I1129 08:06:18.439738 4731 scope.go:117] "RemoveContainer" containerID="94920ac41a2321c98c51d1ad921266bdab670d051b37f73ae81585bd1f826cff" Nov 29 08:06:18 crc kubenswrapper[4731]: I1129 08:06:18.441157 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rlkzr"] Nov 29 08:06:18 crc kubenswrapper[4731]: I1129 08:06:18.483620 4731 scope.go:117] "RemoveContainer" containerID="2ff9e6b8ec17b51b8959bd289f1bd915adaee7e5ad9ed5142b70cfe018dd9119" Nov 29 08:06:18 crc kubenswrapper[4731]: E1129 08:06:18.484283 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ff9e6b8ec17b51b8959bd289f1bd915adaee7e5ad9ed5142b70cfe018dd9119\": container with ID starting with 2ff9e6b8ec17b51b8959bd289f1bd915adaee7e5ad9ed5142b70cfe018dd9119 not found: ID does not exist" containerID="2ff9e6b8ec17b51b8959bd289f1bd915adaee7e5ad9ed5142b70cfe018dd9119" Nov 29 08:06:18 crc kubenswrapper[4731]: I1129 08:06:18.484334 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ff9e6b8ec17b51b8959bd289f1bd915adaee7e5ad9ed5142b70cfe018dd9119"} err="failed to get container status \"2ff9e6b8ec17b51b8959bd289f1bd915adaee7e5ad9ed5142b70cfe018dd9119\": rpc error: code = NotFound desc = could not find container \"2ff9e6b8ec17b51b8959bd289f1bd915adaee7e5ad9ed5142b70cfe018dd9119\": container with ID starting with 2ff9e6b8ec17b51b8959bd289f1bd915adaee7e5ad9ed5142b70cfe018dd9119 not found: ID does not exist" Nov 29 08:06:18 crc kubenswrapper[4731]: I1129 08:06:18.484372 4731 scope.go:117] "RemoveContainer" 
containerID="82b29aad9531205a1ec361a39a3fa957934f200f7a44e7a91e03fc96693b4385" Nov 29 08:06:18 crc kubenswrapper[4731]: E1129 08:06:18.484826 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82b29aad9531205a1ec361a39a3fa957934f200f7a44e7a91e03fc96693b4385\": container with ID starting with 82b29aad9531205a1ec361a39a3fa957934f200f7a44e7a91e03fc96693b4385 not found: ID does not exist" containerID="82b29aad9531205a1ec361a39a3fa957934f200f7a44e7a91e03fc96693b4385" Nov 29 08:06:18 crc kubenswrapper[4731]: I1129 08:06:18.484892 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82b29aad9531205a1ec361a39a3fa957934f200f7a44e7a91e03fc96693b4385"} err="failed to get container status \"82b29aad9531205a1ec361a39a3fa957934f200f7a44e7a91e03fc96693b4385\": rpc error: code = NotFound desc = could not find container \"82b29aad9531205a1ec361a39a3fa957934f200f7a44e7a91e03fc96693b4385\": container with ID starting with 82b29aad9531205a1ec361a39a3fa957934f200f7a44e7a91e03fc96693b4385 not found: ID does not exist" Nov 29 08:06:18 crc kubenswrapper[4731]: I1129 08:06:18.484923 4731 scope.go:117] "RemoveContainer" containerID="94920ac41a2321c98c51d1ad921266bdab670d051b37f73ae81585bd1f826cff" Nov 29 08:06:18 crc kubenswrapper[4731]: E1129 08:06:18.485255 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94920ac41a2321c98c51d1ad921266bdab670d051b37f73ae81585bd1f826cff\": container with ID starting with 94920ac41a2321c98c51d1ad921266bdab670d051b37f73ae81585bd1f826cff not found: ID does not exist" containerID="94920ac41a2321c98c51d1ad921266bdab670d051b37f73ae81585bd1f826cff" Nov 29 08:06:18 crc kubenswrapper[4731]: I1129 08:06:18.485278 4731 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"94920ac41a2321c98c51d1ad921266bdab670d051b37f73ae81585bd1f826cff"} err="failed to get container status \"94920ac41a2321c98c51d1ad921266bdab670d051b37f73ae81585bd1f826cff\": rpc error: code = NotFound desc = could not find container \"94920ac41a2321c98c51d1ad921266bdab670d051b37f73ae81585bd1f826cff\": container with ID starting with 94920ac41a2321c98c51d1ad921266bdab670d051b37f73ae81585bd1f826cff not found: ID does not exist" Nov 29 08:06:19 crc kubenswrapper[4731]: I1129 08:06:19.818996 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66421d60-0239-47b1-9991-1c4bd06ff557" path="/var/lib/kubelet/pods/66421d60-0239-47b1-9991-1c4bd06ff557/volumes" Nov 29 08:06:27 crc kubenswrapper[4731]: I1129 08:06:27.227419 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-f2cjx" Nov 29 08:06:27 crc kubenswrapper[4731]: I1129 08:06:27.301161 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-f2cjx"] Nov 29 08:06:27 crc kubenswrapper[4731]: I1129 08:06:27.337395 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-shxb6"] Nov 29 08:06:27 crc kubenswrapper[4731]: I1129 08:06:27.338200 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-shxb6" podUID="406d98ce-84d9-4d8a-8567-9b82123cf323" containerName="registry-server" containerID="cri-o://9432ae2c19488d8d99c445ab27d505c1cd8f0680ce71fa51d4b9315dffd722c1" gracePeriod=2 Nov 29 08:06:27 crc kubenswrapper[4731]: I1129 08:06:27.485307 4731 generic.go:334] "Generic (PLEG): container finished" podID="406d98ce-84d9-4d8a-8567-9b82123cf323" containerID="9432ae2c19488d8d99c445ab27d505c1cd8f0680ce71fa51d4b9315dffd722c1" exitCode=0 Nov 29 08:06:27 crc kubenswrapper[4731]: I1129 08:06:27.485923 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-shxb6" event={"ID":"406d98ce-84d9-4d8a-8567-9b82123cf323","Type":"ContainerDied","Data":"9432ae2c19488d8d99c445ab27d505c1cd8f0680ce71fa51d4b9315dffd722c1"} Nov 29 08:06:27 crc kubenswrapper[4731]: I1129 08:06:27.933804 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-shxb6" Nov 29 08:06:28 crc kubenswrapper[4731]: I1129 08:06:28.076816 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/406d98ce-84d9-4d8a-8567-9b82123cf323-utilities\") pod \"406d98ce-84d9-4d8a-8567-9b82123cf323\" (UID: \"406d98ce-84d9-4d8a-8567-9b82123cf323\") " Nov 29 08:06:28 crc kubenswrapper[4731]: I1129 08:06:28.076868 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/406d98ce-84d9-4d8a-8567-9b82123cf323-catalog-content\") pod \"406d98ce-84d9-4d8a-8567-9b82123cf323\" (UID: \"406d98ce-84d9-4d8a-8567-9b82123cf323\") " Nov 29 08:06:28 crc kubenswrapper[4731]: I1129 08:06:28.076944 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwc8w\" (UniqueName: \"kubernetes.io/projected/406d98ce-84d9-4d8a-8567-9b82123cf323-kube-api-access-pwc8w\") pod \"406d98ce-84d9-4d8a-8567-9b82123cf323\" (UID: \"406d98ce-84d9-4d8a-8567-9b82123cf323\") " Nov 29 08:06:28 crc kubenswrapper[4731]: I1129 08:06:28.081773 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/406d98ce-84d9-4d8a-8567-9b82123cf323-utilities" (OuterVolumeSpecName: "utilities") pod "406d98ce-84d9-4d8a-8567-9b82123cf323" (UID: "406d98ce-84d9-4d8a-8567-9b82123cf323"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:06:28 crc kubenswrapper[4731]: I1129 08:06:28.095412 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/406d98ce-84d9-4d8a-8567-9b82123cf323-kube-api-access-pwc8w" (OuterVolumeSpecName: "kube-api-access-pwc8w") pod "406d98ce-84d9-4d8a-8567-9b82123cf323" (UID: "406d98ce-84d9-4d8a-8567-9b82123cf323"). InnerVolumeSpecName "kube-api-access-pwc8w". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:06:28 crc kubenswrapper[4731]: I1129 08:06:28.137076 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/406d98ce-84d9-4d8a-8567-9b82123cf323-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "406d98ce-84d9-4d8a-8567-9b82123cf323" (UID: "406d98ce-84d9-4d8a-8567-9b82123cf323"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:06:28 crc kubenswrapper[4731]: I1129 08:06:28.184961 4731 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/406d98ce-84d9-4d8a-8567-9b82123cf323-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 08:06:28 crc kubenswrapper[4731]: I1129 08:06:28.185035 4731 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/406d98ce-84d9-4d8a-8567-9b82123cf323-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 08:06:28 crc kubenswrapper[4731]: I1129 08:06:28.185052 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pwc8w\" (UniqueName: \"kubernetes.io/projected/406d98ce-84d9-4d8a-8567-9b82123cf323-kube-api-access-pwc8w\") on node \"crc\" DevicePath \"\"" Nov 29 08:06:28 crc kubenswrapper[4731]: I1129 08:06:28.495901 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-shxb6" 
event={"ID":"406d98ce-84d9-4d8a-8567-9b82123cf323","Type":"ContainerDied","Data":"cb8f1a06d48dcb711578eccc3aba145793ebc8ca8e78622f960b14cbfa781173"} Nov 29 08:06:28 crc kubenswrapper[4731]: I1129 08:06:28.495954 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-shxb6" Nov 29 08:06:28 crc kubenswrapper[4731]: I1129 08:06:28.496263 4731 scope.go:117] "RemoveContainer" containerID="9432ae2c19488d8d99c445ab27d505c1cd8f0680ce71fa51d4b9315dffd722c1" Nov 29 08:06:28 crc kubenswrapper[4731]: I1129 08:06:28.523261 4731 scope.go:117] "RemoveContainer" containerID="ba747959269a7c4ea43b793685c2c1691be8d066a59181d9f77d2aab6ccc6a33" Nov 29 08:06:28 crc kubenswrapper[4731]: I1129 08:06:28.534262 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-shxb6"] Nov 29 08:06:28 crc kubenswrapper[4731]: I1129 08:06:28.540548 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-shxb6"] Nov 29 08:06:28 crc kubenswrapper[4731]: I1129 08:06:28.555444 4731 scope.go:117] "RemoveContainer" containerID="2d095e24465a47a557f6b0649ead14b72707bddf93bcc853f5d32f78892c782c" Nov 29 08:06:29 crc kubenswrapper[4731]: I1129 08:06:29.818828 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="406d98ce-84d9-4d8a-8567-9b82123cf323" path="/var/lib/kubelet/pods/406d98ce-84d9-4d8a-8567-9b82123cf323/volumes" Nov 29 08:06:31 crc kubenswrapper[4731]: I1129 08:06:31.814904 4731 scope.go:117] "RemoveContainer" containerID="f90654bd884e42905839b8345b957763d059421b4be9cf70d23e4e405dad6e51" Nov 29 08:06:31 crc kubenswrapper[4731]: E1129 08:06:31.815789 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" Nov 29 08:06:42 crc kubenswrapper[4731]: I1129 08:06:42.807635 4731 scope.go:117] "RemoveContainer" containerID="f90654bd884e42905839b8345b957763d059421b4be9cf70d23e4e405dad6e51" Nov 29 08:06:42 crc kubenswrapper[4731]: E1129 08:06:42.808535 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" Nov 29 08:06:54 crc kubenswrapper[4731]: I1129 08:06:54.807959 4731 scope.go:117] "RemoveContainer" containerID="f90654bd884e42905839b8345b957763d059421b4be9cf70d23e4e405dad6e51" Nov 29 08:06:54 crc kubenswrapper[4731]: E1129 08:06:54.809242 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" Nov 29 08:07:08 crc kubenswrapper[4731]: I1129 08:07:08.806486 4731 scope.go:117] "RemoveContainer" containerID="f90654bd884e42905839b8345b957763d059421b4be9cf70d23e4e405dad6e51" Nov 29 08:07:08 crc kubenswrapper[4731]: E1129 08:07:08.807386 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" Nov 29 08:07:21 crc kubenswrapper[4731]: I1129 08:07:21.814043 4731 scope.go:117] "RemoveContainer" containerID="f90654bd884e42905839b8345b957763d059421b4be9cf70d23e4e405dad6e51" Nov 29 08:07:21 crc kubenswrapper[4731]: E1129 08:07:21.814785 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" Nov 29 08:07:34 crc kubenswrapper[4731]: I1129 08:07:34.807380 4731 scope.go:117] "RemoveContainer" containerID="f90654bd884e42905839b8345b957763d059421b4be9cf70d23e4e405dad6e51" Nov 29 08:07:34 crc kubenswrapper[4731]: E1129 08:07:34.808134 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" Nov 29 08:07:46 crc kubenswrapper[4731]: I1129 08:07:46.807932 4731 scope.go:117] "RemoveContainer" containerID="f90654bd884e42905839b8345b957763d059421b4be9cf70d23e4e405dad6e51" Nov 29 08:07:46 crc kubenswrapper[4731]: E1129 08:07:46.808736 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" Nov 29 08:07:58 crc kubenswrapper[4731]: I1129 08:07:58.370214 4731 generic.go:334] "Generic (PLEG): container finished" podID="a75de2e0-7593-49ac-bcf7-41705892c633" containerID="d40bb357639cec64226a0e845522fae24e142f391b1fc3539e63cd70f594f11f" exitCode=0 Nov 29 08:07:58 crc kubenswrapper[4731]: I1129 08:07:58.370307 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"a75de2e0-7593-49ac-bcf7-41705892c633","Type":"ContainerDied","Data":"d40bb357639cec64226a0e845522fae24e142f391b1fc3539e63cd70f594f11f"} Nov 29 08:07:59 crc kubenswrapper[4731]: I1129 08:07:59.795222 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 29 08:07:59 crc kubenswrapper[4731]: I1129 08:07:59.948685 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/a75de2e0-7593-49ac-bcf7-41705892c633-ca-certs\") pod \"a75de2e0-7593-49ac-bcf7-41705892c633\" (UID: \"a75de2e0-7593-49ac-bcf7-41705892c633\") " Nov 29 08:07:59 crc kubenswrapper[4731]: I1129 08:07:59.948893 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/a75de2e0-7593-49ac-bcf7-41705892c633-test-operator-ephemeral-temporary\") pod \"a75de2e0-7593-49ac-bcf7-41705892c633\" (UID: \"a75de2e0-7593-49ac-bcf7-41705892c633\") " Nov 29 08:07:59 crc kubenswrapper[4731]: I1129 08:07:59.948919 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod 
\"a75de2e0-7593-49ac-bcf7-41705892c633\" (UID: \"a75de2e0-7593-49ac-bcf7-41705892c633\") " Nov 29 08:07:59 crc kubenswrapper[4731]: I1129 08:07:59.948958 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4kkhh\" (UniqueName: \"kubernetes.io/projected/a75de2e0-7593-49ac-bcf7-41705892c633-kube-api-access-4kkhh\") pod \"a75de2e0-7593-49ac-bcf7-41705892c633\" (UID: \"a75de2e0-7593-49ac-bcf7-41705892c633\") " Nov 29 08:07:59 crc kubenswrapper[4731]: I1129 08:07:59.948986 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a75de2e0-7593-49ac-bcf7-41705892c633-ssh-key\") pod \"a75de2e0-7593-49ac-bcf7-41705892c633\" (UID: \"a75de2e0-7593-49ac-bcf7-41705892c633\") " Nov 29 08:07:59 crc kubenswrapper[4731]: I1129 08:07:59.949001 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a75de2e0-7593-49ac-bcf7-41705892c633-openstack-config-secret\") pod \"a75de2e0-7593-49ac-bcf7-41705892c633\" (UID: \"a75de2e0-7593-49ac-bcf7-41705892c633\") " Nov 29 08:07:59 crc kubenswrapper[4731]: I1129 08:07:59.949026 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a75de2e0-7593-49ac-bcf7-41705892c633-openstack-config\") pod \"a75de2e0-7593-49ac-bcf7-41705892c633\" (UID: \"a75de2e0-7593-49ac-bcf7-41705892c633\") " Nov 29 08:07:59 crc kubenswrapper[4731]: I1129 08:07:59.949054 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/a75de2e0-7593-49ac-bcf7-41705892c633-test-operator-ephemeral-workdir\") pod \"a75de2e0-7593-49ac-bcf7-41705892c633\" (UID: \"a75de2e0-7593-49ac-bcf7-41705892c633\") " Nov 29 08:07:59 crc kubenswrapper[4731]: I1129 08:07:59.949149 4731 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a75de2e0-7593-49ac-bcf7-41705892c633-config-data\") pod \"a75de2e0-7593-49ac-bcf7-41705892c633\" (UID: \"a75de2e0-7593-49ac-bcf7-41705892c633\") " Nov 29 08:07:59 crc kubenswrapper[4731]: I1129 08:07:59.950286 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a75de2e0-7593-49ac-bcf7-41705892c633-config-data" (OuterVolumeSpecName: "config-data") pod "a75de2e0-7593-49ac-bcf7-41705892c633" (UID: "a75de2e0-7593-49ac-bcf7-41705892c633"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:07:59 crc kubenswrapper[4731]: I1129 08:07:59.950462 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a75de2e0-7593-49ac-bcf7-41705892c633-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "a75de2e0-7593-49ac-bcf7-41705892c633" (UID: "a75de2e0-7593-49ac-bcf7-41705892c633"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:07:59 crc kubenswrapper[4731]: I1129 08:07:59.957530 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a75de2e0-7593-49ac-bcf7-41705892c633-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "a75de2e0-7593-49ac-bcf7-41705892c633" (UID: "a75de2e0-7593-49ac-bcf7-41705892c633"). InnerVolumeSpecName "test-operator-ephemeral-workdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:07:59 crc kubenswrapper[4731]: I1129 08:07:59.963750 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "test-operator-logs") pod "a75de2e0-7593-49ac-bcf7-41705892c633" (UID: "a75de2e0-7593-49ac-bcf7-41705892c633"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 29 08:07:59 crc kubenswrapper[4731]: I1129 08:07:59.963890 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a75de2e0-7593-49ac-bcf7-41705892c633-kube-api-access-4kkhh" (OuterVolumeSpecName: "kube-api-access-4kkhh") pod "a75de2e0-7593-49ac-bcf7-41705892c633" (UID: "a75de2e0-7593-49ac-bcf7-41705892c633"). InnerVolumeSpecName "kube-api-access-4kkhh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:07:59 crc kubenswrapper[4731]: I1129 08:07:59.982974 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a75de2e0-7593-49ac-bcf7-41705892c633-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "a75de2e0-7593-49ac-bcf7-41705892c633" (UID: "a75de2e0-7593-49ac-bcf7-41705892c633"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:07:59 crc kubenswrapper[4731]: I1129 08:07:59.983678 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a75de2e0-7593-49ac-bcf7-41705892c633-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "a75de2e0-7593-49ac-bcf7-41705892c633" (UID: "a75de2e0-7593-49ac-bcf7-41705892c633"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:07:59 crc kubenswrapper[4731]: I1129 08:07:59.998535 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a75de2e0-7593-49ac-bcf7-41705892c633-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "a75de2e0-7593-49ac-bcf7-41705892c633" (UID: "a75de2e0-7593-49ac-bcf7-41705892c633"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:08:00 crc kubenswrapper[4731]: I1129 08:08:00.011003 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a75de2e0-7593-49ac-bcf7-41705892c633-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "a75de2e0-7593-49ac-bcf7-41705892c633" (UID: "a75de2e0-7593-49ac-bcf7-41705892c633"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:08:00 crc kubenswrapper[4731]: I1129 08:08:00.051073 4731 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/a75de2e0-7593-49ac-bcf7-41705892c633-ca-certs\") on node \"crc\" DevicePath \"\"" Nov 29 08:08:00 crc kubenswrapper[4731]: I1129 08:08:00.051105 4731 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/a75de2e0-7593-49ac-bcf7-41705892c633-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Nov 29 08:08:00 crc kubenswrapper[4731]: I1129 08:08:00.051143 4731 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Nov 29 08:08:00 crc kubenswrapper[4731]: I1129 08:08:00.051154 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4kkhh\" (UniqueName: 
\"kubernetes.io/projected/a75de2e0-7593-49ac-bcf7-41705892c633-kube-api-access-4kkhh\") on node \"crc\" DevicePath \"\"" Nov 29 08:08:00 crc kubenswrapper[4731]: I1129 08:08:00.051162 4731 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a75de2e0-7593-49ac-bcf7-41705892c633-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 29 08:08:00 crc kubenswrapper[4731]: I1129 08:08:00.051174 4731 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a75de2e0-7593-49ac-bcf7-41705892c633-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Nov 29 08:08:00 crc kubenswrapper[4731]: I1129 08:08:00.051183 4731 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a75de2e0-7593-49ac-bcf7-41705892c633-openstack-config\") on node \"crc\" DevicePath \"\"" Nov 29 08:08:00 crc kubenswrapper[4731]: I1129 08:08:00.051192 4731 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/a75de2e0-7593-49ac-bcf7-41705892c633-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Nov 29 08:08:00 crc kubenswrapper[4731]: I1129 08:08:00.051201 4731 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a75de2e0-7593-49ac-bcf7-41705892c633-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 08:08:00 crc kubenswrapper[4731]: I1129 08:08:00.071704 4731 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Nov 29 08:08:00 crc kubenswrapper[4731]: I1129 08:08:00.152934 4731 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Nov 29 08:08:00 crc 
kubenswrapper[4731]: I1129 08:08:00.395161 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"a75de2e0-7593-49ac-bcf7-41705892c633","Type":"ContainerDied","Data":"908ec4c1364df4c05b0de62ccd2b762beb8942a34414a48b7655ed618667a7b1"} Nov 29 08:08:00 crc kubenswrapper[4731]: I1129 08:08:00.395215 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="908ec4c1364df4c05b0de62ccd2b762beb8942a34414a48b7655ed618667a7b1" Nov 29 08:08:00 crc kubenswrapper[4731]: I1129 08:08:00.395262 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 29 08:08:01 crc kubenswrapper[4731]: I1129 08:08:01.834606 4731 scope.go:117] "RemoveContainer" containerID="f90654bd884e42905839b8345b957763d059421b4be9cf70d23e4e405dad6e51" Nov 29 08:08:01 crc kubenswrapper[4731]: E1129 08:08:01.835769 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" Nov 29 08:08:11 crc kubenswrapper[4731]: I1129 08:08:11.077986 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 29 08:08:11 crc kubenswrapper[4731]: E1129 08:08:11.079921 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="406d98ce-84d9-4d8a-8567-9b82123cf323" containerName="extract-content" Nov 29 08:08:11 crc kubenswrapper[4731]: I1129 08:08:11.080021 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="406d98ce-84d9-4d8a-8567-9b82123cf323" containerName="extract-content" Nov 29 08:08:11 crc kubenswrapper[4731]: E1129 
08:08:11.080086 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66421d60-0239-47b1-9991-1c4bd06ff557" containerName="extract-content" Nov 29 08:08:11 crc kubenswrapper[4731]: I1129 08:08:11.080147 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="66421d60-0239-47b1-9991-1c4bd06ff557" containerName="extract-content" Nov 29 08:08:11 crc kubenswrapper[4731]: E1129 08:08:11.080211 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="406d98ce-84d9-4d8a-8567-9b82123cf323" containerName="extract-utilities" Nov 29 08:08:11 crc kubenswrapper[4731]: I1129 08:08:11.080271 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="406d98ce-84d9-4d8a-8567-9b82123cf323" containerName="extract-utilities" Nov 29 08:08:11 crc kubenswrapper[4731]: E1129 08:08:11.080347 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66421d60-0239-47b1-9991-1c4bd06ff557" containerName="registry-server" Nov 29 08:08:11 crc kubenswrapper[4731]: I1129 08:08:11.080415 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="66421d60-0239-47b1-9991-1c4bd06ff557" containerName="registry-server" Nov 29 08:08:11 crc kubenswrapper[4731]: E1129 08:08:11.080482 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="406d98ce-84d9-4d8a-8567-9b82123cf323" containerName="registry-server" Nov 29 08:08:11 crc kubenswrapper[4731]: I1129 08:08:11.080537 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="406d98ce-84d9-4d8a-8567-9b82123cf323" containerName="registry-server" Nov 29 08:08:11 crc kubenswrapper[4731]: E1129 08:08:11.080618 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a75de2e0-7593-49ac-bcf7-41705892c633" containerName="tempest-tests-tempest-tests-runner" Nov 29 08:08:11 crc kubenswrapper[4731]: I1129 08:08:11.080678 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="a75de2e0-7593-49ac-bcf7-41705892c633" containerName="tempest-tests-tempest-tests-runner" Nov 29 08:08:11 crc 
kubenswrapper[4731]: E1129 08:08:11.080740 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66421d60-0239-47b1-9991-1c4bd06ff557" containerName="extract-utilities" Nov 29 08:08:11 crc kubenswrapper[4731]: I1129 08:08:11.080819 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="66421d60-0239-47b1-9991-1c4bd06ff557" containerName="extract-utilities" Nov 29 08:08:11 crc kubenswrapper[4731]: I1129 08:08:11.081083 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="66421d60-0239-47b1-9991-1c4bd06ff557" containerName="registry-server" Nov 29 08:08:11 crc kubenswrapper[4731]: I1129 08:08:11.081171 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="406d98ce-84d9-4d8a-8567-9b82123cf323" containerName="registry-server" Nov 29 08:08:11 crc kubenswrapper[4731]: I1129 08:08:11.081235 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="a75de2e0-7593-49ac-bcf7-41705892c633" containerName="tempest-tests-tempest-tests-runner" Nov 29 08:08:11 crc kubenswrapper[4731]: I1129 08:08:11.082057 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 29 08:08:11 crc kubenswrapper[4731]: I1129 08:08:11.085506 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-x22gz" Nov 29 08:08:11 crc kubenswrapper[4731]: I1129 08:08:11.086290 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 29 08:08:11 crc kubenswrapper[4731]: I1129 08:08:11.194638 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"ffb1882c-64cb-477b-ba35-8159dc93cd30\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 29 08:08:11 crc kubenswrapper[4731]: I1129 08:08:11.194839 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qh8rg\" (UniqueName: \"kubernetes.io/projected/ffb1882c-64cb-477b-ba35-8159dc93cd30-kube-api-access-qh8rg\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"ffb1882c-64cb-477b-ba35-8159dc93cd30\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 29 08:08:11 crc kubenswrapper[4731]: I1129 08:08:11.297130 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qh8rg\" (UniqueName: \"kubernetes.io/projected/ffb1882c-64cb-477b-ba35-8159dc93cd30-kube-api-access-qh8rg\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"ffb1882c-64cb-477b-ba35-8159dc93cd30\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 29 08:08:11 crc kubenswrapper[4731]: I1129 08:08:11.297214 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage02-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"ffb1882c-64cb-477b-ba35-8159dc93cd30\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 29 08:08:11 crc kubenswrapper[4731]: I1129 08:08:11.298522 4731 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"ffb1882c-64cb-477b-ba35-8159dc93cd30\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 29 08:08:11 crc kubenswrapper[4731]: I1129 08:08:11.318968 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qh8rg\" (UniqueName: \"kubernetes.io/projected/ffb1882c-64cb-477b-ba35-8159dc93cd30-kube-api-access-qh8rg\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"ffb1882c-64cb-477b-ba35-8159dc93cd30\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 29 08:08:11 crc kubenswrapper[4731]: I1129 08:08:11.339069 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"ffb1882c-64cb-477b-ba35-8159dc93cd30\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 29 08:08:11 crc kubenswrapper[4731]: I1129 08:08:11.408722 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 29 08:08:11 crc kubenswrapper[4731]: I1129 08:08:11.867265 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 29 08:08:12 crc kubenswrapper[4731]: I1129 08:08:12.543258 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"ffb1882c-64cb-477b-ba35-8159dc93cd30","Type":"ContainerStarted","Data":"8802d0d2f5636c1ed456513db51b45787e04e8d4c9fbec988a8ec033b58d00bc"} Nov 29 08:08:13 crc kubenswrapper[4731]: I1129 08:08:13.807416 4731 scope.go:117] "RemoveContainer" containerID="f90654bd884e42905839b8345b957763d059421b4be9cf70d23e4e405dad6e51" Nov 29 08:08:13 crc kubenswrapper[4731]: E1129 08:08:13.807808 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" Nov 29 08:08:28 crc kubenswrapper[4731]: I1129 08:08:28.807323 4731 scope.go:117] "RemoveContainer" containerID="f90654bd884e42905839b8345b957763d059421b4be9cf70d23e4e405dad6e51" Nov 29 08:08:28 crc kubenswrapper[4731]: E1129 08:08:28.809853 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" Nov 29 08:08:37 crc 
kubenswrapper[4731]: E1129 08:08:37.432228 4731 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = writing blob: storing blob to file \"/var/tmp/container_images_storage3358434363/2\": happened during read: context canceled" image="quay.io/quay/busybox:latest" Nov 29 08:08:37 crc kubenswrapper[4731]: E1129 08:08:37.432935 4731 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:test-operator-logs-container,Image:quay.io/quay/busybox,Command:[sleep],Args:[infinity],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs-volume-0,ReadOnly:false,MountPath:/mnt/logs-tempest-tests-tempest-step-0,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qh8rg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-logs-pod-tempest-tempest-tests-tempest_openstack(ffb1882c-64cb-477b-ba35-8159dc93cd30): ErrImagePull: rpc error: code = Canceled desc = writing blob: storing blob to file \"/var/tmp/container_images_storage3358434363/2\": happened during read: context canceled" logger="UnhandledError" Nov 29 
08:08:37 crc kubenswrapper[4731]: E1129 08:08:37.434266 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ErrImagePull: \"rpc error: code = Canceled desc = writing blob: storing blob to file \\\"/var/tmp/container_images_storage3358434363/2\\\": happened during read: context canceled\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="ffb1882c-64cb-477b-ba35-8159dc93cd30" Nov 29 08:08:37 crc kubenswrapper[4731]: E1129 08:08:37.821774 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="ffb1882c-64cb-477b-ba35-8159dc93cd30" Nov 29 08:08:42 crc kubenswrapper[4731]: I1129 08:08:42.808673 4731 scope.go:117] "RemoveContainer" containerID="f90654bd884e42905839b8345b957763d059421b4be9cf70d23e4e405dad6e51" Nov 29 08:08:42 crc kubenswrapper[4731]: E1129 08:08:42.809535 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" Nov 29 08:08:55 crc kubenswrapper[4731]: I1129 08:08:55.806687 4731 scope.go:117] "RemoveContainer" containerID="f90654bd884e42905839b8345b957763d059421b4be9cf70d23e4e405dad6e51" Nov 29 08:08:55 crc kubenswrapper[4731]: E1129 08:08:55.807488 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" Nov 29 08:09:08 crc kubenswrapper[4731]: I1129 08:09:08.806683 4731 scope.go:117] "RemoveContainer" containerID="f90654bd884e42905839b8345b957763d059421b4be9cf70d23e4e405dad6e51" Nov 29 08:09:08 crc kubenswrapper[4731]: E1129 08:09:08.807501 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" Nov 29 08:09:20 crc kubenswrapper[4731]: I1129 08:09:20.808171 4731 scope.go:117] "RemoveContainer" containerID="f90654bd884e42905839b8345b957763d059421b4be9cf70d23e4e405dad6e51" Nov 29 08:09:20 crc kubenswrapper[4731]: E1129 08:09:20.808964 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" Nov 29 08:09:21 crc kubenswrapper[4731]: E1129 08:09:21.808083 4731 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = writing blob: storing blob to file \"/var/tmp/container_images_storage1156961081/2\": happened during read: context canceled" image="quay.io/quay/busybox:latest" Nov 29 08:09:21 crc kubenswrapper[4731]: E1129 08:09:21.808345 4731 kuberuntime_manager.go:1274] "Unhandled Error" 
err="container &Container{Name:test-operator-logs-container,Image:quay.io/quay/busybox,Command:[sleep],Args:[infinity],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs-volume-0,ReadOnly:false,MountPath:/mnt/logs-tempest-tests-tempest-step-0,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qh8rg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-logs-pod-tempest-tempest-tests-tempest_openstack(ffb1882c-64cb-477b-ba35-8159dc93cd30): ErrImagePull: rpc error: code = Canceled desc = writing blob: storing blob to file \"/var/tmp/container_images_storage1156961081/2\": happened during read: context canceled" logger="UnhandledError" Nov 29 08:09:21 crc kubenswrapper[4731]: E1129 08:09:21.810227 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ErrImagePull: \"rpc error: code = Canceled desc = writing blob: storing blob to file \\\"/var/tmp/container_images_storage1156961081/2\\\": happened during read: context canceled\"" 
pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="ffb1882c-64cb-477b-ba35-8159dc93cd30" Nov 29 08:09:31 crc kubenswrapper[4731]: I1129 08:09:31.812836 4731 scope.go:117] "RemoveContainer" containerID="f90654bd884e42905839b8345b957763d059421b4be9cf70d23e4e405dad6e51" Nov 29 08:09:31 crc kubenswrapper[4731]: E1129 08:09:31.813640 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" Nov 29 08:09:36 crc kubenswrapper[4731]: E1129 08:09:36.809099 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="ffb1882c-64cb-477b-ba35-8159dc93cd30" Nov 29 08:09:43 crc kubenswrapper[4731]: I1129 08:09:43.806835 4731 scope.go:117] "RemoveContainer" containerID="f90654bd884e42905839b8345b957763d059421b4be9cf70d23e4e405dad6e51" Nov 29 08:09:43 crc kubenswrapper[4731]: E1129 08:09:43.807629 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" Nov 29 08:09:55 crc kubenswrapper[4731]: I1129 08:09:55.807017 4731 scope.go:117] "RemoveContainer" 
containerID="f90654bd884e42905839b8345b957763d059421b4be9cf70d23e4e405dad6e51" Nov 29 08:09:55 crc kubenswrapper[4731]: E1129 08:09:55.807795 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" Nov 29 08:10:08 crc kubenswrapper[4731]: I1129 08:10:08.807424 4731 scope.go:117] "RemoveContainer" containerID="f90654bd884e42905839b8345b957763d059421b4be9cf70d23e4e405dad6e51" Nov 29 08:10:08 crc kubenswrapper[4731]: E1129 08:10:08.808329 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" Nov 29 08:10:20 crc kubenswrapper[4731]: I1129 08:10:20.807379 4731 scope.go:117] "RemoveContainer" containerID="f90654bd884e42905839b8345b957763d059421b4be9cf70d23e4e405dad6e51" Nov 29 08:10:20 crc kubenswrapper[4731]: E1129 08:10:20.808106 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" Nov 29 08:10:34 crc kubenswrapper[4731]: I1129 08:10:34.807908 4731 scope.go:117] 
"RemoveContainer" containerID="f90654bd884e42905839b8345b957763d059421b4be9cf70d23e4e405dad6e51" Nov 29 08:10:34 crc kubenswrapper[4731]: E1129 08:10:34.808902 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" Nov 29 08:10:43 crc kubenswrapper[4731]: E1129 08:10:43.292864 4731 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = writing blob: storing blob to file \"/var/tmp/container_images_storage1675832763/2\": happened during read: context canceled" image="quay.io/quay/busybox:latest" Nov 29 08:10:43 crc kubenswrapper[4731]: E1129 08:10:43.293680 4731 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:test-operator-logs-container,Image:quay.io/quay/busybox,Command:[sleep],Args:[infinity],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs-volume-0,ReadOnly:false,MountPath:/mnt/logs-tempest-tests-tempest-step-0,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qh8rg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-logs-pod-tempest-tempest-tests-tempest_openstack(ffb1882c-64cb-477b-ba35-8159dc93cd30): ErrImagePull: rpc error: code = Canceled desc = writing blob: storing blob to file \"/var/tmp/container_images_storage1675832763/2\": happened during read: context canceled" logger="UnhandledError" Nov 29 08:10:43 crc kubenswrapper[4731]: E1129 08:10:43.294944 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ErrImagePull: \"rpc error: code = Canceled desc = writing blob: storing blob to file \\\"/var/tmp/container_images_storage1675832763/2\\\": happened during read: context canceled\"" 
pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="ffb1882c-64cb-477b-ba35-8159dc93cd30" Nov 29 08:10:47 crc kubenswrapper[4731]: I1129 08:10:47.807737 4731 scope.go:117] "RemoveContainer" containerID="f90654bd884e42905839b8345b957763d059421b4be9cf70d23e4e405dad6e51" Nov 29 08:10:47 crc kubenswrapper[4731]: E1129 08:10:47.809167 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" Nov 29 08:10:52 crc kubenswrapper[4731]: I1129 08:10:52.697445 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-k2r5l/must-gather-z6cjf"] Nov 29 08:10:52 crc kubenswrapper[4731]: I1129 08:10:52.700143 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-k2r5l/must-gather-z6cjf" Nov 29 08:10:52 crc kubenswrapper[4731]: I1129 08:10:52.703121 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-k2r5l"/"default-dockercfg-k8vhl" Nov 29 08:10:52 crc kubenswrapper[4731]: I1129 08:10:52.703547 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-k2r5l"/"openshift-service-ca.crt" Nov 29 08:10:52 crc kubenswrapper[4731]: I1129 08:10:52.703650 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-k2r5l"/"kube-root-ca.crt" Nov 29 08:10:52 crc kubenswrapper[4731]: I1129 08:10:52.711452 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-k2r5l/must-gather-z6cjf"] Nov 29 08:10:52 crc kubenswrapper[4731]: I1129 08:10:52.803251 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppzm7\" (UniqueName: \"kubernetes.io/projected/b93e136b-6969-4394-ba9b-1ad5d10a9bed-kube-api-access-ppzm7\") pod \"must-gather-z6cjf\" (UID: \"b93e136b-6969-4394-ba9b-1ad5d10a9bed\") " pod="openshift-must-gather-k2r5l/must-gather-z6cjf" Nov 29 08:10:52 crc kubenswrapper[4731]: I1129 08:10:52.803615 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b93e136b-6969-4394-ba9b-1ad5d10a9bed-must-gather-output\") pod \"must-gather-z6cjf\" (UID: \"b93e136b-6969-4394-ba9b-1ad5d10a9bed\") " pod="openshift-must-gather-k2r5l/must-gather-z6cjf" Nov 29 08:10:52 crc kubenswrapper[4731]: I1129 08:10:52.904357 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppzm7\" (UniqueName: \"kubernetes.io/projected/b93e136b-6969-4394-ba9b-1ad5d10a9bed-kube-api-access-ppzm7\") pod \"must-gather-z6cjf\" (UID: \"b93e136b-6969-4394-ba9b-1ad5d10a9bed\") " 
pod="openshift-must-gather-k2r5l/must-gather-z6cjf" Nov 29 08:10:52 crc kubenswrapper[4731]: I1129 08:10:52.904678 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b93e136b-6969-4394-ba9b-1ad5d10a9bed-must-gather-output\") pod \"must-gather-z6cjf\" (UID: \"b93e136b-6969-4394-ba9b-1ad5d10a9bed\") " pod="openshift-must-gather-k2r5l/must-gather-z6cjf" Nov 29 08:10:52 crc kubenswrapper[4731]: I1129 08:10:52.905237 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b93e136b-6969-4394-ba9b-1ad5d10a9bed-must-gather-output\") pod \"must-gather-z6cjf\" (UID: \"b93e136b-6969-4394-ba9b-1ad5d10a9bed\") " pod="openshift-must-gather-k2r5l/must-gather-z6cjf" Nov 29 08:10:52 crc kubenswrapper[4731]: I1129 08:10:52.924382 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppzm7\" (UniqueName: \"kubernetes.io/projected/b93e136b-6969-4394-ba9b-1ad5d10a9bed-kube-api-access-ppzm7\") pod \"must-gather-z6cjf\" (UID: \"b93e136b-6969-4394-ba9b-1ad5d10a9bed\") " pod="openshift-must-gather-k2r5l/must-gather-z6cjf" Nov 29 08:10:53 crc kubenswrapper[4731]: I1129 08:10:53.033537 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-k2r5l/must-gather-z6cjf" Nov 29 08:10:53 crc kubenswrapper[4731]: I1129 08:10:53.497969 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-k2r5l/must-gather-z6cjf"] Nov 29 08:10:54 crc kubenswrapper[4731]: I1129 08:10:54.207271 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-k2r5l/must-gather-z6cjf" event={"ID":"b93e136b-6969-4394-ba9b-1ad5d10a9bed","Type":"ContainerStarted","Data":"953d67148dd35219c1e5d52adcc043fdad7e83da7b2055d8f026b799a3d60149"} Nov 29 08:10:57 crc kubenswrapper[4731]: E1129 08:10:57.810359 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="ffb1882c-64cb-477b-ba35-8159dc93cd30" Nov 29 08:10:58 crc kubenswrapper[4731]: I1129 08:10:58.252348 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-k2r5l/must-gather-z6cjf" event={"ID":"b93e136b-6969-4394-ba9b-1ad5d10a9bed","Type":"ContainerStarted","Data":"1fc794e5456f86b8cb87d66e0f979fc3dab3ccf623c8e086a3afb89ebd6ea360"} Nov 29 08:10:58 crc kubenswrapper[4731]: I1129 08:10:58.252734 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-k2r5l/must-gather-z6cjf" event={"ID":"b93e136b-6969-4394-ba9b-1ad5d10a9bed","Type":"ContainerStarted","Data":"9a8872f125f45702e34f6c1d602eca97d3088ba91ca6ff999b66a76e8bd6de55"} Nov 29 08:10:59 crc kubenswrapper[4731]: I1129 08:10:59.282792 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-k2r5l/must-gather-z6cjf" podStartSLOduration=3.259349367 podStartE2EDuration="7.282764429s" podCreationTimestamp="2025-11-29 08:10:52 +0000 UTC" firstStartedPulling="2025-11-29 08:10:53.501092215 +0000 UTC m=+3892.391453318" 
lastFinishedPulling="2025-11-29 08:10:57.524507237 +0000 UTC m=+3896.414868380" observedRunningTime="2025-11-29 08:10:59.277174639 +0000 UTC m=+3898.167535762" watchObservedRunningTime="2025-11-29 08:10:59.282764429 +0000 UTC m=+3898.173125532" Nov 29 08:11:01 crc kubenswrapper[4731]: I1129 08:11:01.380424 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-k2r5l/crc-debug-8snzg"] Nov 29 08:11:01 crc kubenswrapper[4731]: I1129 08:11:01.382083 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-k2r5l/crc-debug-8snzg" Nov 29 08:11:01 crc kubenswrapper[4731]: I1129 08:11:01.495423 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n475v\" (UniqueName: \"kubernetes.io/projected/e59a46f7-81e3-47ad-a2fd-3ceba0ca7a5f-kube-api-access-n475v\") pod \"crc-debug-8snzg\" (UID: \"e59a46f7-81e3-47ad-a2fd-3ceba0ca7a5f\") " pod="openshift-must-gather-k2r5l/crc-debug-8snzg" Nov 29 08:11:01 crc kubenswrapper[4731]: I1129 08:11:01.495761 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e59a46f7-81e3-47ad-a2fd-3ceba0ca7a5f-host\") pod \"crc-debug-8snzg\" (UID: \"e59a46f7-81e3-47ad-a2fd-3ceba0ca7a5f\") " pod="openshift-must-gather-k2r5l/crc-debug-8snzg" Nov 29 08:11:01 crc kubenswrapper[4731]: I1129 08:11:01.598122 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e59a46f7-81e3-47ad-a2fd-3ceba0ca7a5f-host\") pod \"crc-debug-8snzg\" (UID: \"e59a46f7-81e3-47ad-a2fd-3ceba0ca7a5f\") " pod="openshift-must-gather-k2r5l/crc-debug-8snzg" Nov 29 08:11:01 crc kubenswrapper[4731]: I1129 08:11:01.598276 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e59a46f7-81e3-47ad-a2fd-3ceba0ca7a5f-host\") pod 
\"crc-debug-8snzg\" (UID: \"e59a46f7-81e3-47ad-a2fd-3ceba0ca7a5f\") " pod="openshift-must-gather-k2r5l/crc-debug-8snzg" Nov 29 08:11:01 crc kubenswrapper[4731]: I1129 08:11:01.598920 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n475v\" (UniqueName: \"kubernetes.io/projected/e59a46f7-81e3-47ad-a2fd-3ceba0ca7a5f-kube-api-access-n475v\") pod \"crc-debug-8snzg\" (UID: \"e59a46f7-81e3-47ad-a2fd-3ceba0ca7a5f\") " pod="openshift-must-gather-k2r5l/crc-debug-8snzg" Nov 29 08:11:01 crc kubenswrapper[4731]: I1129 08:11:01.625624 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n475v\" (UniqueName: \"kubernetes.io/projected/e59a46f7-81e3-47ad-a2fd-3ceba0ca7a5f-kube-api-access-n475v\") pod \"crc-debug-8snzg\" (UID: \"e59a46f7-81e3-47ad-a2fd-3ceba0ca7a5f\") " pod="openshift-must-gather-k2r5l/crc-debug-8snzg" Nov 29 08:11:01 crc kubenswrapper[4731]: I1129 08:11:01.701654 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-k2r5l/crc-debug-8snzg" Nov 29 08:11:01 crc kubenswrapper[4731]: I1129 08:11:01.808470 4731 scope.go:117] "RemoveContainer" containerID="f90654bd884e42905839b8345b957763d059421b4be9cf70d23e4e405dad6e51" Nov 29 08:11:01 crc kubenswrapper[4731]: E1129 08:11:01.809425 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" Nov 29 08:11:02 crc kubenswrapper[4731]: I1129 08:11:02.291694 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-k2r5l/crc-debug-8snzg" event={"ID":"e59a46f7-81e3-47ad-a2fd-3ceba0ca7a5f","Type":"ContainerStarted","Data":"74cf7cf79316bcc5131d06f45976bdb911b4a08a3a53f1dd440a53671624407b"} Nov 29 08:11:09 crc kubenswrapper[4731]: E1129 08:11:09.810308 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="ffb1882c-64cb-477b-ba35-8159dc93cd30" Nov 29 08:11:14 crc kubenswrapper[4731]: I1129 08:11:14.445091 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-k2r5l/crc-debug-8snzg" event={"ID":"e59a46f7-81e3-47ad-a2fd-3ceba0ca7a5f","Type":"ContainerStarted","Data":"f08c9dd9b5ed63769aff51927b41f7fdc1de8b332fd29db2cfa5eac0accb7c52"} Nov 29 08:11:14 crc kubenswrapper[4731]: I1129 08:11:14.482304 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-k2r5l/crc-debug-8snzg" podStartSLOduration=2.54450261 
podStartE2EDuration="13.482286739s" podCreationTimestamp="2025-11-29 08:11:01 +0000 UTC" firstStartedPulling="2025-11-29 08:11:01.755466644 +0000 UTC m=+3900.645827747" lastFinishedPulling="2025-11-29 08:11:12.693250773 +0000 UTC m=+3911.583611876" observedRunningTime="2025-11-29 08:11:14.466803394 +0000 UTC m=+3913.357164497" watchObservedRunningTime="2025-11-29 08:11:14.482286739 +0000 UTC m=+3913.372647842" Nov 29 08:11:14 crc kubenswrapper[4731]: I1129 08:11:14.807944 4731 scope.go:117] "RemoveContainer" containerID="f90654bd884e42905839b8345b957763d059421b4be9cf70d23e4e405dad6e51" Nov 29 08:11:15 crc kubenswrapper[4731]: I1129 08:11:15.464466 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" event={"ID":"2302dbb7-38db-4752-a5d0-2d055da3aec3","Type":"ContainerStarted","Data":"607d6adc71fd03ad8796c2f2c18f0bffcc7e369862c2d387eb5552ab82f9242f"} Nov 29 08:11:24 crc kubenswrapper[4731]: I1129 08:11:24.809481 4731 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 29 08:11:26 crc kubenswrapper[4731]: I1129 08:11:26.568057 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"ffb1882c-64cb-477b-ba35-8159dc93cd30","Type":"ContainerStarted","Data":"24a8e8563b532aeca50feb921b57037cec3f32372b652c16f4c4bb6987b39184"} Nov 29 08:11:26 crc kubenswrapper[4731]: I1129 08:11:26.591276 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=1.494318147 podStartE2EDuration="3m15.591251138s" podCreationTimestamp="2025-11-29 08:08:11 +0000 UTC" firstStartedPulling="2025-11-29 08:08:11.880378722 +0000 UTC m=+3730.770739825" lastFinishedPulling="2025-11-29 08:11:25.977311713 +0000 UTC m=+3924.867672816" observedRunningTime="2025-11-29 08:11:26.587257543 +0000 UTC 
m=+3925.477618646" watchObservedRunningTime="2025-11-29 08:11:26.591251138 +0000 UTC m=+3925.481612241" Nov 29 08:11:52 crc kubenswrapper[4731]: I1129 08:11:52.911672 4731 generic.go:334] "Generic (PLEG): container finished" podID="e59a46f7-81e3-47ad-a2fd-3ceba0ca7a5f" containerID="f08c9dd9b5ed63769aff51927b41f7fdc1de8b332fd29db2cfa5eac0accb7c52" exitCode=0 Nov 29 08:11:52 crc kubenswrapper[4731]: I1129 08:11:52.911765 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-k2r5l/crc-debug-8snzg" event={"ID":"e59a46f7-81e3-47ad-a2fd-3ceba0ca7a5f","Type":"ContainerDied","Data":"f08c9dd9b5ed63769aff51927b41f7fdc1de8b332fd29db2cfa5eac0accb7c52"} Nov 29 08:11:54 crc kubenswrapper[4731]: I1129 08:11:54.048459 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-k2r5l/crc-debug-8snzg" Nov 29 08:11:54 crc kubenswrapper[4731]: I1129 08:11:54.075841 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-k2r5l/crc-debug-8snzg"] Nov 29 08:11:54 crc kubenswrapper[4731]: I1129 08:11:54.083527 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-k2r5l/crc-debug-8snzg"] Nov 29 08:11:54 crc kubenswrapper[4731]: I1129 08:11:54.166962 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e59a46f7-81e3-47ad-a2fd-3ceba0ca7a5f-host\") pod \"e59a46f7-81e3-47ad-a2fd-3ceba0ca7a5f\" (UID: \"e59a46f7-81e3-47ad-a2fd-3ceba0ca7a5f\") " Nov 29 08:11:54 crc kubenswrapper[4731]: I1129 08:11:54.167136 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e59a46f7-81e3-47ad-a2fd-3ceba0ca7a5f-host" (OuterVolumeSpecName: "host") pod "e59a46f7-81e3-47ad-a2fd-3ceba0ca7a5f" (UID: "e59a46f7-81e3-47ad-a2fd-3ceba0ca7a5f"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 08:11:54 crc kubenswrapper[4731]: I1129 08:11:54.167647 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n475v\" (UniqueName: \"kubernetes.io/projected/e59a46f7-81e3-47ad-a2fd-3ceba0ca7a5f-kube-api-access-n475v\") pod \"e59a46f7-81e3-47ad-a2fd-3ceba0ca7a5f\" (UID: \"e59a46f7-81e3-47ad-a2fd-3ceba0ca7a5f\") " Nov 29 08:11:54 crc kubenswrapper[4731]: I1129 08:11:54.168009 4731 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e59a46f7-81e3-47ad-a2fd-3ceba0ca7a5f-host\") on node \"crc\" DevicePath \"\"" Nov 29 08:11:54 crc kubenswrapper[4731]: I1129 08:11:54.175247 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e59a46f7-81e3-47ad-a2fd-3ceba0ca7a5f-kube-api-access-n475v" (OuterVolumeSpecName: "kube-api-access-n475v") pod "e59a46f7-81e3-47ad-a2fd-3ceba0ca7a5f" (UID: "e59a46f7-81e3-47ad-a2fd-3ceba0ca7a5f"). InnerVolumeSpecName "kube-api-access-n475v". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:11:54 crc kubenswrapper[4731]: I1129 08:11:54.269833 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n475v\" (UniqueName: \"kubernetes.io/projected/e59a46f7-81e3-47ad-a2fd-3ceba0ca7a5f-kube-api-access-n475v\") on node \"crc\" DevicePath \"\"" Nov 29 08:11:54 crc kubenswrapper[4731]: I1129 08:11:54.936590 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="74cf7cf79316bcc5131d06f45976bdb911b4a08a3a53f1dd440a53671624407b" Nov 29 08:11:54 crc kubenswrapper[4731]: I1129 08:11:54.936642 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-k2r5l/crc-debug-8snzg" Nov 29 08:11:55 crc kubenswrapper[4731]: I1129 08:11:55.267151 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-k2r5l/crc-debug-p6hb7"] Nov 29 08:11:55 crc kubenswrapper[4731]: E1129 08:11:55.267590 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e59a46f7-81e3-47ad-a2fd-3ceba0ca7a5f" containerName="container-00" Nov 29 08:11:55 crc kubenswrapper[4731]: I1129 08:11:55.267602 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="e59a46f7-81e3-47ad-a2fd-3ceba0ca7a5f" containerName="container-00" Nov 29 08:11:55 crc kubenswrapper[4731]: I1129 08:11:55.267797 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="e59a46f7-81e3-47ad-a2fd-3ceba0ca7a5f" containerName="container-00" Nov 29 08:11:55 crc kubenswrapper[4731]: I1129 08:11:55.268431 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-k2r5l/crc-debug-p6hb7" Nov 29 08:11:55 crc kubenswrapper[4731]: I1129 08:11:55.390429 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5xs8\" (UniqueName: \"kubernetes.io/projected/e46198d7-64b1-40f2-9d56-378652e9d300-kube-api-access-m5xs8\") pod \"crc-debug-p6hb7\" (UID: \"e46198d7-64b1-40f2-9d56-378652e9d300\") " pod="openshift-must-gather-k2r5l/crc-debug-p6hb7" Nov 29 08:11:55 crc kubenswrapper[4731]: I1129 08:11:55.391199 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e46198d7-64b1-40f2-9d56-378652e9d300-host\") pod \"crc-debug-p6hb7\" (UID: \"e46198d7-64b1-40f2-9d56-378652e9d300\") " pod="openshift-must-gather-k2r5l/crc-debug-p6hb7" Nov 29 08:11:55 crc kubenswrapper[4731]: I1129 08:11:55.493545 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5xs8\" (UniqueName: 
\"kubernetes.io/projected/e46198d7-64b1-40f2-9d56-378652e9d300-kube-api-access-m5xs8\") pod \"crc-debug-p6hb7\" (UID: \"e46198d7-64b1-40f2-9d56-378652e9d300\") " pod="openshift-must-gather-k2r5l/crc-debug-p6hb7" Nov 29 08:11:55 crc kubenswrapper[4731]: I1129 08:11:55.493813 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e46198d7-64b1-40f2-9d56-378652e9d300-host\") pod \"crc-debug-p6hb7\" (UID: \"e46198d7-64b1-40f2-9d56-378652e9d300\") " pod="openshift-must-gather-k2r5l/crc-debug-p6hb7" Nov 29 08:11:55 crc kubenswrapper[4731]: I1129 08:11:55.494032 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e46198d7-64b1-40f2-9d56-378652e9d300-host\") pod \"crc-debug-p6hb7\" (UID: \"e46198d7-64b1-40f2-9d56-378652e9d300\") " pod="openshift-must-gather-k2r5l/crc-debug-p6hb7" Nov 29 08:11:55 crc kubenswrapper[4731]: I1129 08:11:55.520630 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5xs8\" (UniqueName: \"kubernetes.io/projected/e46198d7-64b1-40f2-9d56-378652e9d300-kube-api-access-m5xs8\") pod \"crc-debug-p6hb7\" (UID: \"e46198d7-64b1-40f2-9d56-378652e9d300\") " pod="openshift-must-gather-k2r5l/crc-debug-p6hb7" Nov 29 08:11:55 crc kubenswrapper[4731]: I1129 08:11:55.587626 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-k2r5l/crc-debug-p6hb7" Nov 29 08:11:55 crc kubenswrapper[4731]: I1129 08:11:55.817943 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e59a46f7-81e3-47ad-a2fd-3ceba0ca7a5f" path="/var/lib/kubelet/pods/e59a46f7-81e3-47ad-a2fd-3ceba0ca7a5f/volumes" Nov 29 08:11:55 crc kubenswrapper[4731]: I1129 08:11:55.951283 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-k2r5l/crc-debug-p6hb7" event={"ID":"e46198d7-64b1-40f2-9d56-378652e9d300","Type":"ContainerStarted","Data":"5dd956e8eeb2eb805c098e2a4147b222085641877b13844357668b3b339ef735"} Nov 29 08:11:55 crc kubenswrapper[4731]: I1129 08:11:55.952897 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-k2r5l/crc-debug-p6hb7" event={"ID":"e46198d7-64b1-40f2-9d56-378652e9d300","Type":"ContainerStarted","Data":"223a6c176c89de4b8d0f1fb73a9580fbdffaf0318dbe148c1975c4aec7654fad"} Nov 29 08:11:55 crc kubenswrapper[4731]: I1129 08:11:55.973506 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-k2r5l/crc-debug-p6hb7" podStartSLOduration=0.973481523 podStartE2EDuration="973.481523ms" podCreationTimestamp="2025-11-29 08:11:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 08:11:55.96571047 +0000 UTC m=+3954.856071583" watchObservedRunningTime="2025-11-29 08:11:55.973481523 +0000 UTC m=+3954.863842626" Nov 29 08:11:56 crc kubenswrapper[4731]: I1129 08:11:56.964597 4731 generic.go:334] "Generic (PLEG): container finished" podID="e46198d7-64b1-40f2-9d56-378652e9d300" containerID="5dd956e8eeb2eb805c098e2a4147b222085641877b13844357668b3b339ef735" exitCode=0 Nov 29 08:11:56 crc kubenswrapper[4731]: I1129 08:11:56.964654 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-k2r5l/crc-debug-p6hb7" 
event={"ID":"e46198d7-64b1-40f2-9d56-378652e9d300","Type":"ContainerDied","Data":"5dd956e8eeb2eb805c098e2a4147b222085641877b13844357668b3b339ef735"} Nov 29 08:11:58 crc kubenswrapper[4731]: I1129 08:11:58.071327 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-k2r5l/crc-debug-p6hb7" Nov 29 08:11:58 crc kubenswrapper[4731]: I1129 08:11:58.104185 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-k2r5l/crc-debug-p6hb7"] Nov 29 08:11:58 crc kubenswrapper[4731]: I1129 08:11:58.112592 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-k2r5l/crc-debug-p6hb7"] Nov 29 08:11:58 crc kubenswrapper[4731]: I1129 08:11:58.258677 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5xs8\" (UniqueName: \"kubernetes.io/projected/e46198d7-64b1-40f2-9d56-378652e9d300-kube-api-access-m5xs8\") pod \"e46198d7-64b1-40f2-9d56-378652e9d300\" (UID: \"e46198d7-64b1-40f2-9d56-378652e9d300\") " Nov 29 08:11:58 crc kubenswrapper[4731]: I1129 08:11:58.258854 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e46198d7-64b1-40f2-9d56-378652e9d300-host\") pod \"e46198d7-64b1-40f2-9d56-378652e9d300\" (UID: \"e46198d7-64b1-40f2-9d56-378652e9d300\") " Nov 29 08:11:58 crc kubenswrapper[4731]: I1129 08:11:58.259088 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e46198d7-64b1-40f2-9d56-378652e9d300-host" (OuterVolumeSpecName: "host") pod "e46198d7-64b1-40f2-9d56-378652e9d300" (UID: "e46198d7-64b1-40f2-9d56-378652e9d300"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 08:11:58 crc kubenswrapper[4731]: I1129 08:11:58.259737 4731 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e46198d7-64b1-40f2-9d56-378652e9d300-host\") on node \"crc\" DevicePath \"\"" Nov 29 08:11:58 crc kubenswrapper[4731]: I1129 08:11:58.264148 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e46198d7-64b1-40f2-9d56-378652e9d300-kube-api-access-m5xs8" (OuterVolumeSpecName: "kube-api-access-m5xs8") pod "e46198d7-64b1-40f2-9d56-378652e9d300" (UID: "e46198d7-64b1-40f2-9d56-378652e9d300"). InnerVolumeSpecName "kube-api-access-m5xs8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:11:58 crc kubenswrapper[4731]: I1129 08:11:58.361709 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m5xs8\" (UniqueName: \"kubernetes.io/projected/e46198d7-64b1-40f2-9d56-378652e9d300-kube-api-access-m5xs8\") on node \"crc\" DevicePath \"\"" Nov 29 08:11:58 crc kubenswrapper[4731]: I1129 08:11:58.987661 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="223a6c176c89de4b8d0f1fb73a9580fbdffaf0318dbe148c1975c4aec7654fad" Nov 29 08:11:58 crc kubenswrapper[4731]: I1129 08:11:58.987746 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-k2r5l/crc-debug-p6hb7" Nov 29 08:11:59 crc kubenswrapper[4731]: I1129 08:11:59.327148 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-k2r5l/crc-debug-tgxss"] Nov 29 08:11:59 crc kubenswrapper[4731]: E1129 08:11:59.328014 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e46198d7-64b1-40f2-9d56-378652e9d300" containerName="container-00" Nov 29 08:11:59 crc kubenswrapper[4731]: I1129 08:11:59.328034 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="e46198d7-64b1-40f2-9d56-378652e9d300" containerName="container-00" Nov 29 08:11:59 crc kubenswrapper[4731]: I1129 08:11:59.328341 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="e46198d7-64b1-40f2-9d56-378652e9d300" containerName="container-00" Nov 29 08:11:59 crc kubenswrapper[4731]: I1129 08:11:59.329267 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-k2r5l/crc-debug-tgxss" Nov 29 08:11:59 crc kubenswrapper[4731]: I1129 08:11:59.486307 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/325f8f26-1b3f-4ccd-b6f4-dbfa68125621-host\") pod \"crc-debug-tgxss\" (UID: \"325f8f26-1b3f-4ccd-b6f4-dbfa68125621\") " pod="openshift-must-gather-k2r5l/crc-debug-tgxss" Nov 29 08:11:59 crc kubenswrapper[4731]: I1129 08:11:59.486617 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sls8x\" (UniqueName: \"kubernetes.io/projected/325f8f26-1b3f-4ccd-b6f4-dbfa68125621-kube-api-access-sls8x\") pod \"crc-debug-tgxss\" (UID: \"325f8f26-1b3f-4ccd-b6f4-dbfa68125621\") " pod="openshift-must-gather-k2r5l/crc-debug-tgxss" Nov 29 08:11:59 crc kubenswrapper[4731]: I1129 08:11:59.588876 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/325f8f26-1b3f-4ccd-b6f4-dbfa68125621-host\") pod \"crc-debug-tgxss\" (UID: \"325f8f26-1b3f-4ccd-b6f4-dbfa68125621\") " pod="openshift-must-gather-k2r5l/crc-debug-tgxss" Nov 29 08:11:59 crc kubenswrapper[4731]: I1129 08:11:59.588976 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sls8x\" (UniqueName: \"kubernetes.io/projected/325f8f26-1b3f-4ccd-b6f4-dbfa68125621-kube-api-access-sls8x\") pod \"crc-debug-tgxss\" (UID: \"325f8f26-1b3f-4ccd-b6f4-dbfa68125621\") " pod="openshift-must-gather-k2r5l/crc-debug-tgxss" Nov 29 08:11:59 crc kubenswrapper[4731]: I1129 08:11:59.589097 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/325f8f26-1b3f-4ccd-b6f4-dbfa68125621-host\") pod \"crc-debug-tgxss\" (UID: \"325f8f26-1b3f-4ccd-b6f4-dbfa68125621\") " pod="openshift-must-gather-k2r5l/crc-debug-tgxss" Nov 29 08:11:59 crc kubenswrapper[4731]: I1129 08:11:59.616306 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sls8x\" (UniqueName: \"kubernetes.io/projected/325f8f26-1b3f-4ccd-b6f4-dbfa68125621-kube-api-access-sls8x\") pod \"crc-debug-tgxss\" (UID: \"325f8f26-1b3f-4ccd-b6f4-dbfa68125621\") " pod="openshift-must-gather-k2r5l/crc-debug-tgxss" Nov 29 08:11:59 crc kubenswrapper[4731]: I1129 08:11:59.656670 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-k2r5l/crc-debug-tgxss" Nov 29 08:11:59 crc kubenswrapper[4731]: W1129 08:11:59.699533 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod325f8f26_1b3f_4ccd_b6f4_dbfa68125621.slice/crio-a0ea371944e75d134897b17ca00c4c621807d8eab541e03e9fdcbc91313d7f18 WatchSource:0}: Error finding container a0ea371944e75d134897b17ca00c4c621807d8eab541e03e9fdcbc91313d7f18: Status 404 returned error can't find the container with id a0ea371944e75d134897b17ca00c4c621807d8eab541e03e9fdcbc91313d7f18 Nov 29 08:11:59 crc kubenswrapper[4731]: I1129 08:11:59.818455 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e46198d7-64b1-40f2-9d56-378652e9d300" path="/var/lib/kubelet/pods/e46198d7-64b1-40f2-9d56-378652e9d300/volumes" Nov 29 08:12:00 crc kubenswrapper[4731]: I1129 08:12:00.009387 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-k2r5l/crc-debug-tgxss" event={"ID":"325f8f26-1b3f-4ccd-b6f4-dbfa68125621","Type":"ContainerStarted","Data":"b8053c95caefec838828d5c8e773bf0256416f3f3f1a2ba087de7d6f28d44628"} Nov 29 08:12:00 crc kubenswrapper[4731]: I1129 08:12:00.009743 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-k2r5l/crc-debug-tgxss" event={"ID":"325f8f26-1b3f-4ccd-b6f4-dbfa68125621","Type":"ContainerStarted","Data":"a0ea371944e75d134897b17ca00c4c621807d8eab541e03e9fdcbc91313d7f18"} Nov 29 08:12:00 crc kubenswrapper[4731]: I1129 08:12:00.056298 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-k2r5l/crc-debug-tgxss"] Nov 29 08:12:00 crc kubenswrapper[4731]: I1129 08:12:00.066193 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-k2r5l/crc-debug-tgxss"] Nov 29 08:12:01 crc kubenswrapper[4731]: I1129 08:12:01.021132 4731 generic.go:334] "Generic (PLEG): container finished" 
podID="325f8f26-1b3f-4ccd-b6f4-dbfa68125621" containerID="b8053c95caefec838828d5c8e773bf0256416f3f3f1a2ba087de7d6f28d44628" exitCode=0 Nov 29 08:12:01 crc kubenswrapper[4731]: I1129 08:12:01.131530 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-k2r5l/crc-debug-tgxss" Nov 29 08:12:01 crc kubenswrapper[4731]: I1129 08:12:01.320361 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sls8x\" (UniqueName: \"kubernetes.io/projected/325f8f26-1b3f-4ccd-b6f4-dbfa68125621-kube-api-access-sls8x\") pod \"325f8f26-1b3f-4ccd-b6f4-dbfa68125621\" (UID: \"325f8f26-1b3f-4ccd-b6f4-dbfa68125621\") " Nov 29 08:12:01 crc kubenswrapper[4731]: I1129 08:12:01.320608 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/325f8f26-1b3f-4ccd-b6f4-dbfa68125621-host\") pod \"325f8f26-1b3f-4ccd-b6f4-dbfa68125621\" (UID: \"325f8f26-1b3f-4ccd-b6f4-dbfa68125621\") " Nov 29 08:12:01 crc kubenswrapper[4731]: I1129 08:12:01.320936 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/325f8f26-1b3f-4ccd-b6f4-dbfa68125621-host" (OuterVolumeSpecName: "host") pod "325f8f26-1b3f-4ccd-b6f4-dbfa68125621" (UID: "325f8f26-1b3f-4ccd-b6f4-dbfa68125621"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 08:12:01 crc kubenswrapper[4731]: I1129 08:12:01.321360 4731 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/325f8f26-1b3f-4ccd-b6f4-dbfa68125621-host\") on node \"crc\" DevicePath \"\"" Nov 29 08:12:01 crc kubenswrapper[4731]: I1129 08:12:01.333921 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/325f8f26-1b3f-4ccd-b6f4-dbfa68125621-kube-api-access-sls8x" (OuterVolumeSpecName: "kube-api-access-sls8x") pod "325f8f26-1b3f-4ccd-b6f4-dbfa68125621" (UID: "325f8f26-1b3f-4ccd-b6f4-dbfa68125621"). InnerVolumeSpecName "kube-api-access-sls8x". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:12:01 crc kubenswrapper[4731]: I1129 08:12:01.423753 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sls8x\" (UniqueName: \"kubernetes.io/projected/325f8f26-1b3f-4ccd-b6f4-dbfa68125621-kube-api-access-sls8x\") on node \"crc\" DevicePath \"\"" Nov 29 08:12:01 crc kubenswrapper[4731]: I1129 08:12:01.850740 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="325f8f26-1b3f-4ccd-b6f4-dbfa68125621" path="/var/lib/kubelet/pods/325f8f26-1b3f-4ccd-b6f4-dbfa68125621/volumes" Nov 29 08:12:02 crc kubenswrapper[4731]: I1129 08:12:02.031901 4731 scope.go:117] "RemoveContainer" containerID="b8053c95caefec838828d5c8e773bf0256416f3f3f1a2ba087de7d6f28d44628" Nov 29 08:12:02 crc kubenswrapper[4731]: I1129 08:12:02.031958 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-k2r5l/crc-debug-tgxss" Nov 29 08:12:05 crc kubenswrapper[4731]: I1129 08:12:05.515817 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7nksg"] Nov 29 08:12:05 crc kubenswrapper[4731]: E1129 08:12:05.516823 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="325f8f26-1b3f-4ccd-b6f4-dbfa68125621" containerName="container-00" Nov 29 08:12:05 crc kubenswrapper[4731]: I1129 08:12:05.516845 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="325f8f26-1b3f-4ccd-b6f4-dbfa68125621" containerName="container-00" Nov 29 08:12:05 crc kubenswrapper[4731]: I1129 08:12:05.517114 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="325f8f26-1b3f-4ccd-b6f4-dbfa68125621" containerName="container-00" Nov 29 08:12:05 crc kubenswrapper[4731]: I1129 08:12:05.518908 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7nksg" Nov 29 08:12:05 crc kubenswrapper[4731]: I1129 08:12:05.530991 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7nksg"] Nov 29 08:12:05 crc kubenswrapper[4731]: I1129 08:12:05.706503 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab03601a-89d2-4595-9d4d-7c09d7522a8f-utilities\") pod \"redhat-operators-7nksg\" (UID: \"ab03601a-89d2-4595-9d4d-7c09d7522a8f\") " pod="openshift-marketplace/redhat-operators-7nksg" Nov 29 08:12:05 crc kubenswrapper[4731]: I1129 08:12:05.706656 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab03601a-89d2-4595-9d4d-7c09d7522a8f-catalog-content\") pod \"redhat-operators-7nksg\" (UID: \"ab03601a-89d2-4595-9d4d-7c09d7522a8f\") " 
pod="openshift-marketplace/redhat-operators-7nksg" Nov 29 08:12:05 crc kubenswrapper[4731]: I1129 08:12:05.706704 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8n8mk\" (UniqueName: \"kubernetes.io/projected/ab03601a-89d2-4595-9d4d-7c09d7522a8f-kube-api-access-8n8mk\") pod \"redhat-operators-7nksg\" (UID: \"ab03601a-89d2-4595-9d4d-7c09d7522a8f\") " pod="openshift-marketplace/redhat-operators-7nksg" Nov 29 08:12:05 crc kubenswrapper[4731]: I1129 08:12:05.807952 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab03601a-89d2-4595-9d4d-7c09d7522a8f-utilities\") pod \"redhat-operators-7nksg\" (UID: \"ab03601a-89d2-4595-9d4d-7c09d7522a8f\") " pod="openshift-marketplace/redhat-operators-7nksg" Nov 29 08:12:05 crc kubenswrapper[4731]: I1129 08:12:05.808050 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab03601a-89d2-4595-9d4d-7c09d7522a8f-catalog-content\") pod \"redhat-operators-7nksg\" (UID: \"ab03601a-89d2-4595-9d4d-7c09d7522a8f\") " pod="openshift-marketplace/redhat-operators-7nksg" Nov 29 08:12:05 crc kubenswrapper[4731]: I1129 08:12:05.808081 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8n8mk\" (UniqueName: \"kubernetes.io/projected/ab03601a-89d2-4595-9d4d-7c09d7522a8f-kube-api-access-8n8mk\") pod \"redhat-operators-7nksg\" (UID: \"ab03601a-89d2-4595-9d4d-7c09d7522a8f\") " pod="openshift-marketplace/redhat-operators-7nksg" Nov 29 08:12:05 crc kubenswrapper[4731]: I1129 08:12:05.808590 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab03601a-89d2-4595-9d4d-7c09d7522a8f-utilities\") pod \"redhat-operators-7nksg\" (UID: \"ab03601a-89d2-4595-9d4d-7c09d7522a8f\") " 
pod="openshift-marketplace/redhat-operators-7nksg" Nov 29 08:12:05 crc kubenswrapper[4731]: I1129 08:12:05.808595 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab03601a-89d2-4595-9d4d-7c09d7522a8f-catalog-content\") pod \"redhat-operators-7nksg\" (UID: \"ab03601a-89d2-4595-9d4d-7c09d7522a8f\") " pod="openshift-marketplace/redhat-operators-7nksg" Nov 29 08:12:05 crc kubenswrapper[4731]: I1129 08:12:05.833266 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8n8mk\" (UniqueName: \"kubernetes.io/projected/ab03601a-89d2-4595-9d4d-7c09d7522a8f-kube-api-access-8n8mk\") pod \"redhat-operators-7nksg\" (UID: \"ab03601a-89d2-4595-9d4d-7c09d7522a8f\") " pod="openshift-marketplace/redhat-operators-7nksg" Nov 29 08:12:05 crc kubenswrapper[4731]: I1129 08:12:05.854423 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7nksg" Nov 29 08:12:06 crc kubenswrapper[4731]: I1129 08:12:06.331900 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7nksg"] Nov 29 08:12:07 crc kubenswrapper[4731]: I1129 08:12:07.090724 4731 generic.go:334] "Generic (PLEG): container finished" podID="ab03601a-89d2-4595-9d4d-7c09d7522a8f" containerID="b35635c40fbd88ec6642eb061b5ba4f66d93acfa11f90d01a7d0ead76a0a9ac5" exitCode=0 Nov 29 08:12:07 crc kubenswrapper[4731]: I1129 08:12:07.090819 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7nksg" event={"ID":"ab03601a-89d2-4595-9d4d-7c09d7522a8f","Type":"ContainerDied","Data":"b35635c40fbd88ec6642eb061b5ba4f66d93acfa11f90d01a7d0ead76a0a9ac5"} Nov 29 08:12:07 crc kubenswrapper[4731]: I1129 08:12:07.091133 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7nksg" 
event={"ID":"ab03601a-89d2-4595-9d4d-7c09d7522a8f","Type":"ContainerStarted","Data":"6fd5163b226b2921c70d5b0e6cc1b4bf6c8d64f34bbf412d0f1fa338dd23dcd8"} Nov 29 08:12:09 crc kubenswrapper[4731]: I1129 08:12:09.111194 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7nksg" event={"ID":"ab03601a-89d2-4595-9d4d-7c09d7522a8f","Type":"ContainerStarted","Data":"e67da6619b29381bc57627af27249cbd6b775e060047ecef66d8fd98a82d9bbc"} Nov 29 08:12:11 crc kubenswrapper[4731]: I1129 08:12:11.132507 4731 generic.go:334] "Generic (PLEG): container finished" podID="ab03601a-89d2-4595-9d4d-7c09d7522a8f" containerID="e67da6619b29381bc57627af27249cbd6b775e060047ecef66d8fd98a82d9bbc" exitCode=0 Nov 29 08:12:11 crc kubenswrapper[4731]: I1129 08:12:11.132605 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7nksg" event={"ID":"ab03601a-89d2-4595-9d4d-7c09d7522a8f","Type":"ContainerDied","Data":"e67da6619b29381bc57627af27249cbd6b775e060047ecef66d8fd98a82d9bbc"} Nov 29 08:12:12 crc kubenswrapper[4731]: I1129 08:12:12.143900 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7nksg" event={"ID":"ab03601a-89d2-4595-9d4d-7c09d7522a8f","Type":"ContainerStarted","Data":"7be0f267c47820b53e9bbd6afbeae1f6041e87aa12c97279f85115eefd0d1eca"} Nov 29 08:12:12 crc kubenswrapper[4731]: I1129 08:12:12.168470 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7nksg" podStartSLOduration=2.500741819 podStartE2EDuration="7.168421061s" podCreationTimestamp="2025-11-29 08:12:05 +0000 UTC" firstStartedPulling="2025-11-29 08:12:07.092783678 +0000 UTC m=+3965.983144781" lastFinishedPulling="2025-11-29 08:12:11.76046293 +0000 UTC m=+3970.650824023" observedRunningTime="2025-11-29 08:12:12.161162543 +0000 UTC m=+3971.051523656" watchObservedRunningTime="2025-11-29 08:12:12.168421061 +0000 UTC m=+3971.058782164" 
Nov 29 08:12:15 crc kubenswrapper[4731]: I1129 08:12:15.855440 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7nksg" Nov 29 08:12:15 crc kubenswrapper[4731]: I1129 08:12:15.856023 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7nksg" Nov 29 08:12:16 crc kubenswrapper[4731]: I1129 08:12:16.920237 4731 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7nksg" podUID="ab03601a-89d2-4595-9d4d-7c09d7522a8f" containerName="registry-server" probeResult="failure" output=< Nov 29 08:12:16 crc kubenswrapper[4731]: timeout: failed to connect service ":50051" within 1s Nov 29 08:12:16 crc kubenswrapper[4731]: > Nov 29 08:12:17 crc kubenswrapper[4731]: I1129 08:12:17.253922 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6c9b78b974-grr5d_7f362df0-515f-4aa7-980b-8c418dadcc66/barbican-api/0.log" Nov 29 08:12:17 crc kubenswrapper[4731]: I1129 08:12:17.436603 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6c9b78b974-grr5d_7f362df0-515f-4aa7-980b-8c418dadcc66/barbican-api-log/0.log" Nov 29 08:12:17 crc kubenswrapper[4731]: I1129 08:12:17.509763 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-c78b8bc9d-8prwv_f9dcf660-e92e-44b6-b940-97d0cccdc187/barbican-keystone-listener/0.log" Nov 29 08:12:17 crc kubenswrapper[4731]: I1129 08:12:17.571729 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-c78b8bc9d-8prwv_f9dcf660-e92e-44b6-b940-97d0cccdc187/barbican-keystone-listener-log/0.log" Nov 29 08:12:17 crc kubenswrapper[4731]: I1129 08:12:17.726037 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-64c66558f5-qcqwg_46e3b820-e4ea-46a6-9a98-944bf7718c56/barbican-worker/0.log" Nov 29 08:12:17 crc 
kubenswrapper[4731]: I1129 08:12:17.730473 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-64c66558f5-qcqwg_46e3b820-e4ea-46a6-9a98-944bf7718c56/barbican-worker-log/0.log" Nov 29 08:12:17 crc kubenswrapper[4731]: I1129 08:12:17.974663 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-lqthj_20126f8e-6e2a-4035-862f-ab9c789511a0/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Nov 29 08:12:18 crc kubenswrapper[4731]: I1129 08:12:18.017344 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_6556ead0-9306-43c4-bf74-52f688285fd5/ceilometer-central-agent/0.log" Nov 29 08:12:18 crc kubenswrapper[4731]: I1129 08:12:18.175153 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_6556ead0-9306-43c4-bf74-52f688285fd5/ceilometer-notification-agent/0.log" Nov 29 08:12:18 crc kubenswrapper[4731]: I1129 08:12:18.273502 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_6556ead0-9306-43c4-bf74-52f688285fd5/proxy-httpd/0.log" Nov 29 08:12:18 crc kubenswrapper[4731]: I1129 08:12:18.279694 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_6556ead0-9306-43c4-bf74-52f688285fd5/sg-core/0.log" Nov 29 08:12:18 crc kubenswrapper[4731]: I1129 08:12:18.478790 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_585d388f-8639-4f82-815c-f500254f0169/cinder-api/0.log" Nov 29 08:12:18 crc kubenswrapper[4731]: I1129 08:12:18.521910 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_585d388f-8639-4f82-815c-f500254f0169/cinder-api-log/0.log" Nov 29 08:12:18 crc kubenswrapper[4731]: I1129 08:12:18.629235 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_15f82353-6105-4eb5-b791-dadbd7e2171f/cinder-scheduler/0.log" Nov 29 08:12:18 crc 
kubenswrapper[4731]: I1129 08:12:18.734807 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_15f82353-6105-4eb5-b791-dadbd7e2171f/probe/0.log" Nov 29 08:12:18 crc kubenswrapper[4731]: I1129 08:12:18.822874 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-9n48n_2ff0b9fa-bd65-4c7a-af24-2e4bd4ce5045/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 29 08:12:19 crc kubenswrapper[4731]: I1129 08:12:19.398161 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-78c64bc9c5-k72xg_94d1ff36-d633-4055-be35-a5c572c64f68/init/0.log" Nov 29 08:12:19 crc kubenswrapper[4731]: I1129 08:12:19.416864 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-q9r4z_25edb0d1-a8a5-4577-9d0e-fb10ffc4bda5/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 29 08:12:19 crc kubenswrapper[4731]: I1129 08:12:19.600938 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-78c64bc9c5-k72xg_94d1ff36-d633-4055-be35-a5c572c64f68/init/0.log" Nov 29 08:12:19 crc kubenswrapper[4731]: I1129 08:12:19.671990 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-78c64bc9c5-k72xg_94d1ff36-d633-4055-be35-a5c572c64f68/dnsmasq-dns/0.log" Nov 29 08:12:19 crc kubenswrapper[4731]: I1129 08:12:19.726591 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-47scr_00ca821e-c39a-48c3-8318-2a09e190bdcf/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Nov 29 08:12:19 crc kubenswrapper[4731]: I1129 08:12:19.927232 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_af63168b-e97f-4284-bdd4-d2547810144c/glance-log/0.log" Nov 29 08:12:19 crc kubenswrapper[4731]: I1129 08:12:19.964431 4731 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_af63168b-e97f-4284-bdd4-d2547810144c/glance-httpd/0.log" Nov 29 08:12:20 crc kubenswrapper[4731]: I1129 08:12:20.112152 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_18bdebee-183b-4f16-a806-f6f6437424c4/glance-httpd/0.log" Nov 29 08:12:20 crc kubenswrapper[4731]: I1129 08:12:20.163129 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_18bdebee-183b-4f16-a806-f6f6437424c4/glance-log/0.log" Nov 29 08:12:20 crc kubenswrapper[4731]: I1129 08:12:20.295881 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5fcdbcfb48-gmbcm_3afcf821-ab23-4e13-96e7-2b178314bece/horizon/0.log" Nov 29 08:12:20 crc kubenswrapper[4731]: I1129 08:12:20.581108 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5fcdbcfb48-gmbcm_3afcf821-ab23-4e13-96e7-2b178314bece/horizon-log/0.log" Nov 29 08:12:20 crc kubenswrapper[4731]: I1129 08:12:20.697003 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-r9zqx_5e08a4ae-50ef-4cf9-97a8-bc09c1896afb/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Nov 29 08:12:20 crc kubenswrapper[4731]: I1129 08:12:20.877829 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-6lr56_3cf8ce99-01a6-4737-8f9f-c0cd0c47a8ae/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 29 08:12:21 crc kubenswrapper[4731]: I1129 08:12:21.361154 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-74694f6999-x4dvv_2b8bf35f-55bb-445b-b99f-5a418577d482/keystone-api/0.log" Nov 29 08:12:21 crc kubenswrapper[4731]: I1129 08:12:21.408439 4731 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_keystone-cron-29406721-q45c2_bb66de5b-b040-4821-bf22-c234630fa81e/keystone-cron/0.log" Nov 29 08:12:21 crc kubenswrapper[4731]: I1129 08:12:21.597191 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_67ebcb38-078b-4f76-b700-e77cb1525f7d/kube-state-metrics/0.log" Nov 29 08:12:21 crc kubenswrapper[4731]: I1129 08:12:21.666927 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-f2lhd_d2581ba6-0d37-40f0-b458-e9e1d1071485/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Nov 29 08:12:22 crc kubenswrapper[4731]: I1129 08:12:22.024258 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-558fbdd7b9-2w7vs_961096e3-fc62-4b26-a9de-1036f08b0fa0/neutron-httpd/0.log" Nov 29 08:12:22 crc kubenswrapper[4731]: I1129 08:12:22.159442 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-558fbdd7b9-2w7vs_961096e3-fc62-4b26-a9de-1036f08b0fa0/neutron-api/0.log" Nov 29 08:12:22 crc kubenswrapper[4731]: I1129 08:12:22.194788 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-fbzsb_5d5a510b-d31d-4cf3-91dd-5e8c0066d6ed/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Nov 29 08:12:22 crc kubenswrapper[4731]: I1129 08:12:22.707270 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_066ba538-6f57-4399-bf84-d4f2aa5c605b/nova-api-log/0.log" Nov 29 08:12:22 crc kubenswrapper[4731]: I1129 08:12:22.718860 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_f76f9f40-3876-408d-80c6-46ae26b7c10a/nova-cell0-conductor-conductor/0.log" Nov 29 08:12:23 crc kubenswrapper[4731]: I1129 08:12:23.035211 4731 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-cell1-conductor-0_162c1c6d-89a4-4eec-bfdb-dd972cd06f0e/nova-cell1-conductor-conductor/0.log" Nov 29 08:12:23 crc kubenswrapper[4731]: I1129 08:12:23.085781 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_6cdcd6a9-b7ab-4fdd-8b5a-6ff557ef3d28/nova-cell1-novncproxy-novncproxy/0.log" Nov 29 08:12:23 crc kubenswrapper[4731]: I1129 08:12:23.125796 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_066ba538-6f57-4399-bf84-d4f2aa5c605b/nova-api-api/0.log" Nov 29 08:12:23 crc kubenswrapper[4731]: I1129 08:12:23.346709 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-ttnn5_6cd13760-b9b5-4fa6-ab05-773d91d97346/nova-edpm-deployment-openstack-edpm-ipam/0.log" Nov 29 08:12:23 crc kubenswrapper[4731]: I1129 08:12:23.510098 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_2a4aa61c-0dc5-4284-aeb0-5dcc2da1b843/nova-metadata-log/0.log" Nov 29 08:12:23 crc kubenswrapper[4731]: I1129 08:12:23.879158 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_a94d1b1c-5dfb-429f-ae00-3082948d94d7/nova-scheduler-scheduler/0.log" Nov 29 08:12:24 crc kubenswrapper[4731]: I1129 08:12:24.174893 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_ce9f78b3-187a-4988-a15f-fd5b81e07ab4/mysql-bootstrap/0.log" Nov 29 08:12:24 crc kubenswrapper[4731]: I1129 08:12:24.400369 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_ce9f78b3-187a-4988-a15f-fd5b81e07ab4/mysql-bootstrap/0.log" Nov 29 08:12:24 crc kubenswrapper[4731]: I1129 08:12:24.443467 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_ce9f78b3-187a-4988-a15f-fd5b81e07ab4/galera/0.log" Nov 29 08:12:24 crc kubenswrapper[4731]: I1129 08:12:24.583021 
4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_c3b26ece-7b12-4cd4-befd-4d42fa5b55fc/mysql-bootstrap/0.log" Nov 29 08:12:24 crc kubenswrapper[4731]: I1129 08:12:24.886582 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_c3b26ece-7b12-4cd4-befd-4d42fa5b55fc/galera/0.log" Nov 29 08:12:24 crc kubenswrapper[4731]: I1129 08:12:24.895731 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_c3b26ece-7b12-4cd4-befd-4d42fa5b55fc/mysql-bootstrap/0.log" Nov 29 08:12:24 crc kubenswrapper[4731]: I1129 08:12:24.949023 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_2a4aa61c-0dc5-4284-aeb0-5dcc2da1b843/nova-metadata-metadata/0.log" Nov 29 08:12:25 crc kubenswrapper[4731]: I1129 08:12:25.088212 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_f31d074a-cf1e-488e-9816-8cc25ab12d7f/openstackclient/0.log" Nov 29 08:12:25 crc kubenswrapper[4731]: I1129 08:12:25.253591 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-hdf9m_3e584c0b-7ce0-45b8-b6a9-60ee16752970/ovn-controller/0.log" Nov 29 08:12:25 crc kubenswrapper[4731]: I1129 08:12:25.342328 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-ltcz5_875e6ad8-8a38-4943-8a25-47761929dfc7/openstack-network-exporter/0.log" Nov 29 08:12:25 crc kubenswrapper[4731]: I1129 08:12:25.490998 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-slgbx_a63e01ef-be39-42d9-83e2-a4750d6eb8ba/ovsdb-server-init/0.log" Nov 29 08:12:25 crc kubenswrapper[4731]: I1129 08:12:25.793952 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-slgbx_a63e01ef-be39-42d9-83e2-a4750d6eb8ba/ovsdb-server/0.log" Nov 29 08:12:25 crc kubenswrapper[4731]: I1129 08:12:25.806409 4731 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-slgbx_a63e01ef-be39-42d9-83e2-a4750d6eb8ba/ovsdb-server-init/0.log" Nov 29 08:12:25 crc kubenswrapper[4731]: I1129 08:12:25.866239 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-slgbx_a63e01ef-be39-42d9-83e2-a4750d6eb8ba/ovs-vswitchd/0.log" Nov 29 08:12:25 crc kubenswrapper[4731]: I1129 08:12:25.919473 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7nksg" Nov 29 08:12:25 crc kubenswrapper[4731]: I1129 08:12:25.997097 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7nksg" Nov 29 08:12:26 crc kubenswrapper[4731]: I1129 08:12:26.066600 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-9dtdg_8999dce1-1af7-47d6-95cc-a19af53ce54a/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Nov 29 08:12:26 crc kubenswrapper[4731]: I1129 08:12:26.125643 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_850b98c3-0079-4cae-a69a-1c0ee903ba53/openstack-network-exporter/0.log" Nov 29 08:12:26 crc kubenswrapper[4731]: I1129 08:12:26.170087 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7nksg"] Nov 29 08:12:26 crc kubenswrapper[4731]: I1129 08:12:26.218537 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_850b98c3-0079-4cae-a69a-1c0ee903ba53/ovn-northd/0.log" Nov 29 08:12:26 crc kubenswrapper[4731]: I1129 08:12:26.369687 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_380dda58-7342-44d2-a0e1-b4ac78363de8/openstack-network-exporter/0.log" Nov 29 08:12:26 crc kubenswrapper[4731]: I1129 08:12:26.439715 4731 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovsdbserver-nb-0_380dda58-7342-44d2-a0e1-b4ac78363de8/ovsdbserver-nb/0.log" Nov 29 08:12:26 crc kubenswrapper[4731]: I1129 08:12:26.617088 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_f78aa165-6d13-419f-b13b-8382a111ded8/ovsdbserver-sb/0.log" Nov 29 08:12:26 crc kubenswrapper[4731]: I1129 08:12:26.673170 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_f78aa165-6d13-419f-b13b-8382a111ded8/openstack-network-exporter/0.log" Nov 29 08:12:26 crc kubenswrapper[4731]: I1129 08:12:26.776330 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-bdbcc6468-k4knd_db509226-a015-4c26-b8a8-80421cc7d661/placement-api/0.log" Nov 29 08:12:26 crc kubenswrapper[4731]: I1129 08:12:26.922914 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-bdbcc6468-k4knd_db509226-a015-4c26-b8a8-80421cc7d661/placement-log/0.log" Nov 29 08:12:26 crc kubenswrapper[4731]: I1129 08:12:26.973781 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_b079dae9-1f5d-4057-ae41-4273aaabeab8/setup-container/0.log" Nov 29 08:12:27 crc kubenswrapper[4731]: I1129 08:12:27.150544 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_b079dae9-1f5d-4057-ae41-4273aaabeab8/rabbitmq/0.log" Nov 29 08:12:27 crc kubenswrapper[4731]: I1129 08:12:27.213994 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_b079dae9-1f5d-4057-ae41-4273aaabeab8/setup-container/0.log" Nov 29 08:12:27 crc kubenswrapper[4731]: I1129 08:12:27.308325 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7nksg" podUID="ab03601a-89d2-4595-9d4d-7c09d7522a8f" containerName="registry-server" containerID="cri-o://7be0f267c47820b53e9bbd6afbeae1f6041e87aa12c97279f85115eefd0d1eca" 
gracePeriod=2 Nov 29 08:12:27 crc kubenswrapper[4731]: I1129 08:12:27.341860 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_96683d18-3f61-486f-bc69-5a253f2538cc/setup-container/0.log" Nov 29 08:12:27 crc kubenswrapper[4731]: I1129 08:12:27.714847 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_96683d18-3f61-486f-bc69-5a253f2538cc/setup-container/0.log" Nov 29 08:12:27 crc kubenswrapper[4731]: I1129 08:12:27.722614 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_96683d18-3f61-486f-bc69-5a253f2538cc/rabbitmq/0.log" Nov 29 08:12:27 crc kubenswrapper[4731]: I1129 08:12:27.775063 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-rx8mb_350ab6c4-0e67-42b4-8f98-ee4c319198e6/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 29 08:12:27 crc kubenswrapper[4731]: I1129 08:12:27.985051 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-2tjcs_6552a695-5be9-443d-a962-95ac029df99a/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Nov 29 08:12:28 crc kubenswrapper[4731]: I1129 08:12:28.243309 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-mcsvw_98cb2e73-615e-483e-bd99-7a86354f29a0/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Nov 29 08:12:28 crc kubenswrapper[4731]: I1129 08:12:28.333965 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-c2dpb_33ad76bc-c3c7-47e2-9c32-77dd670cf832/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 29 08:12:28 crc kubenswrapper[4731]: I1129 08:12:28.351834 4731 generic.go:334] "Generic (PLEG): container finished" podID="ab03601a-89d2-4595-9d4d-7c09d7522a8f" 
containerID="7be0f267c47820b53e9bbd6afbeae1f6041e87aa12c97279f85115eefd0d1eca" exitCode=0 Nov 29 08:12:28 crc kubenswrapper[4731]: I1129 08:12:28.352105 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7nksg" event={"ID":"ab03601a-89d2-4595-9d4d-7c09d7522a8f","Type":"ContainerDied","Data":"7be0f267c47820b53e9bbd6afbeae1f6041e87aa12c97279f85115eefd0d1eca"} Nov 29 08:12:28 crc kubenswrapper[4731]: I1129 08:12:28.515501 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-lmbs9_6ade5882-f94b-4588-887c-5510346c10cc/ssh-known-hosts-edpm-deployment/0.log" Nov 29 08:12:28 crc kubenswrapper[4731]: I1129 08:12:28.671584 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-749fbbbcf-hcvbs_0703b6cb-649d-4744-a400-6b551fe79fc2/proxy-server/0.log" Nov 29 08:12:28 crc kubenswrapper[4731]: I1129 08:12:28.781724 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-749fbbbcf-hcvbs_0703b6cb-649d-4744-a400-6b551fe79fc2/proxy-httpd/0.log" Nov 29 08:12:29 crc kubenswrapper[4731]: I1129 08:12:29.285500 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-w9lrv_38241274-4656-4558-a456-29d74208d47d/swift-ring-rebalance/0.log" Nov 29 08:12:29 crc kubenswrapper[4731]: I1129 08:12:29.328187 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7nksg" Nov 29 08:12:29 crc kubenswrapper[4731]: I1129 08:12:29.369547 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_739c0608-5471-42a6-b062-4355cd1894a0/account-auditor/0.log" Nov 29 08:12:29 crc kubenswrapper[4731]: I1129 08:12:29.376744 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7nksg" event={"ID":"ab03601a-89d2-4595-9d4d-7c09d7522a8f","Type":"ContainerDied","Data":"6fd5163b226b2921c70d5b0e6cc1b4bf6c8d64f34bbf412d0f1fa338dd23dcd8"} Nov 29 08:12:29 crc kubenswrapper[4731]: I1129 08:12:29.376808 4731 scope.go:117] "RemoveContainer" containerID="7be0f267c47820b53e9bbd6afbeae1f6041e87aa12c97279f85115eefd0d1eca" Nov 29 08:12:29 crc kubenswrapper[4731]: I1129 08:12:29.377008 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7nksg" Nov 29 08:12:29 crc kubenswrapper[4731]: I1129 08:12:29.413053 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8n8mk\" (UniqueName: \"kubernetes.io/projected/ab03601a-89d2-4595-9d4d-7c09d7522a8f-kube-api-access-8n8mk\") pod \"ab03601a-89d2-4595-9d4d-7c09d7522a8f\" (UID: \"ab03601a-89d2-4595-9d4d-7c09d7522a8f\") " Nov 29 08:12:29 crc kubenswrapper[4731]: I1129 08:12:29.413093 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab03601a-89d2-4595-9d4d-7c09d7522a8f-catalog-content\") pod \"ab03601a-89d2-4595-9d4d-7c09d7522a8f\" (UID: \"ab03601a-89d2-4595-9d4d-7c09d7522a8f\") " Nov 29 08:12:29 crc kubenswrapper[4731]: I1129 08:12:29.413172 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab03601a-89d2-4595-9d4d-7c09d7522a8f-utilities\") pod \"ab03601a-89d2-4595-9d4d-7c09d7522a8f\" 
(UID: \"ab03601a-89d2-4595-9d4d-7c09d7522a8f\") " Nov 29 08:12:29 crc kubenswrapper[4731]: I1129 08:12:29.414610 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab03601a-89d2-4595-9d4d-7c09d7522a8f-utilities" (OuterVolumeSpecName: "utilities") pod "ab03601a-89d2-4595-9d4d-7c09d7522a8f" (UID: "ab03601a-89d2-4595-9d4d-7c09d7522a8f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:12:29 crc kubenswrapper[4731]: I1129 08:12:29.427093 4731 scope.go:117] "RemoveContainer" containerID="e67da6619b29381bc57627af27249cbd6b775e060047ecef66d8fd98a82d9bbc" Nov 29 08:12:29 crc kubenswrapper[4731]: I1129 08:12:29.442124 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab03601a-89d2-4595-9d4d-7c09d7522a8f-kube-api-access-8n8mk" (OuterVolumeSpecName: "kube-api-access-8n8mk") pod "ab03601a-89d2-4595-9d4d-7c09d7522a8f" (UID: "ab03601a-89d2-4595-9d4d-7c09d7522a8f"). InnerVolumeSpecName "kube-api-access-8n8mk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:12:29 crc kubenswrapper[4731]: I1129 08:12:29.508545 4731 scope.go:117] "RemoveContainer" containerID="b35635c40fbd88ec6642eb061b5ba4f66d93acfa11f90d01a7d0ead76a0a9ac5" Nov 29 08:12:29 crc kubenswrapper[4731]: I1129 08:12:29.531925 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8n8mk\" (UniqueName: \"kubernetes.io/projected/ab03601a-89d2-4595-9d4d-7c09d7522a8f-kube-api-access-8n8mk\") on node \"crc\" DevicePath \"\"" Nov 29 08:12:29 crc kubenswrapper[4731]: I1129 08:12:29.531977 4731 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab03601a-89d2-4595-9d4d-7c09d7522a8f-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 08:12:29 crc kubenswrapper[4731]: I1129 08:12:29.565887 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab03601a-89d2-4595-9d4d-7c09d7522a8f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ab03601a-89d2-4595-9d4d-7c09d7522a8f" (UID: "ab03601a-89d2-4595-9d4d-7c09d7522a8f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:12:29 crc kubenswrapper[4731]: I1129 08:12:29.613199 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_739c0608-5471-42a6-b062-4355cd1894a0/account-reaper/0.log" Nov 29 08:12:29 crc kubenswrapper[4731]: I1129 08:12:29.634901 4731 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab03601a-89d2-4595-9d4d-7c09d7522a8f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 08:12:29 crc kubenswrapper[4731]: I1129 08:12:29.717783 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_739c0608-5471-42a6-b062-4355cd1894a0/account-replicator/0.log" Nov 29 08:12:29 crc kubenswrapper[4731]: I1129 08:12:29.728484 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7nksg"] Nov 29 08:12:29 crc kubenswrapper[4731]: I1129 08:12:29.758461 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7nksg"] Nov 29 08:12:29 crc kubenswrapper[4731]: I1129 08:12:29.773130 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_739c0608-5471-42a6-b062-4355cd1894a0/account-server/0.log" Nov 29 08:12:29 crc kubenswrapper[4731]: I1129 08:12:29.790862 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_739c0608-5471-42a6-b062-4355cd1894a0/container-auditor/0.log" Nov 29 08:12:29 crc kubenswrapper[4731]: I1129 08:12:29.837933 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab03601a-89d2-4595-9d4d-7c09d7522a8f" path="/var/lib/kubelet/pods/ab03601a-89d2-4595-9d4d-7c09d7522a8f/volumes" Nov 29 08:12:29 crc kubenswrapper[4731]: I1129 08:12:29.953024 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_739c0608-5471-42a6-b062-4355cd1894a0/container-replicator/0.log" Nov 29 
08:12:29 crc kubenswrapper[4731]: I1129 08:12:29.991691 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_739c0608-5471-42a6-b062-4355cd1894a0/container-updater/0.log" Nov 29 08:12:30 crc kubenswrapper[4731]: I1129 08:12:30.022967 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_739c0608-5471-42a6-b062-4355cd1894a0/container-server/0.log" Nov 29 08:12:30 crc kubenswrapper[4731]: I1129 08:12:30.126968 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_739c0608-5471-42a6-b062-4355cd1894a0/object-auditor/0.log" Nov 29 08:12:30 crc kubenswrapper[4731]: I1129 08:12:30.193059 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_739c0608-5471-42a6-b062-4355cd1894a0/object-expirer/0.log" Nov 29 08:12:30 crc kubenswrapper[4731]: I1129 08:12:30.246456 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_739c0608-5471-42a6-b062-4355cd1894a0/object-replicator/0.log" Nov 29 08:12:30 crc kubenswrapper[4731]: I1129 08:12:30.329588 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_739c0608-5471-42a6-b062-4355cd1894a0/object-server/0.log" Nov 29 08:12:30 crc kubenswrapper[4731]: I1129 08:12:30.381250 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_739c0608-5471-42a6-b062-4355cd1894a0/object-updater/0.log" Nov 29 08:12:30 crc kubenswrapper[4731]: I1129 08:12:30.462659 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_739c0608-5471-42a6-b062-4355cd1894a0/rsync/0.log" Nov 29 08:12:30 crc kubenswrapper[4731]: I1129 08:12:30.490645 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_739c0608-5471-42a6-b062-4355cd1894a0/swift-recon-cron/0.log" Nov 29 08:12:30 crc kubenswrapper[4731]: I1129 08:12:30.669101 4731 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-6kgt2_7e587ad4-40e6-4719-a23b-ff5035f40152/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Nov 29 08:12:31 crc kubenswrapper[4731]: I1129 08:12:31.543953 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_a75de2e0-7593-49ac-bcf7-41705892c633/tempest-tests-tempest-tests-runner/0.log" Nov 29 08:12:31 crc kubenswrapper[4731]: I1129 08:12:31.583017 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_ffb1882c-64cb-477b-ba35-8159dc93cd30/test-operator-logs-container/0.log" Nov 29 08:12:31 crc kubenswrapper[4731]: I1129 08:12:31.774419 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-mztj7_75231e03-f059-43f8-8533-94035f23806f/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 29 08:12:40 crc kubenswrapper[4731]: I1129 08:12:40.087335 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_d220afda-dc32-49e3-9cae-b9270f077167/memcached/0.log" Nov 29 08:12:58 crc kubenswrapper[4731]: I1129 08:12:58.889380 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-cdjtg_9e9c951b-fd4b-408d-a01c-0288201c0227/kube-rbac-proxy/0.log" Nov 29 08:12:59 crc kubenswrapper[4731]: I1129 08:12:59.004897 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-cdjtg_9e9c951b-fd4b-408d-a01c-0288201c0227/manager/0.log" Nov 29 08:12:59 crc kubenswrapper[4731]: I1129 08:12:59.009789 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_bf526de097f3fdef33663a774437efdeb5911beea874985264a060f916httzp_24b6e41d-3fa1-413b-b3f6-8897188e619c/util/0.log" Nov 29 08:12:59 crc kubenswrapper[4731]: I1129 
08:12:59.163967 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_bf526de097f3fdef33663a774437efdeb5911beea874985264a060f916httzp_24b6e41d-3fa1-413b-b3f6-8897188e619c/util/0.log" Nov 29 08:12:59 crc kubenswrapper[4731]: I1129 08:12:59.179672 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_bf526de097f3fdef33663a774437efdeb5911beea874985264a060f916httzp_24b6e41d-3fa1-413b-b3f6-8897188e619c/pull/0.log" Nov 29 08:12:59 crc kubenswrapper[4731]: I1129 08:12:59.185942 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_bf526de097f3fdef33663a774437efdeb5911beea874985264a060f916httzp_24b6e41d-3fa1-413b-b3f6-8897188e619c/pull/0.log" Nov 29 08:12:59 crc kubenswrapper[4731]: I1129 08:12:59.341556 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_bf526de097f3fdef33663a774437efdeb5911beea874985264a060f916httzp_24b6e41d-3fa1-413b-b3f6-8897188e619c/util/0.log" Nov 29 08:12:59 crc kubenswrapper[4731]: I1129 08:12:59.354372 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_bf526de097f3fdef33663a774437efdeb5911beea874985264a060f916httzp_24b6e41d-3fa1-413b-b3f6-8897188e619c/pull/0.log" Nov 29 08:12:59 crc kubenswrapper[4731]: I1129 08:12:59.355338 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_bf526de097f3fdef33663a774437efdeb5911beea874985264a060f916httzp_24b6e41d-3fa1-413b-b3f6-8897188e619c/extract/0.log" Nov 29 08:12:59 crc kubenswrapper[4731]: I1129 08:12:59.526300 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-859b6ccc6-97dng_9292e72f-2b6c-4a88-9a75-e8f55cda383a/kube-rbac-proxy/0.log" Nov 29 08:12:59 crc kubenswrapper[4731]: I1129 08:12:59.575767 4731 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_designate-operator-controller-manager-78b4bc895b-vftx4_9fef0c9a-6dd7-4034-99c0-68409ad7d697/kube-rbac-proxy/0.log" Nov 29 08:12:59 crc kubenswrapper[4731]: I1129 08:12:59.576528 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-859b6ccc6-97dng_9292e72f-2b6c-4a88-9a75-e8f55cda383a/manager/0.log" Nov 29 08:12:59 crc kubenswrapper[4731]: I1129 08:12:59.730479 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-78b4bc895b-vftx4_9fef0c9a-6dd7-4034-99c0-68409ad7d697/manager/0.log" Nov 29 08:12:59 crc kubenswrapper[4731]: I1129 08:12:59.766624 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-668d9c48b9-mc6kc_3d80a1f9-6d6a-41e1-acee-640ffc57a440/kube-rbac-proxy/0.log" Nov 29 08:12:59 crc kubenswrapper[4731]: I1129 08:12:59.860344 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-668d9c48b9-mc6kc_3d80a1f9-6d6a-41e1-acee-640ffc57a440/manager/0.log" Nov 29 08:12:59 crc kubenswrapper[4731]: I1129 08:12:59.999452 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-49fl6_a9cc0c44-f184-47aa-9f26-78375628a187/manager/0.log" Nov 29 08:13:00 crc kubenswrapper[4731]: I1129 08:13:00.026602 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-49fl6_a9cc0c44-f184-47aa-9f26-78375628a187/kube-rbac-proxy/0.log" Nov 29 08:13:00 crc kubenswrapper[4731]: I1129 08:13:00.149780 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-kmj72_a6c4aff1-120b-4136-851e-469ebfc6a9ea/kube-rbac-proxy/0.log" Nov 29 08:13:00 crc kubenswrapper[4731]: I1129 08:13:00.185934 
4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-kmj72_a6c4aff1-120b-4136-851e-469ebfc6a9ea/manager/0.log" Nov 29 08:13:00 crc kubenswrapper[4731]: I1129 08:13:00.233040 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-xs92w_77a02080-6b69-441d-a6a3-ac95c4c697fe/kube-rbac-proxy/0.log" Nov 29 08:13:00 crc kubenswrapper[4731]: I1129 08:13:00.425843 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6c548fd776-dlncb_a35b9e52-221d-4c25-82d9-46fdd8d6e5ea/kube-rbac-proxy/0.log" Nov 29 08:13:00 crc kubenswrapper[4731]: I1129 08:13:00.483412 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-xs92w_77a02080-6b69-441d-a6a3-ac95c4c697fe/manager/0.log" Nov 29 08:13:00 crc kubenswrapper[4731]: I1129 08:13:00.487684 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6c548fd776-dlncb_a35b9e52-221d-4c25-82d9-46fdd8d6e5ea/manager/0.log" Nov 29 08:13:00 crc kubenswrapper[4731]: I1129 08:13:00.655995 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-546d4bdf48-k89xw_eff57485-877d-4e3e-95a2-ffc9c5ac4f0b/kube-rbac-proxy/0.log" Nov 29 08:13:00 crc kubenswrapper[4731]: I1129 08:13:00.716012 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-546d4bdf48-k89xw_eff57485-877d-4e3e-95a2-ffc9c5ac4f0b/manager/0.log" Nov 29 08:13:00 crc kubenswrapper[4731]: I1129 08:13:00.845330 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-6546668bfd-gc2xd_8b165a75-263b-42e0-9521-85bf1a15dcbf/kube-rbac-proxy/0.log" Nov 29 08:13:00 crc 
kubenswrapper[4731]: I1129 08:13:00.847937 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-6546668bfd-gc2xd_8b165a75-263b-42e0-9521-85bf1a15dcbf/manager/0.log" Nov 29 08:13:00 crc kubenswrapper[4731]: I1129 08:13:00.935856 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-56bbcc9d85-hdhqj_7664fa66-a9e5-4617-88c4-d4bdeb5f2ea9/kube-rbac-proxy/0.log" Nov 29 08:13:01 crc kubenswrapper[4731]: I1129 08:13:01.040198 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-56bbcc9d85-hdhqj_7664fa66-a9e5-4617-88c4-d4bdeb5f2ea9/manager/0.log" Nov 29 08:13:01 crc kubenswrapper[4731]: I1129 08:13:01.141137 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-kc7tq_8ed7bfa3-ce11-490d-80f4-acd9ca51f698/kube-rbac-proxy/0.log" Nov 29 08:13:01 crc kubenswrapper[4731]: I1129 08:13:01.207655 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-kc7tq_8ed7bfa3-ce11-490d-80f4-acd9ca51f698/manager/0.log" Nov 29 08:13:01 crc kubenswrapper[4731]: I1129 08:13:01.335320 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-xkgrl_573641b3-8529-4a47-a0f6-379f2838dc27/kube-rbac-proxy/0.log" Nov 29 08:13:01 crc kubenswrapper[4731]: I1129 08:13:01.425787 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-xkgrl_573641b3-8529-4a47-a0f6-379f2838dc27/manager/0.log" Nov 29 08:13:01 crc kubenswrapper[4731]: I1129 08:13:01.502681 4731 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-6mxrn_99aed1ca-e7d9-409c-91fa-439e52342da8/manager/0.log" Nov 29 08:13:01 crc kubenswrapper[4731]: I1129 08:13:01.531346 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-6mxrn_99aed1ca-e7d9-409c-91fa-439e52342da8/kube-rbac-proxy/0.log" Nov 29 08:13:01 crc kubenswrapper[4731]: I1129 08:13:01.639265 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-64bc77cfd48hnnl_530e0034-afa1-42a5-ae59-1f8eeb34aef0/kube-rbac-proxy/0.log" Nov 29 08:13:01 crc kubenswrapper[4731]: I1129 08:13:01.683961 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-64bc77cfd48hnnl_530e0034-afa1-42a5-ae59-1f8eeb34aef0/manager/0.log" Nov 29 08:13:02 crc kubenswrapper[4731]: I1129 08:13:02.056681 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-7d6594489c-kzcpd_ba39c5c8-559c-4ebb-a4bd-6dc55af61842/operator/0.log" Nov 29 08:13:02 crc kubenswrapper[4731]: I1129 08:13:02.080986 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-djhck_48e66f83-bb57-46d9-89a6-cba0ad5e5fc4/registry-server/0.log" Nov 29 08:13:02 crc kubenswrapper[4731]: I1129 08:13:02.308546 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-jmtwc_a8ecb76b-3826-4e47-920c-e0d9e3c18e38/kube-rbac-proxy/0.log" Nov 29 08:13:02 crc kubenswrapper[4731]: I1129 08:13:02.385354 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-jmtwc_a8ecb76b-3826-4e47-920c-e0d9e3c18e38/manager/0.log" Nov 29 08:13:02 crc kubenswrapper[4731]: I1129 08:13:02.596823 
4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-zhx77_f7c082ea-a878-4069-9c48-96d4210f909a/kube-rbac-proxy/0.log" Nov 29 08:13:02 crc kubenswrapper[4731]: I1129 08:13:02.637540 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-zhx77_f7c082ea-a878-4069-9c48-96d4210f909a/manager/0.log" Nov 29 08:13:02 crc kubenswrapper[4731]: I1129 08:13:02.751581 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-9wkq4_c448f643-f2f4-403d-b235-24ac74755cdf/operator/0.log" Nov 29 08:13:02 crc kubenswrapper[4731]: I1129 08:13:02.863241 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5f8c65bbfc-zvvpp_9c9a7893-7770-49ae-8a0f-44168941a55b/kube-rbac-proxy/0.log" Nov 29 08:13:03 crc kubenswrapper[4731]: I1129 08:13:03.006995 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5f8c65bbfc-zvvpp_9c9a7893-7770-49ae-8a0f-44168941a55b/manager/0.log" Nov 29 08:13:03 crc kubenswrapper[4731]: I1129 08:13:03.035097 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-76c96f5dc5-hsjk8_fbd333a8-95e7-47dc-8b2c-9ea2154d6fb9/manager/0.log" Nov 29 08:13:03 crc kubenswrapper[4731]: I1129 08:13:03.142402 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-c98454947-cq6kc_cf83d1d1-1d33-4905-ae31-038a7afbd230/kube-rbac-proxy/0.log" Nov 29 08:13:03 crc kubenswrapper[4731]: I1129 08:13:03.175082 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-c98454947-cq6kc_cf83d1d1-1d33-4905-ae31-038a7afbd230/manager/0.log" Nov 29 08:13:03 crc 
kubenswrapper[4731]: I1129 08:13:03.216684 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-fnkt5_5e2ef3fa-22be-4ac5-9cce-09227be5538b/kube-rbac-proxy/0.log" Nov 29 08:13:03 crc kubenswrapper[4731]: I1129 08:13:03.268324 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-fnkt5_5e2ef3fa-22be-4ac5-9cce-09227be5538b/manager/0.log" Nov 29 08:13:03 crc kubenswrapper[4731]: I1129 08:13:03.352978 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-769dc69bc-tbxj8_9d6f5aa5-06c2-4196-a217-68aa690b6e7f/kube-rbac-proxy/0.log" Nov 29 08:13:03 crc kubenswrapper[4731]: I1129 08:13:03.413436 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-769dc69bc-tbxj8_9d6f5aa5-06c2-4196-a217-68aa690b6e7f/manager/0.log" Nov 29 08:13:22 crc kubenswrapper[4731]: I1129 08:13:22.831930 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-dm9bz_8d48db02-9081-4e36-a6db-caa659b1eeb9/control-plane-machine-set-operator/0.log" Nov 29 08:13:23 crc kubenswrapper[4731]: I1129 08:13:23.021326 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-xstx4_ec651e57-2be1-4076-93f5-bcfa036b4624/kube-rbac-proxy/0.log" Nov 29 08:13:23 crc kubenswrapper[4731]: I1129 08:13:23.030845 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-xstx4_ec651e57-2be1-4076-93f5-bcfa036b4624/machine-api-operator/0.log" Nov 29 08:13:33 crc kubenswrapper[4731]: I1129 08:13:33.002558 4731 patch_prober.go:28] interesting pod/machine-config-daemon-rscr8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 08:13:33 crc kubenswrapper[4731]: I1129 08:13:33.003266 4731 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 08:13:35 crc kubenswrapper[4731]: I1129 08:13:35.587038 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-jtr6t_46df25da-69e6-4ab6-b887-62892deeacfb/cert-manager-controller/0.log" Nov 29 08:13:35 crc kubenswrapper[4731]: I1129 08:13:35.769196 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-xrhpg_9454756e-d310-48a9-9617-2469139ec742/cert-manager-cainjector/0.log" Nov 29 08:13:35 crc kubenswrapper[4731]: I1129 08:13:35.818508 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-bkf2j_12a1a3de-47c9-481d-940e-02d320ee23f9/cert-manager-webhook/0.log" Nov 29 08:13:49 crc kubenswrapper[4731]: I1129 08:13:49.214239 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7fbb5f6569-zr4mp_d634c867-9935-4736-84e5-7abcad360e79/nmstate-console-plugin/0.log" Nov 29 08:13:49 crc kubenswrapper[4731]: I1129 08:13:49.627159 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-qc6n5_b3978082-731c-497f-b541-8895cafd521b/nmstate-handler/0.log" Nov 29 08:13:49 crc kubenswrapper[4731]: I1129 08:13:49.643358 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-7rj8z_131ea2bb-55cd-4f14-aa33-7600dc569c3f/nmstate-metrics/0.log" Nov 29 08:13:49 crc 
kubenswrapper[4731]: I1129 08:13:49.661172 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-7rj8z_131ea2bb-55cd-4f14-aa33-7600dc569c3f/kube-rbac-proxy/0.log" Nov 29 08:13:49 crc kubenswrapper[4731]: I1129 08:13:49.830081 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-5b5b58f5c8-m2k54_b2960a18-bc73-4625-8853-b433d22cc0ee/nmstate-operator/0.log" Nov 29 08:13:49 crc kubenswrapper[4731]: I1129 08:13:49.902008 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f6d4c5ccb-nlx7g_5f1f2d59-f67c-47aa-b66a-84b647b9f52a/nmstate-webhook/0.log" Nov 29 08:14:03 crc kubenswrapper[4731]: I1129 08:14:03.002514 4731 patch_prober.go:28] interesting pod/machine-config-daemon-rscr8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 08:14:03 crc kubenswrapper[4731]: I1129 08:14:03.003111 4731 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 08:14:06 crc kubenswrapper[4731]: I1129 08:14:06.157966 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-6cjn7_ef219519-fac4-4496-a040-519702060736/kube-rbac-proxy/0.log" Nov 29 08:14:06 crc kubenswrapper[4731]: I1129 08:14:06.344548 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-6cjn7_ef219519-fac4-4496-a040-519702060736/controller/0.log" Nov 29 08:14:06 crc kubenswrapper[4731]: I1129 08:14:06.387099 4731 log.go:25] "Finished parsing log 
file" path="/var/log/pods/metallb-system_frr-k8s-bspvf_c55c52fa-65fc-45fd-b266-10af88f3cead/cp-frr-files/0.log" Nov 29 08:14:06 crc kubenswrapper[4731]: I1129 08:14:06.804007 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bspvf_c55c52fa-65fc-45fd-b266-10af88f3cead/cp-frr-files/0.log" Nov 29 08:14:06 crc kubenswrapper[4731]: I1129 08:14:06.815120 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bspvf_c55c52fa-65fc-45fd-b266-10af88f3cead/cp-reloader/0.log" Nov 29 08:14:06 crc kubenswrapper[4731]: I1129 08:14:06.828944 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bspvf_c55c52fa-65fc-45fd-b266-10af88f3cead/cp-reloader/0.log" Nov 29 08:14:06 crc kubenswrapper[4731]: I1129 08:14:06.829703 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bspvf_c55c52fa-65fc-45fd-b266-10af88f3cead/cp-metrics/0.log" Nov 29 08:14:07 crc kubenswrapper[4731]: I1129 08:14:07.000823 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bspvf_c55c52fa-65fc-45fd-b266-10af88f3cead/cp-reloader/0.log" Nov 29 08:14:07 crc kubenswrapper[4731]: I1129 08:14:07.043648 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bspvf_c55c52fa-65fc-45fd-b266-10af88f3cead/cp-metrics/0.log" Nov 29 08:14:07 crc kubenswrapper[4731]: I1129 08:14:07.048493 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bspvf_c55c52fa-65fc-45fd-b266-10af88f3cead/cp-frr-files/0.log" Nov 29 08:14:07 crc kubenswrapper[4731]: I1129 08:14:07.054788 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bspvf_c55c52fa-65fc-45fd-b266-10af88f3cead/cp-metrics/0.log" Nov 29 08:14:07 crc kubenswrapper[4731]: I1129 08:14:07.232864 4731 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-bspvf_c55c52fa-65fc-45fd-b266-10af88f3cead/controller/0.log" Nov 29 08:14:07 crc kubenswrapper[4731]: I1129 08:14:07.238848 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bspvf_c55c52fa-65fc-45fd-b266-10af88f3cead/cp-frr-files/0.log" Nov 29 08:14:07 crc kubenswrapper[4731]: I1129 08:14:07.264925 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bspvf_c55c52fa-65fc-45fd-b266-10af88f3cead/cp-reloader/0.log" Nov 29 08:14:07 crc kubenswrapper[4731]: I1129 08:14:07.286853 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bspvf_c55c52fa-65fc-45fd-b266-10af88f3cead/cp-metrics/0.log" Nov 29 08:14:07 crc kubenswrapper[4731]: I1129 08:14:07.415680 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bspvf_c55c52fa-65fc-45fd-b266-10af88f3cead/frr-metrics/0.log" Nov 29 08:14:07 crc kubenswrapper[4731]: I1129 08:14:07.463250 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bspvf_c55c52fa-65fc-45fd-b266-10af88f3cead/kube-rbac-proxy/0.log" Nov 29 08:14:07 crc kubenswrapper[4731]: I1129 08:14:07.536451 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bspvf_c55c52fa-65fc-45fd-b266-10af88f3cead/kube-rbac-proxy-frr/0.log" Nov 29 08:14:07 crc kubenswrapper[4731]: I1129 08:14:07.661236 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bspvf_c55c52fa-65fc-45fd-b266-10af88f3cead/reloader/0.log" Nov 29 08:14:07 crc kubenswrapper[4731]: I1129 08:14:07.775967 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7fcb986d4-5hdrf_487416b3-29bc-4302-b40d-faf6c56a568f/frr-k8s-webhook-server/0.log" Nov 29 08:14:07 crc kubenswrapper[4731]: I1129 08:14:07.937970 4731 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_metallb-operator-controller-manager-5cfcff49c6-4hw8h_52af2912-39d1-447f-b652-bf5afab67ce5/manager/0.log" Nov 29 08:14:08 crc kubenswrapper[4731]: I1129 08:14:08.108679 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-76559b7b9c-66rgq_71540c78-2cb5-4df0-b9be-a0224d7211f1/webhook-server/0.log" Nov 29 08:14:08 crc kubenswrapper[4731]: I1129 08:14:08.241627 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-wnwx5_e0c889d5-42f1-4ac2-9a90-91b8e2414937/kube-rbac-proxy/0.log" Nov 29 08:14:08 crc kubenswrapper[4731]: I1129 08:14:08.706398 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-wnwx5_e0c889d5-42f1-4ac2-9a90-91b8e2414937/speaker/0.log" Nov 29 08:14:08 crc kubenswrapper[4731]: I1129 08:14:08.861772 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bspvf_c55c52fa-65fc-45fd-b266-10af88f3cead/frr/0.log" Nov 29 08:14:22 crc kubenswrapper[4731]: I1129 08:14:22.051315 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fwnq7b_e28aeb94-691b-4374-8a64-c8ea4831a139/util/0.log" Nov 29 08:14:22 crc kubenswrapper[4731]: I1129 08:14:22.252196 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fwnq7b_e28aeb94-691b-4374-8a64-c8ea4831a139/util/0.log" Nov 29 08:14:22 crc kubenswrapper[4731]: I1129 08:14:22.255803 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fwnq7b_e28aeb94-691b-4374-8a64-c8ea4831a139/pull/0.log" Nov 29 08:14:22 crc kubenswrapper[4731]: I1129 08:14:22.302947 4731 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fwnq7b_e28aeb94-691b-4374-8a64-c8ea4831a139/pull/0.log" Nov 29 08:14:22 crc kubenswrapper[4731]: I1129 08:14:22.436218 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fwnq7b_e28aeb94-691b-4374-8a64-c8ea4831a139/util/0.log" Nov 29 08:14:22 crc kubenswrapper[4731]: I1129 08:14:22.449712 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fwnq7b_e28aeb94-691b-4374-8a64-c8ea4831a139/extract/0.log" Nov 29 08:14:22 crc kubenswrapper[4731]: I1129 08:14:22.456464 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fwnq7b_e28aeb94-691b-4374-8a64-c8ea4831a139/pull/0.log" Nov 29 08:14:22 crc kubenswrapper[4731]: I1129 08:14:22.631419 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wlvnk_b430eace-40ed-4d3d-ae05-481052d89eb8/util/0.log" Nov 29 08:14:22 crc kubenswrapper[4731]: I1129 08:14:22.791758 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wlvnk_b430eace-40ed-4d3d-ae05-481052d89eb8/util/0.log" Nov 29 08:14:22 crc kubenswrapper[4731]: I1129 08:14:22.794061 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wlvnk_b430eace-40ed-4d3d-ae05-481052d89eb8/pull/0.log" Nov 29 08:14:22 crc kubenswrapper[4731]: I1129 08:14:22.819412 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wlvnk_b430eace-40ed-4d3d-ae05-481052d89eb8/pull/0.log" Nov 29 
08:14:23 crc kubenswrapper[4731]: I1129 08:14:23.010956 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wlvnk_b430eace-40ed-4d3d-ae05-481052d89eb8/util/0.log" Nov 29 08:14:23 crc kubenswrapper[4731]: I1129 08:14:23.012628 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wlvnk_b430eace-40ed-4d3d-ae05-481052d89eb8/pull/0.log" Nov 29 08:14:23 crc kubenswrapper[4731]: I1129 08:14:23.069033 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wlvnk_b430eace-40ed-4d3d-ae05-481052d89eb8/extract/0.log" Nov 29 08:14:23 crc kubenswrapper[4731]: I1129 08:14:23.197078 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fdtsp_e70753c3-39f0-4d12-aff8-c25213451bb5/extract-utilities/0.log" Nov 29 08:14:23 crc kubenswrapper[4731]: I1129 08:14:23.765805 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fdtsp_e70753c3-39f0-4d12-aff8-c25213451bb5/extract-utilities/0.log" Nov 29 08:14:23 crc kubenswrapper[4731]: I1129 08:14:23.780327 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fdtsp_e70753c3-39f0-4d12-aff8-c25213451bb5/extract-content/0.log" Nov 29 08:14:23 crc kubenswrapper[4731]: I1129 08:14:23.821017 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fdtsp_e70753c3-39f0-4d12-aff8-c25213451bb5/extract-content/0.log" Nov 29 08:14:23 crc kubenswrapper[4731]: I1129 08:14:23.969943 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fdtsp_e70753c3-39f0-4d12-aff8-c25213451bb5/extract-content/0.log" Nov 29 08:14:24 crc 
kubenswrapper[4731]: I1129 08:14:24.015615 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fdtsp_e70753c3-39f0-4d12-aff8-c25213451bb5/extract-utilities/0.log" Nov 29 08:14:24 crc kubenswrapper[4731]: I1129 08:14:24.250876 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-f2cjx_19a31044-c719-4f44-8b0b-9b5a680b695d/extract-utilities/0.log" Nov 29 08:14:24 crc kubenswrapper[4731]: I1129 08:14:24.424951 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-f2cjx_19a31044-c719-4f44-8b0b-9b5a680b695d/extract-content/0.log" Nov 29 08:14:24 crc kubenswrapper[4731]: I1129 08:14:24.477685 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-f2cjx_19a31044-c719-4f44-8b0b-9b5a680b695d/extract-utilities/0.log" Nov 29 08:14:24 crc kubenswrapper[4731]: I1129 08:14:24.541843 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-f2cjx_19a31044-c719-4f44-8b0b-9b5a680b695d/extract-content/0.log" Nov 29 08:14:24 crc kubenswrapper[4731]: I1129 08:14:24.573440 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fdtsp_e70753c3-39f0-4d12-aff8-c25213451bb5/registry-server/0.log" Nov 29 08:14:24 crc kubenswrapper[4731]: I1129 08:14:24.733395 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-f2cjx_19a31044-c719-4f44-8b0b-9b5a680b695d/extract-utilities/0.log" Nov 29 08:14:24 crc kubenswrapper[4731]: I1129 08:14:24.840811 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-f2cjx_19a31044-c719-4f44-8b0b-9b5a680b695d/extract-content/0.log" Nov 29 08:14:24 crc kubenswrapper[4731]: I1129 08:14:24.947487 4731 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-f2cjx_19a31044-c719-4f44-8b0b-9b5a680b695d/registry-server/0.log" Nov 29 08:14:25 crc kubenswrapper[4731]: I1129 08:14:25.350965 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-82hnv_4325e7fb-0543-4969-8ebb-c2dcf11cc24b/marketplace-operator/0.log" Nov 29 08:14:25 crc kubenswrapper[4731]: I1129 08:14:25.384657 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qj69f_8eb37715-795e-4a3a-89c3-caa27a8ad0fc/extract-utilities/0.log" Nov 29 08:14:25 crc kubenswrapper[4731]: I1129 08:14:25.594863 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qj69f_8eb37715-795e-4a3a-89c3-caa27a8ad0fc/extract-content/0.log" Nov 29 08:14:25 crc kubenswrapper[4731]: I1129 08:14:25.594961 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qj69f_8eb37715-795e-4a3a-89c3-caa27a8ad0fc/extract-utilities/0.log" Nov 29 08:14:25 crc kubenswrapper[4731]: I1129 08:14:25.600637 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qj69f_8eb37715-795e-4a3a-89c3-caa27a8ad0fc/extract-content/0.log" Nov 29 08:14:25 crc kubenswrapper[4731]: I1129 08:14:25.817651 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qj69f_8eb37715-795e-4a3a-89c3-caa27a8ad0fc/extract-content/0.log" Nov 29 08:14:25 crc kubenswrapper[4731]: I1129 08:14:25.818947 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qj69f_8eb37715-795e-4a3a-89c3-caa27a8ad0fc/extract-utilities/0.log" Nov 29 08:14:25 crc kubenswrapper[4731]: I1129 08:14:25.836809 4731 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-t7tn5_30728f36-1a1f-4d10-9a28-c50eca791478/extract-utilities/0.log" Nov 29 08:14:25 crc kubenswrapper[4731]: I1129 08:14:25.997741 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qj69f_8eb37715-795e-4a3a-89c3-caa27a8ad0fc/registry-server/0.log" Nov 29 08:14:26 crc kubenswrapper[4731]: I1129 08:14:26.110146 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-t7tn5_30728f36-1a1f-4d10-9a28-c50eca791478/extract-content/0.log" Nov 29 08:14:26 crc kubenswrapper[4731]: I1129 08:14:26.115624 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-t7tn5_30728f36-1a1f-4d10-9a28-c50eca791478/extract-content/0.log" Nov 29 08:14:26 crc kubenswrapper[4731]: I1129 08:14:26.130536 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-t7tn5_30728f36-1a1f-4d10-9a28-c50eca791478/extract-utilities/0.log" Nov 29 08:14:26 crc kubenswrapper[4731]: I1129 08:14:26.306049 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-t7tn5_30728f36-1a1f-4d10-9a28-c50eca791478/extract-utilities/0.log" Nov 29 08:14:26 crc kubenswrapper[4731]: I1129 08:14:26.322651 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-t7tn5_30728f36-1a1f-4d10-9a28-c50eca791478/extract-content/0.log" Nov 29 08:14:26 crc kubenswrapper[4731]: I1129 08:14:26.847557 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-t7tn5_30728f36-1a1f-4d10-9a28-c50eca791478/registry-server/0.log" Nov 29 08:14:33 crc kubenswrapper[4731]: I1129 08:14:33.002666 4731 patch_prober.go:28] interesting pod/machine-config-daemon-rscr8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 08:14:33 crc kubenswrapper[4731]: I1129 08:14:33.005758 4731 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 08:14:33 crc kubenswrapper[4731]: I1129 08:14:33.005829 4731 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" Nov 29 08:14:33 crc kubenswrapper[4731]: I1129 08:14:33.006906 4731 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"607d6adc71fd03ad8796c2f2c18f0bffcc7e369862c2d387eb5552ab82f9242f"} pod="openshift-machine-config-operator/machine-config-daemon-rscr8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 29 08:14:33 crc kubenswrapper[4731]: I1129 08:14:33.006983 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" containerName="machine-config-daemon" containerID="cri-o://607d6adc71fd03ad8796c2f2c18f0bffcc7e369862c2d387eb5552ab82f9242f" gracePeriod=600 Nov 29 08:14:33 crc kubenswrapper[4731]: I1129 08:14:33.623398 4731 generic.go:334] "Generic (PLEG): container finished" podID="2302dbb7-38db-4752-a5d0-2d055da3aec3" containerID="607d6adc71fd03ad8796c2f2c18f0bffcc7e369862c2d387eb5552ab82f9242f" exitCode=0 Nov 29 08:14:33 crc kubenswrapper[4731]: I1129 08:14:33.623798 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" 
event={"ID":"2302dbb7-38db-4752-a5d0-2d055da3aec3","Type":"ContainerDied","Data":"607d6adc71fd03ad8796c2f2c18f0bffcc7e369862c2d387eb5552ab82f9242f"} Nov 29 08:14:33 crc kubenswrapper[4731]: I1129 08:14:33.623834 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" event={"ID":"2302dbb7-38db-4752-a5d0-2d055da3aec3","Type":"ContainerStarted","Data":"f5754b15ed83166749f0830c20827dcfa637c0959aa7031c331044ed01c74bed"} Nov 29 08:14:33 crc kubenswrapper[4731]: I1129 08:14:33.623853 4731 scope.go:117] "RemoveContainer" containerID="f90654bd884e42905839b8345b957763d059421b4be9cf70d23e4e405dad6e51" Nov 29 08:15:00 crc kubenswrapper[4731]: I1129 08:15:00.186795 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406735-hdv85"] Nov 29 08:15:00 crc kubenswrapper[4731]: E1129 08:15:00.187738 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab03601a-89d2-4595-9d4d-7c09d7522a8f" containerName="extract-content" Nov 29 08:15:00 crc kubenswrapper[4731]: I1129 08:15:00.187754 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab03601a-89d2-4595-9d4d-7c09d7522a8f" containerName="extract-content" Nov 29 08:15:00 crc kubenswrapper[4731]: E1129 08:15:00.187770 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab03601a-89d2-4595-9d4d-7c09d7522a8f" containerName="extract-utilities" Nov 29 08:15:00 crc kubenswrapper[4731]: I1129 08:15:00.187778 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab03601a-89d2-4595-9d4d-7c09d7522a8f" containerName="extract-utilities" Nov 29 08:15:00 crc kubenswrapper[4731]: E1129 08:15:00.187804 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab03601a-89d2-4595-9d4d-7c09d7522a8f" containerName="registry-server" Nov 29 08:15:00 crc kubenswrapper[4731]: I1129 08:15:00.187811 4731 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="ab03601a-89d2-4595-9d4d-7c09d7522a8f" containerName="registry-server" Nov 29 08:15:00 crc kubenswrapper[4731]: I1129 08:15:00.188026 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab03601a-89d2-4595-9d4d-7c09d7522a8f" containerName="registry-server" Nov 29 08:15:00 crc kubenswrapper[4731]: I1129 08:15:00.188791 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406735-hdv85" Nov 29 08:15:00 crc kubenswrapper[4731]: I1129 08:15:00.195194 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 29 08:15:00 crc kubenswrapper[4731]: I1129 08:15:00.195209 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 29 08:15:00 crc kubenswrapper[4731]: I1129 08:15:00.197582 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406735-hdv85"] Nov 29 08:15:00 crc kubenswrapper[4731]: I1129 08:15:00.235190 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e98308d5-fcb6-43db-976b-aa670a31fab3-config-volume\") pod \"collect-profiles-29406735-hdv85\" (UID: \"e98308d5-fcb6-43db-976b-aa670a31fab3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406735-hdv85" Nov 29 08:15:00 crc kubenswrapper[4731]: I1129 08:15:00.235292 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e98308d5-fcb6-43db-976b-aa670a31fab3-secret-volume\") pod \"collect-profiles-29406735-hdv85\" (UID: \"e98308d5-fcb6-43db-976b-aa670a31fab3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406735-hdv85" Nov 29 08:15:00 crc 
kubenswrapper[4731]: I1129 08:15:00.235344 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwl4j\" (UniqueName: \"kubernetes.io/projected/e98308d5-fcb6-43db-976b-aa670a31fab3-kube-api-access-dwl4j\") pod \"collect-profiles-29406735-hdv85\" (UID: \"e98308d5-fcb6-43db-976b-aa670a31fab3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406735-hdv85" Nov 29 08:15:00 crc kubenswrapper[4731]: I1129 08:15:00.336601 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e98308d5-fcb6-43db-976b-aa670a31fab3-config-volume\") pod \"collect-profiles-29406735-hdv85\" (UID: \"e98308d5-fcb6-43db-976b-aa670a31fab3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406735-hdv85" Nov 29 08:15:00 crc kubenswrapper[4731]: I1129 08:15:00.336679 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e98308d5-fcb6-43db-976b-aa670a31fab3-secret-volume\") pod \"collect-profiles-29406735-hdv85\" (UID: \"e98308d5-fcb6-43db-976b-aa670a31fab3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406735-hdv85" Nov 29 08:15:00 crc kubenswrapper[4731]: I1129 08:15:00.337658 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e98308d5-fcb6-43db-976b-aa670a31fab3-config-volume\") pod \"collect-profiles-29406735-hdv85\" (UID: \"e98308d5-fcb6-43db-976b-aa670a31fab3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406735-hdv85" Nov 29 08:15:00 crc kubenswrapper[4731]: I1129 08:15:00.337908 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwl4j\" (UniqueName: \"kubernetes.io/projected/e98308d5-fcb6-43db-976b-aa670a31fab3-kube-api-access-dwl4j\") pod \"collect-profiles-29406735-hdv85\" 
(UID: \"e98308d5-fcb6-43db-976b-aa670a31fab3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406735-hdv85" Nov 29 08:15:00 crc kubenswrapper[4731]: I1129 08:15:00.359056 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e98308d5-fcb6-43db-976b-aa670a31fab3-secret-volume\") pod \"collect-profiles-29406735-hdv85\" (UID: \"e98308d5-fcb6-43db-976b-aa670a31fab3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406735-hdv85" Nov 29 08:15:00 crc kubenswrapper[4731]: I1129 08:15:00.364735 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwl4j\" (UniqueName: \"kubernetes.io/projected/e98308d5-fcb6-43db-976b-aa670a31fab3-kube-api-access-dwl4j\") pod \"collect-profiles-29406735-hdv85\" (UID: \"e98308d5-fcb6-43db-976b-aa670a31fab3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406735-hdv85" Nov 29 08:15:00 crc kubenswrapper[4731]: I1129 08:15:00.513974 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406735-hdv85" Nov 29 08:15:01 crc kubenswrapper[4731]: I1129 08:15:01.124797 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406735-hdv85"] Nov 29 08:15:01 crc kubenswrapper[4731]: I1129 08:15:01.881330 4731 generic.go:334] "Generic (PLEG): container finished" podID="e98308d5-fcb6-43db-976b-aa670a31fab3" containerID="7fdb9d10d5972697947d0238e063467c0861ef5433ecd6c34e084c12349bbabb" exitCode=0 Nov 29 08:15:01 crc kubenswrapper[4731]: I1129 08:15:01.881490 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406735-hdv85" event={"ID":"e98308d5-fcb6-43db-976b-aa670a31fab3","Type":"ContainerDied","Data":"7fdb9d10d5972697947d0238e063467c0861ef5433ecd6c34e084c12349bbabb"} Nov 29 08:15:01 crc kubenswrapper[4731]: I1129 08:15:01.881656 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406735-hdv85" event={"ID":"e98308d5-fcb6-43db-976b-aa670a31fab3","Type":"ContainerStarted","Data":"3d573657027c1f8d828092673a77411db09f956d09e74fc1556a76c8b5152475"} Nov 29 08:15:03 crc kubenswrapper[4731]: I1129 08:15:03.268790 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406735-hdv85" Nov 29 08:15:03 crc kubenswrapper[4731]: I1129 08:15:03.420956 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e98308d5-fcb6-43db-976b-aa670a31fab3-config-volume\") pod \"e98308d5-fcb6-43db-976b-aa670a31fab3\" (UID: \"e98308d5-fcb6-43db-976b-aa670a31fab3\") " Nov 29 08:15:03 crc kubenswrapper[4731]: I1129 08:15:03.421443 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e98308d5-fcb6-43db-976b-aa670a31fab3-secret-volume\") pod \"e98308d5-fcb6-43db-976b-aa670a31fab3\" (UID: \"e98308d5-fcb6-43db-976b-aa670a31fab3\") " Nov 29 08:15:03 crc kubenswrapper[4731]: I1129 08:15:03.421639 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dwl4j\" (UniqueName: \"kubernetes.io/projected/e98308d5-fcb6-43db-976b-aa670a31fab3-kube-api-access-dwl4j\") pod \"e98308d5-fcb6-43db-976b-aa670a31fab3\" (UID: \"e98308d5-fcb6-43db-976b-aa670a31fab3\") " Nov 29 08:15:03 crc kubenswrapper[4731]: I1129 08:15:03.421910 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e98308d5-fcb6-43db-976b-aa670a31fab3-config-volume" (OuterVolumeSpecName: "config-volume") pod "e98308d5-fcb6-43db-976b-aa670a31fab3" (UID: "e98308d5-fcb6-43db-976b-aa670a31fab3"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:15:03 crc kubenswrapper[4731]: I1129 08:15:03.422162 4731 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e98308d5-fcb6-43db-976b-aa670a31fab3-config-volume\") on node \"crc\" DevicePath \"\"" Nov 29 08:15:03 crc kubenswrapper[4731]: I1129 08:15:03.427686 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e98308d5-fcb6-43db-976b-aa670a31fab3-kube-api-access-dwl4j" (OuterVolumeSpecName: "kube-api-access-dwl4j") pod "e98308d5-fcb6-43db-976b-aa670a31fab3" (UID: "e98308d5-fcb6-43db-976b-aa670a31fab3"). InnerVolumeSpecName "kube-api-access-dwl4j". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:15:03 crc kubenswrapper[4731]: I1129 08:15:03.435437 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e98308d5-fcb6-43db-976b-aa670a31fab3-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e98308d5-fcb6-43db-976b-aa670a31fab3" (UID: "e98308d5-fcb6-43db-976b-aa670a31fab3"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:15:03 crc kubenswrapper[4731]: I1129 08:15:03.524550 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dwl4j\" (UniqueName: \"kubernetes.io/projected/e98308d5-fcb6-43db-976b-aa670a31fab3-kube-api-access-dwl4j\") on node \"crc\" DevicePath \"\"" Nov 29 08:15:03 crc kubenswrapper[4731]: I1129 08:15:03.524652 4731 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e98308d5-fcb6-43db-976b-aa670a31fab3-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 29 08:15:03 crc kubenswrapper[4731]: I1129 08:15:03.902266 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406735-hdv85" event={"ID":"e98308d5-fcb6-43db-976b-aa670a31fab3","Type":"ContainerDied","Data":"3d573657027c1f8d828092673a77411db09f956d09e74fc1556a76c8b5152475"} Nov 29 08:15:03 crc kubenswrapper[4731]: I1129 08:15:03.902329 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d573657027c1f8d828092673a77411db09f956d09e74fc1556a76c8b5152475" Nov 29 08:15:03 crc kubenswrapper[4731]: I1129 08:15:03.902378 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406735-hdv85" Nov 29 08:15:04 crc kubenswrapper[4731]: I1129 08:15:04.345799 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406690-txp5p"] Nov 29 08:15:04 crc kubenswrapper[4731]: I1129 08:15:04.353675 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406690-txp5p"] Nov 29 08:15:05 crc kubenswrapper[4731]: I1129 08:15:05.819899 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9900c053-e3a1-43bb-a13b-5e92ba495ed8" path="/var/lib/kubelet/pods/9900c053-e3a1-43bb-a13b-5e92ba495ed8/volumes" Nov 29 08:15:22 crc kubenswrapper[4731]: I1129 08:15:22.860275 4731 scope.go:117] "RemoveContainer" containerID="d14eeb7040af8c0985747625e6f09db2b3ba2d0f9fad9a06a771c9442ded7ffe" Nov 29 08:16:10 crc kubenswrapper[4731]: I1129 08:16:10.617015 4731 generic.go:334] "Generic (PLEG): container finished" podID="b93e136b-6969-4394-ba9b-1ad5d10a9bed" containerID="9a8872f125f45702e34f6c1d602eca97d3088ba91ca6ff999b66a76e8bd6de55" exitCode=0 Nov 29 08:16:10 crc kubenswrapper[4731]: I1129 08:16:10.617137 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-k2r5l/must-gather-z6cjf" event={"ID":"b93e136b-6969-4394-ba9b-1ad5d10a9bed","Type":"ContainerDied","Data":"9a8872f125f45702e34f6c1d602eca97d3088ba91ca6ff999b66a76e8bd6de55"} Nov 29 08:16:10 crc kubenswrapper[4731]: I1129 08:16:10.619324 4731 scope.go:117] "RemoveContainer" containerID="9a8872f125f45702e34f6c1d602eca97d3088ba91ca6ff999b66a76e8bd6de55" Nov 29 08:16:11 crc kubenswrapper[4731]: I1129 08:16:11.599660 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-k2r5l_must-gather-z6cjf_b93e136b-6969-4394-ba9b-1ad5d10a9bed/gather/0.log" Nov 29 08:16:19 crc kubenswrapper[4731]: I1129 08:16:19.398601 4731 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-must-gather-k2r5l/must-gather-z6cjf"] Nov 29 08:16:19 crc kubenswrapper[4731]: I1129 08:16:19.399799 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-k2r5l/must-gather-z6cjf" podUID="b93e136b-6969-4394-ba9b-1ad5d10a9bed" containerName="copy" containerID="cri-o://1fc794e5456f86b8cb87d66e0f979fc3dab3ccf623c8e086a3afb89ebd6ea360" gracePeriod=2 Nov 29 08:16:19 crc kubenswrapper[4731]: I1129 08:16:19.407932 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-k2r5l/must-gather-z6cjf"] Nov 29 08:16:19 crc kubenswrapper[4731]: I1129 08:16:19.709999 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-k2r5l_must-gather-z6cjf_b93e136b-6969-4394-ba9b-1ad5d10a9bed/copy/0.log" Nov 29 08:16:19 crc kubenswrapper[4731]: I1129 08:16:19.710344 4731 generic.go:334] "Generic (PLEG): container finished" podID="b93e136b-6969-4394-ba9b-1ad5d10a9bed" containerID="1fc794e5456f86b8cb87d66e0f979fc3dab3ccf623c8e086a3afb89ebd6ea360" exitCode=143 Nov 29 08:16:19 crc kubenswrapper[4731]: I1129 08:16:19.904533 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-k2r5l_must-gather-z6cjf_b93e136b-6969-4394-ba9b-1ad5d10a9bed/copy/0.log" Nov 29 08:16:19 crc kubenswrapper[4731]: I1129 08:16:19.905246 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-k2r5l/must-gather-z6cjf" Nov 29 08:16:19 crc kubenswrapper[4731]: I1129 08:16:19.995186 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b93e136b-6969-4394-ba9b-1ad5d10a9bed-must-gather-output\") pod \"b93e136b-6969-4394-ba9b-1ad5d10a9bed\" (UID: \"b93e136b-6969-4394-ba9b-1ad5d10a9bed\") " Nov 29 08:16:19 crc kubenswrapper[4731]: I1129 08:16:19.995258 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ppzm7\" (UniqueName: \"kubernetes.io/projected/b93e136b-6969-4394-ba9b-1ad5d10a9bed-kube-api-access-ppzm7\") pod \"b93e136b-6969-4394-ba9b-1ad5d10a9bed\" (UID: \"b93e136b-6969-4394-ba9b-1ad5d10a9bed\") " Nov 29 08:16:20 crc kubenswrapper[4731]: I1129 08:16:20.007492 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b93e136b-6969-4394-ba9b-1ad5d10a9bed-kube-api-access-ppzm7" (OuterVolumeSpecName: "kube-api-access-ppzm7") pod "b93e136b-6969-4394-ba9b-1ad5d10a9bed" (UID: "b93e136b-6969-4394-ba9b-1ad5d10a9bed"). InnerVolumeSpecName "kube-api-access-ppzm7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:16:20 crc kubenswrapper[4731]: I1129 08:16:20.097620 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ppzm7\" (UniqueName: \"kubernetes.io/projected/b93e136b-6969-4394-ba9b-1ad5d10a9bed-kube-api-access-ppzm7\") on node \"crc\" DevicePath \"\"" Nov 29 08:16:20 crc kubenswrapper[4731]: I1129 08:16:20.160138 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b93e136b-6969-4394-ba9b-1ad5d10a9bed-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "b93e136b-6969-4394-ba9b-1ad5d10a9bed" (UID: "b93e136b-6969-4394-ba9b-1ad5d10a9bed"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:16:20 crc kubenswrapper[4731]: I1129 08:16:20.199188 4731 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b93e136b-6969-4394-ba9b-1ad5d10a9bed-must-gather-output\") on node \"crc\" DevicePath \"\"" Nov 29 08:16:20 crc kubenswrapper[4731]: I1129 08:16:20.722113 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-k2r5l_must-gather-z6cjf_b93e136b-6969-4394-ba9b-1ad5d10a9bed/copy/0.log" Nov 29 08:16:20 crc kubenswrapper[4731]: I1129 08:16:20.723379 4731 scope.go:117] "RemoveContainer" containerID="1fc794e5456f86b8cb87d66e0f979fc3dab3ccf623c8e086a3afb89ebd6ea360" Nov 29 08:16:20 crc kubenswrapper[4731]: I1129 08:16:20.723459 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-k2r5l/must-gather-z6cjf" Nov 29 08:16:20 crc kubenswrapper[4731]: I1129 08:16:20.748583 4731 scope.go:117] "RemoveContainer" containerID="9a8872f125f45702e34f6c1d602eca97d3088ba91ca6ff999b66a76e8bd6de55" Nov 29 08:16:21 crc kubenswrapper[4731]: I1129 08:16:21.817818 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b93e136b-6969-4394-ba9b-1ad5d10a9bed" path="/var/lib/kubelet/pods/b93e136b-6969-4394-ba9b-1ad5d10a9bed/volumes" Nov 29 08:16:33 crc kubenswrapper[4731]: I1129 08:16:33.003338 4731 patch_prober.go:28] interesting pod/machine-config-daemon-rscr8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 08:16:33 crc kubenswrapper[4731]: I1129 08:16:33.004438 4731 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 08:17:02 crc kubenswrapper[4731]: I1129 08:17:02.293889 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-h99jp"] Nov 29 08:17:02 crc kubenswrapper[4731]: E1129 08:17:02.294751 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e98308d5-fcb6-43db-976b-aa670a31fab3" containerName="collect-profiles" Nov 29 08:17:02 crc kubenswrapper[4731]: I1129 08:17:02.294765 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="e98308d5-fcb6-43db-976b-aa670a31fab3" containerName="collect-profiles" Nov 29 08:17:02 crc kubenswrapper[4731]: E1129 08:17:02.294779 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b93e136b-6969-4394-ba9b-1ad5d10a9bed" containerName="copy" Nov 29 08:17:02 crc kubenswrapper[4731]: I1129 08:17:02.294785 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="b93e136b-6969-4394-ba9b-1ad5d10a9bed" containerName="copy" Nov 29 08:17:02 crc kubenswrapper[4731]: E1129 08:17:02.294807 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b93e136b-6969-4394-ba9b-1ad5d10a9bed" containerName="gather" Nov 29 08:17:02 crc kubenswrapper[4731]: I1129 08:17:02.294814 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="b93e136b-6969-4394-ba9b-1ad5d10a9bed" containerName="gather" Nov 29 08:17:02 crc kubenswrapper[4731]: I1129 08:17:02.294991 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="e98308d5-fcb6-43db-976b-aa670a31fab3" containerName="collect-profiles" Nov 29 08:17:02 crc kubenswrapper[4731]: I1129 08:17:02.295076 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="b93e136b-6969-4394-ba9b-1ad5d10a9bed" containerName="copy" Nov 29 08:17:02 crc kubenswrapper[4731]: I1129 08:17:02.295086 4731 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="b93e136b-6969-4394-ba9b-1ad5d10a9bed" containerName="gather" Nov 29 08:17:02 crc kubenswrapper[4731]: I1129 08:17:02.296646 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-h99jp" Nov 29 08:17:02 crc kubenswrapper[4731]: I1129 08:17:02.313553 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-h99jp"] Nov 29 08:17:02 crc kubenswrapper[4731]: I1129 08:17:02.483826 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46f3a082-ceaf-4422-8285-bc0670f9fa70-catalog-content\") pod \"certified-operators-h99jp\" (UID: \"46f3a082-ceaf-4422-8285-bc0670f9fa70\") " pod="openshift-marketplace/certified-operators-h99jp" Nov 29 08:17:02 crc kubenswrapper[4731]: I1129 08:17:02.484141 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxrdd\" (UniqueName: \"kubernetes.io/projected/46f3a082-ceaf-4422-8285-bc0670f9fa70-kube-api-access-dxrdd\") pod \"certified-operators-h99jp\" (UID: \"46f3a082-ceaf-4422-8285-bc0670f9fa70\") " pod="openshift-marketplace/certified-operators-h99jp" Nov 29 08:17:02 crc kubenswrapper[4731]: I1129 08:17:02.484294 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46f3a082-ceaf-4422-8285-bc0670f9fa70-utilities\") pod \"certified-operators-h99jp\" (UID: \"46f3a082-ceaf-4422-8285-bc0670f9fa70\") " pod="openshift-marketplace/certified-operators-h99jp" Nov 29 08:17:02 crc kubenswrapper[4731]: I1129 08:17:02.586742 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46f3a082-ceaf-4422-8285-bc0670f9fa70-catalog-content\") pod \"certified-operators-h99jp\" (UID: 
\"46f3a082-ceaf-4422-8285-bc0670f9fa70\") " pod="openshift-marketplace/certified-operators-h99jp" Nov 29 08:17:02 crc kubenswrapper[4731]: I1129 08:17:02.586891 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxrdd\" (UniqueName: \"kubernetes.io/projected/46f3a082-ceaf-4422-8285-bc0670f9fa70-kube-api-access-dxrdd\") pod \"certified-operators-h99jp\" (UID: \"46f3a082-ceaf-4422-8285-bc0670f9fa70\") " pod="openshift-marketplace/certified-operators-h99jp" Nov 29 08:17:02 crc kubenswrapper[4731]: I1129 08:17:02.586936 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46f3a082-ceaf-4422-8285-bc0670f9fa70-utilities\") pod \"certified-operators-h99jp\" (UID: \"46f3a082-ceaf-4422-8285-bc0670f9fa70\") " pod="openshift-marketplace/certified-operators-h99jp" Nov 29 08:17:02 crc kubenswrapper[4731]: I1129 08:17:02.587272 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46f3a082-ceaf-4422-8285-bc0670f9fa70-catalog-content\") pod \"certified-operators-h99jp\" (UID: \"46f3a082-ceaf-4422-8285-bc0670f9fa70\") " pod="openshift-marketplace/certified-operators-h99jp" Nov 29 08:17:02 crc kubenswrapper[4731]: I1129 08:17:02.587327 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46f3a082-ceaf-4422-8285-bc0670f9fa70-utilities\") pod \"certified-operators-h99jp\" (UID: \"46f3a082-ceaf-4422-8285-bc0670f9fa70\") " pod="openshift-marketplace/certified-operators-h99jp" Nov 29 08:17:02 crc kubenswrapper[4731]: I1129 08:17:02.612831 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxrdd\" (UniqueName: \"kubernetes.io/projected/46f3a082-ceaf-4422-8285-bc0670f9fa70-kube-api-access-dxrdd\") pod \"certified-operators-h99jp\" (UID: 
\"46f3a082-ceaf-4422-8285-bc0670f9fa70\") " pod="openshift-marketplace/certified-operators-h99jp" Nov 29 08:17:02 crc kubenswrapper[4731]: I1129 08:17:02.621885 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-h99jp" Nov 29 08:17:03 crc kubenswrapper[4731]: I1129 08:17:03.003131 4731 patch_prober.go:28] interesting pod/machine-config-daemon-rscr8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 08:17:03 crc kubenswrapper[4731]: I1129 08:17:03.003490 4731 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 08:17:03 crc kubenswrapper[4731]: I1129 08:17:03.136705 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-h99jp"] Nov 29 08:17:03 crc kubenswrapper[4731]: W1129 08:17:03.144692 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod46f3a082_ceaf_4422_8285_bc0670f9fa70.slice/crio-63bad34326accae28ce51617180ec59766c0fc8e6ffdbb09a4c8380c8ee0ce14 WatchSource:0}: Error finding container 63bad34326accae28ce51617180ec59766c0fc8e6ffdbb09a4c8380c8ee0ce14: Status 404 returned error can't find the container with id 63bad34326accae28ce51617180ec59766c0fc8e6ffdbb09a4c8380c8ee0ce14 Nov 29 08:17:03 crc kubenswrapper[4731]: I1129 08:17:03.176213 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h99jp" 
event={"ID":"46f3a082-ceaf-4422-8285-bc0670f9fa70","Type":"ContainerStarted","Data":"63bad34326accae28ce51617180ec59766c0fc8e6ffdbb09a4c8380c8ee0ce14"} Nov 29 08:17:04 crc kubenswrapper[4731]: I1129 08:17:04.188318 4731 generic.go:334] "Generic (PLEG): container finished" podID="46f3a082-ceaf-4422-8285-bc0670f9fa70" containerID="7eeba413afa414741f757ff3428058bd25d0dc37abca39d9ead43e6f55424743" exitCode=0 Nov 29 08:17:04 crc kubenswrapper[4731]: I1129 08:17:04.188425 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h99jp" event={"ID":"46f3a082-ceaf-4422-8285-bc0670f9fa70","Type":"ContainerDied","Data":"7eeba413afa414741f757ff3428058bd25d0dc37abca39d9ead43e6f55424743"} Nov 29 08:17:04 crc kubenswrapper[4731]: I1129 08:17:04.191759 4731 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 29 08:17:06 crc kubenswrapper[4731]: I1129 08:17:06.210386 4731 generic.go:334] "Generic (PLEG): container finished" podID="46f3a082-ceaf-4422-8285-bc0670f9fa70" containerID="73b5a91eafc623a073d28c04be2bc433365f25785e25c8927693164ad992ee4e" exitCode=0 Nov 29 08:17:06 crc kubenswrapper[4731]: I1129 08:17:06.210631 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h99jp" event={"ID":"46f3a082-ceaf-4422-8285-bc0670f9fa70","Type":"ContainerDied","Data":"73b5a91eafc623a073d28c04be2bc433365f25785e25c8927693164ad992ee4e"} Nov 29 08:17:07 crc kubenswrapper[4731]: I1129 08:17:07.068917 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8ww9d"] Nov 29 08:17:07 crc kubenswrapper[4731]: I1129 08:17:07.071699 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8ww9d" Nov 29 08:17:07 crc kubenswrapper[4731]: I1129 08:17:07.082579 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a9f200f-2f8d-48b3-abd0-e49b8d08b8a7-utilities\") pod \"community-operators-8ww9d\" (UID: \"1a9f200f-2f8d-48b3-abd0-e49b8d08b8a7\") " pod="openshift-marketplace/community-operators-8ww9d" Nov 29 08:17:07 crc kubenswrapper[4731]: I1129 08:17:07.082662 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a9f200f-2f8d-48b3-abd0-e49b8d08b8a7-catalog-content\") pod \"community-operators-8ww9d\" (UID: \"1a9f200f-2f8d-48b3-abd0-e49b8d08b8a7\") " pod="openshift-marketplace/community-operators-8ww9d" Nov 29 08:17:07 crc kubenswrapper[4731]: I1129 08:17:07.082738 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6l5v7\" (UniqueName: \"kubernetes.io/projected/1a9f200f-2f8d-48b3-abd0-e49b8d08b8a7-kube-api-access-6l5v7\") pod \"community-operators-8ww9d\" (UID: \"1a9f200f-2f8d-48b3-abd0-e49b8d08b8a7\") " pod="openshift-marketplace/community-operators-8ww9d" Nov 29 08:17:07 crc kubenswrapper[4731]: I1129 08:17:07.083501 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8ww9d"] Nov 29 08:17:07 crc kubenswrapper[4731]: I1129 08:17:07.186916 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a9f200f-2f8d-48b3-abd0-e49b8d08b8a7-utilities\") pod \"community-operators-8ww9d\" (UID: \"1a9f200f-2f8d-48b3-abd0-e49b8d08b8a7\") " pod="openshift-marketplace/community-operators-8ww9d" Nov 29 08:17:07 crc kubenswrapper[4731]: I1129 08:17:07.186986 4731 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a9f200f-2f8d-48b3-abd0-e49b8d08b8a7-catalog-content\") pod \"community-operators-8ww9d\" (UID: \"1a9f200f-2f8d-48b3-abd0-e49b8d08b8a7\") " pod="openshift-marketplace/community-operators-8ww9d" Nov 29 08:17:07 crc kubenswrapper[4731]: I1129 08:17:07.187056 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6l5v7\" (UniqueName: \"kubernetes.io/projected/1a9f200f-2f8d-48b3-abd0-e49b8d08b8a7-kube-api-access-6l5v7\") pod \"community-operators-8ww9d\" (UID: \"1a9f200f-2f8d-48b3-abd0-e49b8d08b8a7\") " pod="openshift-marketplace/community-operators-8ww9d" Nov 29 08:17:07 crc kubenswrapper[4731]: I1129 08:17:07.187955 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a9f200f-2f8d-48b3-abd0-e49b8d08b8a7-utilities\") pod \"community-operators-8ww9d\" (UID: \"1a9f200f-2f8d-48b3-abd0-e49b8d08b8a7\") " pod="openshift-marketplace/community-operators-8ww9d" Nov 29 08:17:07 crc kubenswrapper[4731]: I1129 08:17:07.188096 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a9f200f-2f8d-48b3-abd0-e49b8d08b8a7-catalog-content\") pod \"community-operators-8ww9d\" (UID: \"1a9f200f-2f8d-48b3-abd0-e49b8d08b8a7\") " pod="openshift-marketplace/community-operators-8ww9d" Nov 29 08:17:07 crc kubenswrapper[4731]: I1129 08:17:07.209625 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6l5v7\" (UniqueName: \"kubernetes.io/projected/1a9f200f-2f8d-48b3-abd0-e49b8d08b8a7-kube-api-access-6l5v7\") pod \"community-operators-8ww9d\" (UID: \"1a9f200f-2f8d-48b3-abd0-e49b8d08b8a7\") " pod="openshift-marketplace/community-operators-8ww9d" Nov 29 08:17:07 crc kubenswrapper[4731]: I1129 08:17:07.228181 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-h99jp" event={"ID":"46f3a082-ceaf-4422-8285-bc0670f9fa70","Type":"ContainerStarted","Data":"87bdba50bef828b96d83845f36ea74050fd942755ed38533488e2732bca77ebc"} Nov 29 08:17:07 crc kubenswrapper[4731]: I1129 08:17:07.253092 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-h99jp" podStartSLOduration=2.5278322319999997 podStartE2EDuration="5.253066273s" podCreationTimestamp="2025-11-29 08:17:02 +0000 UTC" firstStartedPulling="2025-11-29 08:17:04.191493747 +0000 UTC m=+4263.081854850" lastFinishedPulling="2025-11-29 08:17:06.916727778 +0000 UTC m=+4265.807088891" observedRunningTime="2025-11-29 08:17:07.247283957 +0000 UTC m=+4266.137645080" watchObservedRunningTime="2025-11-29 08:17:07.253066273 +0000 UTC m=+4266.143427376" Nov 29 08:17:07 crc kubenswrapper[4731]: I1129 08:17:07.416727 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8ww9d" Nov 29 08:17:08 crc kubenswrapper[4731]: I1129 08:17:08.005070 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8ww9d"] Nov 29 08:17:08 crc kubenswrapper[4731]: I1129 08:17:08.237936 4731 generic.go:334] "Generic (PLEG): container finished" podID="1a9f200f-2f8d-48b3-abd0-e49b8d08b8a7" containerID="b7938ae4ea85f06a7ae1445158255e79762ec154bc62d82c9a4c390b53d638c1" exitCode=0 Nov 29 08:17:08 crc kubenswrapper[4731]: I1129 08:17:08.238808 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8ww9d" event={"ID":"1a9f200f-2f8d-48b3-abd0-e49b8d08b8a7","Type":"ContainerDied","Data":"b7938ae4ea85f06a7ae1445158255e79762ec154bc62d82c9a4c390b53d638c1"} Nov 29 08:17:08 crc kubenswrapper[4731]: I1129 08:17:08.238860 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8ww9d" 
event={"ID":"1a9f200f-2f8d-48b3-abd0-e49b8d08b8a7","Type":"ContainerStarted","Data":"83c5c05b33a1b0ceab7a38599b7e971995bb33de420f36e9dd46924563c66182"} Nov 29 08:17:09 crc kubenswrapper[4731]: I1129 08:17:09.249781 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8ww9d" event={"ID":"1a9f200f-2f8d-48b3-abd0-e49b8d08b8a7","Type":"ContainerStarted","Data":"ec857ce83bd539847ee3878140ba088b8465b7e37a3806869e7492657bfe9f9e"} Nov 29 08:17:10 crc kubenswrapper[4731]: I1129 08:17:10.262183 4731 generic.go:334] "Generic (PLEG): container finished" podID="1a9f200f-2f8d-48b3-abd0-e49b8d08b8a7" containerID="ec857ce83bd539847ee3878140ba088b8465b7e37a3806869e7492657bfe9f9e" exitCode=0 Nov 29 08:17:10 crc kubenswrapper[4731]: I1129 08:17:10.262267 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8ww9d" event={"ID":"1a9f200f-2f8d-48b3-abd0-e49b8d08b8a7","Type":"ContainerDied","Data":"ec857ce83bd539847ee3878140ba088b8465b7e37a3806869e7492657bfe9f9e"} Nov 29 08:17:11 crc kubenswrapper[4731]: I1129 08:17:11.273859 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8ww9d" event={"ID":"1a9f200f-2f8d-48b3-abd0-e49b8d08b8a7","Type":"ContainerStarted","Data":"ad61814cd0f6c277959570c8cb266ad718480d74e853f92e5a95a2d5bc1ec7d8"} Nov 29 08:17:11 crc kubenswrapper[4731]: I1129 08:17:11.308286 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8ww9d" podStartSLOduration=1.5858844840000001 podStartE2EDuration="4.308257593s" podCreationTimestamp="2025-11-29 08:17:07 +0000 UTC" firstStartedPulling="2025-11-29 08:17:08.239926173 +0000 UTC m=+4267.130287276" lastFinishedPulling="2025-11-29 08:17:10.962299282 +0000 UTC m=+4269.852660385" observedRunningTime="2025-11-29 08:17:11.298456622 +0000 UTC m=+4270.188817725" watchObservedRunningTime="2025-11-29 08:17:11.308257593 +0000 UTC 
m=+4270.198618696" Nov 29 08:17:12 crc kubenswrapper[4731]: I1129 08:17:12.622212 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-h99jp" Nov 29 08:17:12 crc kubenswrapper[4731]: I1129 08:17:12.622604 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-h99jp" Nov 29 08:17:12 crc kubenswrapper[4731]: I1129 08:17:12.687880 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-h99jp" Nov 29 08:17:13 crc kubenswrapper[4731]: I1129 08:17:13.335827 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-h99jp" Nov 29 08:17:13 crc kubenswrapper[4731]: I1129 08:17:13.882923 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-h99jp"] Nov 29 08:17:15 crc kubenswrapper[4731]: I1129 08:17:15.310740 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-h99jp" podUID="46f3a082-ceaf-4422-8285-bc0670f9fa70" containerName="registry-server" containerID="cri-o://87bdba50bef828b96d83845f36ea74050fd942755ed38533488e2732bca77ebc" gracePeriod=2 Nov 29 08:17:15 crc kubenswrapper[4731]: I1129 08:17:15.751683 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-h99jp" Nov 29 08:17:15 crc kubenswrapper[4731]: I1129 08:17:15.859919 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxrdd\" (UniqueName: \"kubernetes.io/projected/46f3a082-ceaf-4422-8285-bc0670f9fa70-kube-api-access-dxrdd\") pod \"46f3a082-ceaf-4422-8285-bc0670f9fa70\" (UID: \"46f3a082-ceaf-4422-8285-bc0670f9fa70\") " Nov 29 08:17:15 crc kubenswrapper[4731]: I1129 08:17:15.860018 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46f3a082-ceaf-4422-8285-bc0670f9fa70-catalog-content\") pod \"46f3a082-ceaf-4422-8285-bc0670f9fa70\" (UID: \"46f3a082-ceaf-4422-8285-bc0670f9fa70\") " Nov 29 08:17:15 crc kubenswrapper[4731]: I1129 08:17:15.860195 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46f3a082-ceaf-4422-8285-bc0670f9fa70-utilities\") pod \"46f3a082-ceaf-4422-8285-bc0670f9fa70\" (UID: \"46f3a082-ceaf-4422-8285-bc0670f9fa70\") " Nov 29 08:17:15 crc kubenswrapper[4731]: I1129 08:17:15.861281 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46f3a082-ceaf-4422-8285-bc0670f9fa70-utilities" (OuterVolumeSpecName: "utilities") pod "46f3a082-ceaf-4422-8285-bc0670f9fa70" (UID: "46f3a082-ceaf-4422-8285-bc0670f9fa70"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:17:15 crc kubenswrapper[4731]: I1129 08:17:15.861854 4731 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46f3a082-ceaf-4422-8285-bc0670f9fa70-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 08:17:15 crc kubenswrapper[4731]: I1129 08:17:15.868426 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46f3a082-ceaf-4422-8285-bc0670f9fa70-kube-api-access-dxrdd" (OuterVolumeSpecName: "kube-api-access-dxrdd") pod "46f3a082-ceaf-4422-8285-bc0670f9fa70" (UID: "46f3a082-ceaf-4422-8285-bc0670f9fa70"). InnerVolumeSpecName "kube-api-access-dxrdd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:17:15 crc kubenswrapper[4731]: I1129 08:17:15.964689 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dxrdd\" (UniqueName: \"kubernetes.io/projected/46f3a082-ceaf-4422-8285-bc0670f9fa70-kube-api-access-dxrdd\") on node \"crc\" DevicePath \"\"" Nov 29 08:17:16 crc kubenswrapper[4731]: I1129 08:17:16.052629 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46f3a082-ceaf-4422-8285-bc0670f9fa70-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "46f3a082-ceaf-4422-8285-bc0670f9fa70" (UID: "46f3a082-ceaf-4422-8285-bc0670f9fa70"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:17:16 crc kubenswrapper[4731]: I1129 08:17:16.065932 4731 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46f3a082-ceaf-4422-8285-bc0670f9fa70-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 08:17:16 crc kubenswrapper[4731]: I1129 08:17:16.321677 4731 generic.go:334] "Generic (PLEG): container finished" podID="46f3a082-ceaf-4422-8285-bc0670f9fa70" containerID="87bdba50bef828b96d83845f36ea74050fd942755ed38533488e2732bca77ebc" exitCode=0 Nov 29 08:17:16 crc kubenswrapper[4731]: I1129 08:17:16.321733 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h99jp" event={"ID":"46f3a082-ceaf-4422-8285-bc0670f9fa70","Type":"ContainerDied","Data":"87bdba50bef828b96d83845f36ea74050fd942755ed38533488e2732bca77ebc"} Nov 29 08:17:16 crc kubenswrapper[4731]: I1129 08:17:16.321769 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h99jp" event={"ID":"46f3a082-ceaf-4422-8285-bc0670f9fa70","Type":"ContainerDied","Data":"63bad34326accae28ce51617180ec59766c0fc8e6ffdbb09a4c8380c8ee0ce14"} Nov 29 08:17:16 crc kubenswrapper[4731]: I1129 08:17:16.321799 4731 scope.go:117] "RemoveContainer" containerID="87bdba50bef828b96d83845f36ea74050fd942755ed38533488e2732bca77ebc" Nov 29 08:17:16 crc kubenswrapper[4731]: I1129 08:17:16.321970 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-h99jp" Nov 29 08:17:16 crc kubenswrapper[4731]: I1129 08:17:16.341379 4731 scope.go:117] "RemoveContainer" containerID="73b5a91eafc623a073d28c04be2bc433365f25785e25c8927693164ad992ee4e" Nov 29 08:17:16 crc kubenswrapper[4731]: I1129 08:17:16.362697 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-h99jp"] Nov 29 08:17:16 crc kubenswrapper[4731]: I1129 08:17:16.374410 4731 scope.go:117] "RemoveContainer" containerID="7eeba413afa414741f757ff3428058bd25d0dc37abca39d9ead43e6f55424743" Nov 29 08:17:16 crc kubenswrapper[4731]: I1129 08:17:16.377014 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-h99jp"] Nov 29 08:17:16 crc kubenswrapper[4731]: I1129 08:17:16.418880 4731 scope.go:117] "RemoveContainer" containerID="87bdba50bef828b96d83845f36ea74050fd942755ed38533488e2732bca77ebc" Nov 29 08:17:16 crc kubenswrapper[4731]: E1129 08:17:16.419872 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87bdba50bef828b96d83845f36ea74050fd942755ed38533488e2732bca77ebc\": container with ID starting with 87bdba50bef828b96d83845f36ea74050fd942755ed38533488e2732bca77ebc not found: ID does not exist" containerID="87bdba50bef828b96d83845f36ea74050fd942755ed38533488e2732bca77ebc" Nov 29 08:17:16 crc kubenswrapper[4731]: I1129 08:17:16.419915 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87bdba50bef828b96d83845f36ea74050fd942755ed38533488e2732bca77ebc"} err="failed to get container status \"87bdba50bef828b96d83845f36ea74050fd942755ed38533488e2732bca77ebc\": rpc error: code = NotFound desc = could not find container \"87bdba50bef828b96d83845f36ea74050fd942755ed38533488e2732bca77ebc\": container with ID starting with 87bdba50bef828b96d83845f36ea74050fd942755ed38533488e2732bca77ebc not 
found: ID does not exist" Nov 29 08:17:16 crc kubenswrapper[4731]: I1129 08:17:16.419946 4731 scope.go:117] "RemoveContainer" containerID="73b5a91eafc623a073d28c04be2bc433365f25785e25c8927693164ad992ee4e" Nov 29 08:17:16 crc kubenswrapper[4731]: E1129 08:17:16.420719 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73b5a91eafc623a073d28c04be2bc433365f25785e25c8927693164ad992ee4e\": container with ID starting with 73b5a91eafc623a073d28c04be2bc433365f25785e25c8927693164ad992ee4e not found: ID does not exist" containerID="73b5a91eafc623a073d28c04be2bc433365f25785e25c8927693164ad992ee4e" Nov 29 08:17:16 crc kubenswrapper[4731]: I1129 08:17:16.420752 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73b5a91eafc623a073d28c04be2bc433365f25785e25c8927693164ad992ee4e"} err="failed to get container status \"73b5a91eafc623a073d28c04be2bc433365f25785e25c8927693164ad992ee4e\": rpc error: code = NotFound desc = could not find container \"73b5a91eafc623a073d28c04be2bc433365f25785e25c8927693164ad992ee4e\": container with ID starting with 73b5a91eafc623a073d28c04be2bc433365f25785e25c8927693164ad992ee4e not found: ID does not exist" Nov 29 08:17:16 crc kubenswrapper[4731]: I1129 08:17:16.420775 4731 scope.go:117] "RemoveContainer" containerID="7eeba413afa414741f757ff3428058bd25d0dc37abca39d9ead43e6f55424743" Nov 29 08:17:16 crc kubenswrapper[4731]: E1129 08:17:16.421316 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7eeba413afa414741f757ff3428058bd25d0dc37abca39d9ead43e6f55424743\": container with ID starting with 7eeba413afa414741f757ff3428058bd25d0dc37abca39d9ead43e6f55424743 not found: ID does not exist" containerID="7eeba413afa414741f757ff3428058bd25d0dc37abca39d9ead43e6f55424743" Nov 29 08:17:16 crc kubenswrapper[4731]: I1129 08:17:16.421371 4731 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7eeba413afa414741f757ff3428058bd25d0dc37abca39d9ead43e6f55424743"} err="failed to get container status \"7eeba413afa414741f757ff3428058bd25d0dc37abca39d9ead43e6f55424743\": rpc error: code = NotFound desc = could not find container \"7eeba413afa414741f757ff3428058bd25d0dc37abca39d9ead43e6f55424743\": container with ID starting with 7eeba413afa414741f757ff3428058bd25d0dc37abca39d9ead43e6f55424743 not found: ID does not exist" Nov 29 08:17:17 crc kubenswrapper[4731]: I1129 08:17:17.417863 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8ww9d" Nov 29 08:17:17 crc kubenswrapper[4731]: I1129 08:17:17.418210 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8ww9d" Nov 29 08:17:17 crc kubenswrapper[4731]: I1129 08:17:17.482433 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8ww9d" Nov 29 08:17:17 crc kubenswrapper[4731]: I1129 08:17:17.847916 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46f3a082-ceaf-4422-8285-bc0670f9fa70" path="/var/lib/kubelet/pods/46f3a082-ceaf-4422-8285-bc0670f9fa70/volumes" Nov 29 08:17:18 crc kubenswrapper[4731]: I1129 08:17:18.813825 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8ww9d" Nov 29 08:17:19 crc kubenswrapper[4731]: I1129 08:17:19.253275 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8ww9d"] Nov 29 08:17:20 crc kubenswrapper[4731]: I1129 08:17:20.360139 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-8ww9d" podUID="1a9f200f-2f8d-48b3-abd0-e49b8d08b8a7" containerName="registry-server" 
containerID="cri-o://ad61814cd0f6c277959570c8cb266ad718480d74e853f92e5a95a2d5bc1ec7d8" gracePeriod=2 Nov 29 08:17:21 crc kubenswrapper[4731]: I1129 08:17:21.374824 4731 generic.go:334] "Generic (PLEG): container finished" podID="1a9f200f-2f8d-48b3-abd0-e49b8d08b8a7" containerID="ad61814cd0f6c277959570c8cb266ad718480d74e853f92e5a95a2d5bc1ec7d8" exitCode=0 Nov 29 08:17:21 crc kubenswrapper[4731]: I1129 08:17:21.374913 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8ww9d" event={"ID":"1a9f200f-2f8d-48b3-abd0-e49b8d08b8a7","Type":"ContainerDied","Data":"ad61814cd0f6c277959570c8cb266ad718480d74e853f92e5a95a2d5bc1ec7d8"} Nov 29 08:17:21 crc kubenswrapper[4731]: I1129 08:17:21.740933 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8ww9d" Nov 29 08:17:21 crc kubenswrapper[4731]: I1129 08:17:21.894380 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a9f200f-2f8d-48b3-abd0-e49b8d08b8a7-catalog-content\") pod \"1a9f200f-2f8d-48b3-abd0-e49b8d08b8a7\" (UID: \"1a9f200f-2f8d-48b3-abd0-e49b8d08b8a7\") " Nov 29 08:17:21 crc kubenswrapper[4731]: I1129 08:17:21.894556 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a9f200f-2f8d-48b3-abd0-e49b8d08b8a7-utilities\") pod \"1a9f200f-2f8d-48b3-abd0-e49b8d08b8a7\" (UID: \"1a9f200f-2f8d-48b3-abd0-e49b8d08b8a7\") " Nov 29 08:17:21 crc kubenswrapper[4731]: I1129 08:17:21.894672 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6l5v7\" (UniqueName: \"kubernetes.io/projected/1a9f200f-2f8d-48b3-abd0-e49b8d08b8a7-kube-api-access-6l5v7\") pod \"1a9f200f-2f8d-48b3-abd0-e49b8d08b8a7\" (UID: \"1a9f200f-2f8d-48b3-abd0-e49b8d08b8a7\") " Nov 29 08:17:21 crc kubenswrapper[4731]: I1129 
08:17:21.895407 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a9f200f-2f8d-48b3-abd0-e49b8d08b8a7-utilities" (OuterVolumeSpecName: "utilities") pod "1a9f200f-2f8d-48b3-abd0-e49b8d08b8a7" (UID: "1a9f200f-2f8d-48b3-abd0-e49b8d08b8a7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:17:21 crc kubenswrapper[4731]: I1129 08:17:21.905971 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a9f200f-2f8d-48b3-abd0-e49b8d08b8a7-kube-api-access-6l5v7" (OuterVolumeSpecName: "kube-api-access-6l5v7") pod "1a9f200f-2f8d-48b3-abd0-e49b8d08b8a7" (UID: "1a9f200f-2f8d-48b3-abd0-e49b8d08b8a7"). InnerVolumeSpecName "kube-api-access-6l5v7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:17:21 crc kubenswrapper[4731]: I1129 08:17:21.948430 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a9f200f-2f8d-48b3-abd0-e49b8d08b8a7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1a9f200f-2f8d-48b3-abd0-e49b8d08b8a7" (UID: "1a9f200f-2f8d-48b3-abd0-e49b8d08b8a7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:17:21 crc kubenswrapper[4731]: I1129 08:17:21.996719 4731 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a9f200f-2f8d-48b3-abd0-e49b8d08b8a7-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 08:17:21 crc kubenswrapper[4731]: I1129 08:17:21.996760 4731 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a9f200f-2f8d-48b3-abd0-e49b8d08b8a7-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 08:17:21 crc kubenswrapper[4731]: I1129 08:17:21.996771 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6l5v7\" (UniqueName: \"kubernetes.io/projected/1a9f200f-2f8d-48b3-abd0-e49b8d08b8a7-kube-api-access-6l5v7\") on node \"crc\" DevicePath \"\"" Nov 29 08:17:22 crc kubenswrapper[4731]: I1129 08:17:22.386490 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8ww9d" event={"ID":"1a9f200f-2f8d-48b3-abd0-e49b8d08b8a7","Type":"ContainerDied","Data":"83c5c05b33a1b0ceab7a38599b7e971995bb33de420f36e9dd46924563c66182"} Nov 29 08:17:22 crc kubenswrapper[4731]: I1129 08:17:22.386545 4731 scope.go:117] "RemoveContainer" containerID="ad61814cd0f6c277959570c8cb266ad718480d74e853f92e5a95a2d5bc1ec7d8" Nov 29 08:17:22 crc kubenswrapper[4731]: I1129 08:17:22.386761 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8ww9d" Nov 29 08:17:22 crc kubenswrapper[4731]: I1129 08:17:22.417186 4731 scope.go:117] "RemoveContainer" containerID="ec857ce83bd539847ee3878140ba088b8465b7e37a3806869e7492657bfe9f9e" Nov 29 08:17:22 crc kubenswrapper[4731]: I1129 08:17:22.456313 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8ww9d"] Nov 29 08:17:22 crc kubenswrapper[4731]: I1129 08:17:22.459232 4731 scope.go:117] "RemoveContainer" containerID="b7938ae4ea85f06a7ae1445158255e79762ec154bc62d82c9a4c390b53d638c1" Nov 29 08:17:22 crc kubenswrapper[4731]: I1129 08:17:22.469490 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-8ww9d"] Nov 29 08:17:22 crc kubenswrapper[4731]: I1129 08:17:22.978049 4731 scope.go:117] "RemoveContainer" containerID="f08c9dd9b5ed63769aff51927b41f7fdc1de8b332fd29db2cfa5eac0accb7c52" Nov 29 08:17:23 crc kubenswrapper[4731]: I1129 08:17:23.820294 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a9f200f-2f8d-48b3-abd0-e49b8d08b8a7" path="/var/lib/kubelet/pods/1a9f200f-2f8d-48b3-abd0-e49b8d08b8a7/volumes" Nov 29 08:17:33 crc kubenswrapper[4731]: I1129 08:17:33.002855 4731 patch_prober.go:28] interesting pod/machine-config-daemon-rscr8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 08:17:33 crc kubenswrapper[4731]: I1129 08:17:33.003661 4731 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 08:17:33 crc kubenswrapper[4731]: 
I1129 08:17:33.003747 4731 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" Nov 29 08:17:33 crc kubenswrapper[4731]: I1129 08:17:33.004882 4731 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f5754b15ed83166749f0830c20827dcfa637c0959aa7031c331044ed01c74bed"} pod="openshift-machine-config-operator/machine-config-daemon-rscr8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 29 08:17:33 crc kubenswrapper[4731]: I1129 08:17:33.004973 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" containerName="machine-config-daemon" containerID="cri-o://f5754b15ed83166749f0830c20827dcfa637c0959aa7031c331044ed01c74bed" gracePeriod=600 Nov 29 08:17:33 crc kubenswrapper[4731]: E1129 08:17:33.139920 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" Nov 29 08:17:33 crc kubenswrapper[4731]: I1129 08:17:33.497807 4731 generic.go:334] "Generic (PLEG): container finished" podID="2302dbb7-38db-4752-a5d0-2d055da3aec3" containerID="f5754b15ed83166749f0830c20827dcfa637c0959aa7031c331044ed01c74bed" exitCode=0 Nov 29 08:17:33 crc kubenswrapper[4731]: I1129 08:17:33.497865 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" 
event={"ID":"2302dbb7-38db-4752-a5d0-2d055da3aec3","Type":"ContainerDied","Data":"f5754b15ed83166749f0830c20827dcfa637c0959aa7031c331044ed01c74bed"} Nov 29 08:17:33 crc kubenswrapper[4731]: I1129 08:17:33.497910 4731 scope.go:117] "RemoveContainer" containerID="607d6adc71fd03ad8796c2f2c18f0bffcc7e369862c2d387eb5552ab82f9242f" Nov 29 08:17:33 crc kubenswrapper[4731]: I1129 08:17:33.498744 4731 scope.go:117] "RemoveContainer" containerID="f5754b15ed83166749f0830c20827dcfa637c0959aa7031c331044ed01c74bed" Nov 29 08:17:33 crc kubenswrapper[4731]: E1129 08:17:33.499152 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" Nov 29 08:17:46 crc kubenswrapper[4731]: I1129 08:17:46.809329 4731 scope.go:117] "RemoveContainer" containerID="f5754b15ed83166749f0830c20827dcfa637c0959aa7031c331044ed01c74bed" Nov 29 08:17:46 crc kubenswrapper[4731]: E1129 08:17:46.810278 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" Nov 29 08:18:01 crc kubenswrapper[4731]: I1129 08:18:01.819243 4731 scope.go:117] "RemoveContainer" containerID="f5754b15ed83166749f0830c20827dcfa637c0959aa7031c331044ed01c74bed" Nov 29 08:18:01 crc kubenswrapper[4731]: E1129 08:18:01.820229 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" Nov 29 08:18:14 crc kubenswrapper[4731]: I1129 08:18:14.807373 4731 scope.go:117] "RemoveContainer" containerID="f5754b15ed83166749f0830c20827dcfa637c0959aa7031c331044ed01c74bed" Nov 29 08:18:14 crc kubenswrapper[4731]: E1129 08:18:14.808082 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" Nov 29 08:18:23 crc kubenswrapper[4731]: I1129 08:18:23.062006 4731 scope.go:117] "RemoveContainer" containerID="5dd956e8eeb2eb805c098e2a4147b222085641877b13844357668b3b339ef735" Nov 29 08:18:29 crc kubenswrapper[4731]: I1129 08:18:29.807268 4731 scope.go:117] "RemoveContainer" containerID="f5754b15ed83166749f0830c20827dcfa637c0959aa7031c331044ed01c74bed" Nov 29 08:18:29 crc kubenswrapper[4731]: E1129 08:18:29.808072 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" Nov 29 08:18:42 crc kubenswrapper[4731]: I1129 08:18:42.807229 4731 scope.go:117] "RemoveContainer" 
containerID="f5754b15ed83166749f0830c20827dcfa637c0959aa7031c331044ed01c74bed" Nov 29 08:18:42 crc kubenswrapper[4731]: E1129 08:18:42.808107 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3" Nov 29 08:18:48 crc kubenswrapper[4731]: I1129 08:18:48.876501 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-s45tl"] Nov 29 08:18:48 crc kubenswrapper[4731]: E1129 08:18:48.877553 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46f3a082-ceaf-4422-8285-bc0670f9fa70" containerName="registry-server" Nov 29 08:18:48 crc kubenswrapper[4731]: I1129 08:18:48.877594 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="46f3a082-ceaf-4422-8285-bc0670f9fa70" containerName="registry-server" Nov 29 08:18:48 crc kubenswrapper[4731]: E1129 08:18:48.877615 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a9f200f-2f8d-48b3-abd0-e49b8d08b8a7" containerName="extract-content" Nov 29 08:18:48 crc kubenswrapper[4731]: I1129 08:18:48.877623 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a9f200f-2f8d-48b3-abd0-e49b8d08b8a7" containerName="extract-content" Nov 29 08:18:48 crc kubenswrapper[4731]: E1129 08:18:48.877632 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a9f200f-2f8d-48b3-abd0-e49b8d08b8a7" containerName="extract-utilities" Nov 29 08:18:48 crc kubenswrapper[4731]: I1129 08:18:48.877639 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a9f200f-2f8d-48b3-abd0-e49b8d08b8a7" containerName="extract-utilities" Nov 29 08:18:48 crc kubenswrapper[4731]: E1129 08:18:48.877648 
4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46f3a082-ceaf-4422-8285-bc0670f9fa70" containerName="extract-content" Nov 29 08:18:48 crc kubenswrapper[4731]: I1129 08:18:48.877654 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="46f3a082-ceaf-4422-8285-bc0670f9fa70" containerName="extract-content" Nov 29 08:18:48 crc kubenswrapper[4731]: E1129 08:18:48.877664 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46f3a082-ceaf-4422-8285-bc0670f9fa70" containerName="extract-utilities" Nov 29 08:18:48 crc kubenswrapper[4731]: I1129 08:18:48.877669 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="46f3a082-ceaf-4422-8285-bc0670f9fa70" containerName="extract-utilities" Nov 29 08:18:48 crc kubenswrapper[4731]: E1129 08:18:48.877695 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a9f200f-2f8d-48b3-abd0-e49b8d08b8a7" containerName="registry-server" Nov 29 08:18:48 crc kubenswrapper[4731]: I1129 08:18:48.877702 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a9f200f-2f8d-48b3-abd0-e49b8d08b8a7" containerName="registry-server" Nov 29 08:18:48 crc kubenswrapper[4731]: I1129 08:18:48.877889 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a9f200f-2f8d-48b3-abd0-e49b8d08b8a7" containerName="registry-server" Nov 29 08:18:48 crc kubenswrapper[4731]: I1129 08:18:48.877904 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="46f3a082-ceaf-4422-8285-bc0670f9fa70" containerName="registry-server" Nov 29 08:18:48 crc kubenswrapper[4731]: I1129 08:18:48.879359 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s45tl" Nov 29 08:18:48 crc kubenswrapper[4731]: I1129 08:18:48.891359 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-s45tl"] Nov 29 08:18:49 crc kubenswrapper[4731]: I1129 08:18:49.054475 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4275145c-a51d-4720-a53c-856409e2474d-catalog-content\") pod \"redhat-marketplace-s45tl\" (UID: \"4275145c-a51d-4720-a53c-856409e2474d\") " pod="openshift-marketplace/redhat-marketplace-s45tl" Nov 29 08:18:49 crc kubenswrapper[4731]: I1129 08:18:49.054604 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4275145c-a51d-4720-a53c-856409e2474d-utilities\") pod \"redhat-marketplace-s45tl\" (UID: \"4275145c-a51d-4720-a53c-856409e2474d\") " pod="openshift-marketplace/redhat-marketplace-s45tl" Nov 29 08:18:49 crc kubenswrapper[4731]: I1129 08:18:49.055107 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bm8z\" (UniqueName: \"kubernetes.io/projected/4275145c-a51d-4720-a53c-856409e2474d-kube-api-access-5bm8z\") pod \"redhat-marketplace-s45tl\" (UID: \"4275145c-a51d-4720-a53c-856409e2474d\") " pod="openshift-marketplace/redhat-marketplace-s45tl" Nov 29 08:18:49 crc kubenswrapper[4731]: I1129 08:18:49.156611 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bm8z\" (UniqueName: \"kubernetes.io/projected/4275145c-a51d-4720-a53c-856409e2474d-kube-api-access-5bm8z\") pod \"redhat-marketplace-s45tl\" (UID: \"4275145c-a51d-4720-a53c-856409e2474d\") " pod="openshift-marketplace/redhat-marketplace-s45tl" Nov 29 08:18:49 crc kubenswrapper[4731]: I1129 08:18:49.156682 4731 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4275145c-a51d-4720-a53c-856409e2474d-catalog-content\") pod \"redhat-marketplace-s45tl\" (UID: \"4275145c-a51d-4720-a53c-856409e2474d\") " pod="openshift-marketplace/redhat-marketplace-s45tl"
Nov 29 08:18:49 crc kubenswrapper[4731]: I1129 08:18:49.156735 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4275145c-a51d-4720-a53c-856409e2474d-utilities\") pod \"redhat-marketplace-s45tl\" (UID: \"4275145c-a51d-4720-a53c-856409e2474d\") " pod="openshift-marketplace/redhat-marketplace-s45tl"
Nov 29 08:18:49 crc kubenswrapper[4731]: I1129 08:18:49.157301 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4275145c-a51d-4720-a53c-856409e2474d-utilities\") pod \"redhat-marketplace-s45tl\" (UID: \"4275145c-a51d-4720-a53c-856409e2474d\") " pod="openshift-marketplace/redhat-marketplace-s45tl"
Nov 29 08:18:49 crc kubenswrapper[4731]: I1129 08:18:49.157791 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4275145c-a51d-4720-a53c-856409e2474d-catalog-content\") pod \"redhat-marketplace-s45tl\" (UID: \"4275145c-a51d-4720-a53c-856409e2474d\") " pod="openshift-marketplace/redhat-marketplace-s45tl"
Nov 29 08:18:49 crc kubenswrapper[4731]: I1129 08:18:49.178399 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bm8z\" (UniqueName: \"kubernetes.io/projected/4275145c-a51d-4720-a53c-856409e2474d-kube-api-access-5bm8z\") pod \"redhat-marketplace-s45tl\" (UID: \"4275145c-a51d-4720-a53c-856409e2474d\") " pod="openshift-marketplace/redhat-marketplace-s45tl"
Nov 29 08:18:49 crc kubenswrapper[4731]: I1129 08:18:49.214671 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s45tl"
Nov 29 08:18:49 crc kubenswrapper[4731]: I1129 08:18:49.697638 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-s45tl"]
Nov 29 08:18:50 crc kubenswrapper[4731]: I1129 08:18:50.307418 4731 generic.go:334] "Generic (PLEG): container finished" podID="4275145c-a51d-4720-a53c-856409e2474d" containerID="1891a677dbaee92b7d4a8bde2861b2c05b887b2874ffb033c4e5f1c5cd0e516f" exitCode=0
Nov 29 08:18:50 crc kubenswrapper[4731]: I1129 08:18:50.307621 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s45tl" event={"ID":"4275145c-a51d-4720-a53c-856409e2474d","Type":"ContainerDied","Data":"1891a677dbaee92b7d4a8bde2861b2c05b887b2874ffb033c4e5f1c5cd0e516f"}
Nov 29 08:18:50 crc kubenswrapper[4731]: I1129 08:18:50.307978 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s45tl" event={"ID":"4275145c-a51d-4720-a53c-856409e2474d","Type":"ContainerStarted","Data":"3d46673eea4ac272818c7a82a07fe403492df32e6cb006dbed74b66916959a70"}
Nov 29 08:18:51 crc kubenswrapper[4731]: I1129 08:18:51.319337 4731 generic.go:334] "Generic (PLEG): container finished" podID="4275145c-a51d-4720-a53c-856409e2474d" containerID="807d6bca8708802c41ef2d2310ef996fabad630966994b11f32905d0fb9f186f" exitCode=0
Nov 29 08:18:51 crc kubenswrapper[4731]: I1129 08:18:51.319435 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s45tl" event={"ID":"4275145c-a51d-4720-a53c-856409e2474d","Type":"ContainerDied","Data":"807d6bca8708802c41ef2d2310ef996fabad630966994b11f32905d0fb9f186f"}
Nov 29 08:18:52 crc kubenswrapper[4731]: I1129 08:18:52.330384 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s45tl" event={"ID":"4275145c-a51d-4720-a53c-856409e2474d","Type":"ContainerStarted","Data":"96368ab4f339639e486462a06692a9434d8864d80ecc09a40cff7e2f90d9ae40"}
Nov 29 08:18:52 crc kubenswrapper[4731]: I1129 08:18:52.346753 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-s45tl" podStartSLOduration=2.90737368 podStartE2EDuration="4.346725789s" podCreationTimestamp="2025-11-29 08:18:48 +0000 UTC" firstStartedPulling="2025-11-29 08:18:50.309747635 +0000 UTC m=+4369.200108748" lastFinishedPulling="2025-11-29 08:18:51.749099754 +0000 UTC m=+4370.639460857" observedRunningTime="2025-11-29 08:18:52.344106804 +0000 UTC m=+4371.234467907" watchObservedRunningTime="2025-11-29 08:18:52.346725789 +0000 UTC m=+4371.237086892"
Nov 29 08:18:53 crc kubenswrapper[4731]: I1129 08:18:53.807361 4731 scope.go:117] "RemoveContainer" containerID="f5754b15ed83166749f0830c20827dcfa637c0959aa7031c331044ed01c74bed"
Nov 29 08:18:53 crc kubenswrapper[4731]: E1129 08:18:53.807721 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3"
Nov 29 08:18:59 crc kubenswrapper[4731]: I1129 08:18:59.215776 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-s45tl"
Nov 29 08:18:59 crc kubenswrapper[4731]: I1129 08:18:59.216314 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-s45tl"
Nov 29 08:18:59 crc kubenswrapper[4731]: I1129 08:18:59.265711 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-s45tl"
Nov 29 08:18:59 crc kubenswrapper[4731]: I1129 08:18:59.471674 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-s45tl"
Nov 29 08:18:59 crc kubenswrapper[4731]: I1129 08:18:59.537462 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-s45tl"]
Nov 29 08:19:01 crc kubenswrapper[4731]: I1129 08:19:01.444693 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-s45tl" podUID="4275145c-a51d-4720-a53c-856409e2474d" containerName="registry-server" containerID="cri-o://96368ab4f339639e486462a06692a9434d8864d80ecc09a40cff7e2f90d9ae40" gracePeriod=2
Nov 29 08:19:01 crc kubenswrapper[4731]: I1129 08:19:01.908601 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s45tl"
Nov 29 08:19:02 crc kubenswrapper[4731]: I1129 08:19:02.033302 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4275145c-a51d-4720-a53c-856409e2474d-catalog-content\") pod \"4275145c-a51d-4720-a53c-856409e2474d\" (UID: \"4275145c-a51d-4720-a53c-856409e2474d\") "
Nov 29 08:19:02 crc kubenswrapper[4731]: I1129 08:19:02.033773 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5bm8z\" (UniqueName: \"kubernetes.io/projected/4275145c-a51d-4720-a53c-856409e2474d-kube-api-access-5bm8z\") pod \"4275145c-a51d-4720-a53c-856409e2474d\" (UID: \"4275145c-a51d-4720-a53c-856409e2474d\") "
Nov 29 08:19:02 crc kubenswrapper[4731]: I1129 08:19:02.034003 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4275145c-a51d-4720-a53c-856409e2474d-utilities\") pod \"4275145c-a51d-4720-a53c-856409e2474d\" (UID: \"4275145c-a51d-4720-a53c-856409e2474d\") "
Nov 29 08:19:02 crc kubenswrapper[4731]: I1129 08:19:02.034910 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4275145c-a51d-4720-a53c-856409e2474d-utilities" (OuterVolumeSpecName: "utilities") pod "4275145c-a51d-4720-a53c-856409e2474d" (UID: "4275145c-a51d-4720-a53c-856409e2474d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 29 08:19:02 crc kubenswrapper[4731]: I1129 08:19:02.042479 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4275145c-a51d-4720-a53c-856409e2474d-kube-api-access-5bm8z" (OuterVolumeSpecName: "kube-api-access-5bm8z") pod "4275145c-a51d-4720-a53c-856409e2474d" (UID: "4275145c-a51d-4720-a53c-856409e2474d"). InnerVolumeSpecName "kube-api-access-5bm8z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 29 08:19:02 crc kubenswrapper[4731]: I1129 08:19:02.052410 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4275145c-a51d-4720-a53c-856409e2474d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4275145c-a51d-4720-a53c-856409e2474d" (UID: "4275145c-a51d-4720-a53c-856409e2474d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 29 08:19:02 crc kubenswrapper[4731]: I1129 08:19:02.136016 4731 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4275145c-a51d-4720-a53c-856409e2474d-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 29 08:19:02 crc kubenswrapper[4731]: I1129 08:19:02.136050 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5bm8z\" (UniqueName: \"kubernetes.io/projected/4275145c-a51d-4720-a53c-856409e2474d-kube-api-access-5bm8z\") on node \"crc\" DevicePath \"\""
Nov 29 08:19:02 crc kubenswrapper[4731]: I1129 08:19:02.136062 4731 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4275145c-a51d-4720-a53c-856409e2474d-utilities\") on node \"crc\" DevicePath \"\""
Nov 29 08:19:02 crc kubenswrapper[4731]: I1129 08:19:02.456801 4731 generic.go:334] "Generic (PLEG): container finished" podID="4275145c-a51d-4720-a53c-856409e2474d" containerID="96368ab4f339639e486462a06692a9434d8864d80ecc09a40cff7e2f90d9ae40" exitCode=0
Nov 29 08:19:02 crc kubenswrapper[4731]: I1129 08:19:02.456855 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s45tl" event={"ID":"4275145c-a51d-4720-a53c-856409e2474d","Type":"ContainerDied","Data":"96368ab4f339639e486462a06692a9434d8864d80ecc09a40cff7e2f90d9ae40"}
Nov 29 08:19:02 crc kubenswrapper[4731]: I1129 08:19:02.456887 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s45tl" event={"ID":"4275145c-a51d-4720-a53c-856409e2474d","Type":"ContainerDied","Data":"3d46673eea4ac272818c7a82a07fe403492df32e6cb006dbed74b66916959a70"}
Nov 29 08:19:02 crc kubenswrapper[4731]: I1129 08:19:02.456905 4731 scope.go:117] "RemoveContainer" containerID="96368ab4f339639e486462a06692a9434d8864d80ecc09a40cff7e2f90d9ae40"
Nov 29 08:19:02 crc kubenswrapper[4731]: I1129 08:19:02.456919 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s45tl"
Nov 29 08:19:02 crc kubenswrapper[4731]: I1129 08:19:02.498096 4731 scope.go:117] "RemoveContainer" containerID="807d6bca8708802c41ef2d2310ef996fabad630966994b11f32905d0fb9f186f"
Nov 29 08:19:02 crc kubenswrapper[4731]: I1129 08:19:02.503889 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-s45tl"]
Nov 29 08:19:02 crc kubenswrapper[4731]: I1129 08:19:02.513890 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-s45tl"]
Nov 29 08:19:02 crc kubenswrapper[4731]: I1129 08:19:02.518727 4731 scope.go:117] "RemoveContainer" containerID="1891a677dbaee92b7d4a8bde2861b2c05b887b2874ffb033c4e5f1c5cd0e516f"
Nov 29 08:19:02 crc kubenswrapper[4731]: I1129 08:19:02.568192 4731 scope.go:117] "RemoveContainer" containerID="96368ab4f339639e486462a06692a9434d8864d80ecc09a40cff7e2f90d9ae40"
Nov 29 08:19:02 crc kubenswrapper[4731]: E1129 08:19:02.568792 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96368ab4f339639e486462a06692a9434d8864d80ecc09a40cff7e2f90d9ae40\": container with ID starting with 96368ab4f339639e486462a06692a9434d8864d80ecc09a40cff7e2f90d9ae40 not found: ID does not exist" containerID="96368ab4f339639e486462a06692a9434d8864d80ecc09a40cff7e2f90d9ae40"
Nov 29 08:19:02 crc kubenswrapper[4731]: I1129 08:19:02.568843 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96368ab4f339639e486462a06692a9434d8864d80ecc09a40cff7e2f90d9ae40"} err="failed to get container status \"96368ab4f339639e486462a06692a9434d8864d80ecc09a40cff7e2f90d9ae40\": rpc error: code = NotFound desc = could not find container \"96368ab4f339639e486462a06692a9434d8864d80ecc09a40cff7e2f90d9ae40\": container with ID starting with 96368ab4f339639e486462a06692a9434d8864d80ecc09a40cff7e2f90d9ae40 not found: ID does not exist"
Nov 29 08:19:02 crc kubenswrapper[4731]: I1129 08:19:02.568876 4731 scope.go:117] "RemoveContainer" containerID="807d6bca8708802c41ef2d2310ef996fabad630966994b11f32905d0fb9f186f"
Nov 29 08:19:02 crc kubenswrapper[4731]: E1129 08:19:02.569288 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"807d6bca8708802c41ef2d2310ef996fabad630966994b11f32905d0fb9f186f\": container with ID starting with 807d6bca8708802c41ef2d2310ef996fabad630966994b11f32905d0fb9f186f not found: ID does not exist" containerID="807d6bca8708802c41ef2d2310ef996fabad630966994b11f32905d0fb9f186f"
Nov 29 08:19:02 crc kubenswrapper[4731]: I1129 08:19:02.569347 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"807d6bca8708802c41ef2d2310ef996fabad630966994b11f32905d0fb9f186f"} err="failed to get container status \"807d6bca8708802c41ef2d2310ef996fabad630966994b11f32905d0fb9f186f\": rpc error: code = NotFound desc = could not find container \"807d6bca8708802c41ef2d2310ef996fabad630966994b11f32905d0fb9f186f\": container with ID starting with 807d6bca8708802c41ef2d2310ef996fabad630966994b11f32905d0fb9f186f not found: ID does not exist"
Nov 29 08:19:02 crc kubenswrapper[4731]: I1129 08:19:02.569382 4731 scope.go:117] "RemoveContainer" containerID="1891a677dbaee92b7d4a8bde2861b2c05b887b2874ffb033c4e5f1c5cd0e516f"
Nov 29 08:19:02 crc kubenswrapper[4731]: E1129 08:19:02.569786 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1891a677dbaee92b7d4a8bde2861b2c05b887b2874ffb033c4e5f1c5cd0e516f\": container with ID starting with 1891a677dbaee92b7d4a8bde2861b2c05b887b2874ffb033c4e5f1c5cd0e516f not found: ID does not exist" containerID="1891a677dbaee92b7d4a8bde2861b2c05b887b2874ffb033c4e5f1c5cd0e516f"
Nov 29 08:19:02 crc kubenswrapper[4731]: I1129 08:19:02.569814 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1891a677dbaee92b7d4a8bde2861b2c05b887b2874ffb033c4e5f1c5cd0e516f"} err="failed to get container status \"1891a677dbaee92b7d4a8bde2861b2c05b887b2874ffb033c4e5f1c5cd0e516f\": rpc error: code = NotFound desc = could not find container \"1891a677dbaee92b7d4a8bde2861b2c05b887b2874ffb033c4e5f1c5cd0e516f\": container with ID starting with 1891a677dbaee92b7d4a8bde2861b2c05b887b2874ffb033c4e5f1c5cd0e516f not found: ID does not exist"
Nov 29 08:19:03 crc kubenswrapper[4731]: I1129 08:19:03.818439 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4275145c-a51d-4720-a53c-856409e2474d" path="/var/lib/kubelet/pods/4275145c-a51d-4720-a53c-856409e2474d/volumes"
Nov 29 08:19:06 crc kubenswrapper[4731]: I1129 08:19:06.806418 4731 scope.go:117] "RemoveContainer" containerID="f5754b15ed83166749f0830c20827dcfa637c0959aa7031c331044ed01c74bed"
Nov 29 08:19:06 crc kubenswrapper[4731]: E1129 08:19:06.806938 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3"
Nov 29 08:19:18 crc kubenswrapper[4731]: I1129 08:19:18.807378 4731 scope.go:117] "RemoveContainer" containerID="f5754b15ed83166749f0830c20827dcfa637c0959aa7031c331044ed01c74bed"
Nov 29 08:19:18 crc kubenswrapper[4731]: E1129 08:19:18.808223 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3"
Nov 29 08:19:33 crc kubenswrapper[4731]: I1129 08:19:33.807409 4731 scope.go:117] "RemoveContainer" containerID="f5754b15ed83166749f0830c20827dcfa637c0959aa7031c331044ed01c74bed"
Nov 29 08:19:33 crc kubenswrapper[4731]: E1129 08:19:33.808400 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3"
Nov 29 08:19:47 crc kubenswrapper[4731]: I1129 08:19:47.807085 4731 scope.go:117] "RemoveContainer" containerID="f5754b15ed83166749f0830c20827dcfa637c0959aa7031c331044ed01c74bed"
Nov 29 08:19:47 crc kubenswrapper[4731]: E1129 08:19:47.807819 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3"
Nov 29 08:19:58 crc kubenswrapper[4731]: I1129 08:19:58.807155 4731 scope.go:117] "RemoveContainer" containerID="f5754b15ed83166749f0830c20827dcfa637c0959aa7031c331044ed01c74bed"
Nov 29 08:19:58 crc kubenswrapper[4731]: E1129 08:19:58.808036 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3"
Nov 29 08:20:12 crc kubenswrapper[4731]: I1129 08:20:12.806981 4731 scope.go:117] "RemoveContainer" containerID="f5754b15ed83166749f0830c20827dcfa637c0959aa7031c331044ed01c74bed"
Nov 29 08:20:12 crc kubenswrapper[4731]: E1129 08:20:12.808151 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3"
Nov 29 08:20:27 crc kubenswrapper[4731]: I1129 08:20:27.807977 4731 scope.go:117] "RemoveContainer" containerID="f5754b15ed83166749f0830c20827dcfa637c0959aa7031c331044ed01c74bed"
Nov 29 08:20:27 crc kubenswrapper[4731]: E1129 08:20:27.809000 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3"
Nov 29 08:20:41 crc kubenswrapper[4731]: I1129 08:20:41.824001 4731 scope.go:117] "RemoveContainer" containerID="f5754b15ed83166749f0830c20827dcfa637c0959aa7031c331044ed01c74bed"
Nov 29 08:20:41 crc kubenswrapper[4731]: E1129 08:20:41.824931 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3"
Nov 29 08:20:53 crc kubenswrapper[4731]: I1129 08:20:53.807165 4731 scope.go:117] "RemoveContainer" containerID="f5754b15ed83166749f0830c20827dcfa637c0959aa7031c331044ed01c74bed"
Nov 29 08:20:53 crc kubenswrapper[4731]: E1129 08:20:53.807976 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3"
Nov 29 08:21:04 crc kubenswrapper[4731]: I1129 08:21:04.807275 4731 scope.go:117] "RemoveContainer" containerID="f5754b15ed83166749f0830c20827dcfa637c0959aa7031c331044ed01c74bed"
Nov 29 08:21:04 crc kubenswrapper[4731]: E1129 08:21:04.808032 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3"
Nov 29 08:21:19 crc kubenswrapper[4731]: I1129 08:21:19.807751 4731 scope.go:117] "RemoveContainer" containerID="f5754b15ed83166749f0830c20827dcfa637c0959aa7031c331044ed01c74bed"
Nov 29 08:21:19 crc kubenswrapper[4731]: E1129 08:21:19.808858 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3"
Nov 29 08:21:32 crc kubenswrapper[4731]: I1129 08:21:32.807932 4731 scope.go:117] "RemoveContainer" containerID="f5754b15ed83166749f0830c20827dcfa637c0959aa7031c331044ed01c74bed"
Nov 29 08:21:32 crc kubenswrapper[4731]: E1129 08:21:32.808913 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rscr8_openshift-machine-config-operator(2302dbb7-38db-4752-a5d0-2d055da3aec3)\"" pod="openshift-machine-config-operator/machine-config-daemon-rscr8" podUID="2302dbb7-38db-4752-a5d0-2d055da3aec3"